OpenAI has secured a massive supply line to fuel its next phase of artificial intelligence development, striking a multi-year, multi-billion-dollar infrastructure deal that deepens its ties to Amazon.
In a new statement, Amazon Web Services (AWS) says it has signed a seven-year, $38 billion agreement with ChatGPT creator OpenAI.
Under the deal, AWS will give OpenAI access to hundreds of thousands of Nvidia GPUs, with capacity that can scale to tens of millions of CPUs. The compute is designed to serve training, inference and emerging agent frameworks, reflecting the sweeping infrastructure buildout underpinning the AI arms race.
The agreement commits both sides to a rapid deployment schedule: full capacity is targeted by the end of 2026, with further expansion into 2027, compressing years of buildout into a single execution sprint. AWS is clustering Nvidia’s GB200 and GB300 chips inside EC2 UltraServers, linking them across the same network fabric to form low-latency AI superclusters built to serve ChatGPT’s inference needs, train next-generation models and support other OpenAI workloads.
Says OpenAI CEO Sam Altman,
“Scaling frontier AI requires massive, reliable compute. Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
AWS CEO Matt Garman says the infrastructure will serve as the “backbone” for OpenAI’s AI ambitions.
“The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads.”
Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.