Amazon Web Services (AWS) and OpenAI have announced a multi-year strategic partnership that gives OpenAI expanded access to AWS infrastructure for advanced artificial intelligence workloads. Under the $38 billion agreement, OpenAI will run workloads on hundreds of thousands of NVIDIA GPUs via Amazon EC2 UltraServers, with the ability to expand to tens of millions of CPUs as needed.
Under the terms of the deal, OpenAI will begin using AWS compute resources immediately. All planned capacity is expected to be deployed by the end of 2026, with potential for further expansion into 2027 and beyond. The partnership leverages AWS’s experience in operating large-scale AI infrastructure securely and reliably, supporting clusters that can exceed 500,000 chips.
The infrastructure AWS is providing is designed for efficiency and performance in AI processing. EC2 UltraServers cluster NVIDIA GB200 and GB300 GPUs on the same network, enabling low-latency communication across interconnected systems. This setup supports workloads ranging from serving inference for ChatGPT to training future models.
“Scaling frontier AI requires massive, reliable compute,” said Sam Altman, co-founder and CEO of OpenAI. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
Matt Garman, CEO of AWS, stated: “As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions. The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads.”
This collaboration comes as demand for AI computing power continues to grow rapidly. Organizations pursuing higher levels of model intelligence are increasingly turning to cloud providers like AWS for the performance and security their infrastructure offers.