OpenAI and Amazon Web Services have unveiled a strategic, multi-year partnership valued at $38 billion. This agreement grants OpenAI immediate and expanding access to AWS’s advanced cloud infrastructure, specifically Amazon EC2 UltraServers equipped with hundreds of thousands of NVIDIA GB200 and GB300 GPUs, and the ability to scale to tens of millions of CPUs. The infrastructure is designed for AI workloads, ranging from supporting inference for ChatGPT to training next-generation models. The compute clusters are optimized for low-latency, high-throughput performance, critical for the demands of generative AI and agentic workloads. The deployment is already underway, with plans to have all capacity operational by the end of 2026, and further expansion possible into 2027 and beyond.
"New multi-year, strategic partnership with @OpenAI will provide our industry-leading infrastructure for them to run and scale ChatGPT inference, training, and agentic AI workloads. Allows OpenAI to leverage our unusual experience running large-scale AI infrastructure securely,…"
— Andy Jassy (@ajassy) November 3, 2025
This partnership will primarily impact OpenAI’s millions of users, including enterprise clients leveraging AI for coding, data analysis, scientific research, and more. The offering is available immediately, with OpenAI’s open weight models already accessible through Amazon Bedrock, AWS’s managed platform for foundation models.
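For developers, those open-weight models surface through Bedrock's standard runtime interface. As a rough sketch of what a request might look like (the model ID `openai.gpt-oss-120b-1:0` and the Converse-style message shape are assumptions for illustration; check the Bedrock model catalog for the actual identifiers):

```python
import json

# Hypothetical model ID -- confirm the real identifier in the Bedrock model catalog.
MODEL_ID = "openai.gpt-oss-120b-1:0"

def build_converse_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build a Bedrock Converse-style request body for a single user turn."""
    return {
        "modelId": MODEL_ID,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.7},
    }

request = build_converse_request(
    "Summarize the OpenAI-AWS partnership in one sentence."
)
print(json.dumps(request, indent=2))
```

With AWS credentials configured, the same dictionary maps directly onto a call like `boto3.client("bedrock-runtime").converse(**request)`.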
AWS’s role in this collaboration is to provide the secure, scalable, and reliable foundation necessary for OpenAI to continue developing and distributing advanced AI technologies. By consolidating workloads on AWS, OpenAI can accelerate model development and support broader adoption among organizations worldwide. This move positions both companies at the forefront of the rapidly evolving AI infrastructure landscape, setting new standards for performance and scalability in cloud-based AI services.