OpenAI, AWS sign $38bn cloud deal to power AI workloads with NVIDIA GPUs

What's new? OpenAI partners with Amazon Web Services to run AI workloads on advanced EC2 UltraServers with NVIDIA GPUs, and OpenAI's open-weight models are live on Amazon Bedrock for enterprise use.


OpenAI and Amazon Web Services have unveiled a strategic, multi-year partnership valued at $38 billion. This agreement grants OpenAI immediate and expanding access to AWS’s advanced cloud infrastructure, specifically Amazon EC2 UltraServers equipped with hundreds of thousands of NVIDIA GB200 and GB300 GPUs, and the ability to scale to tens of millions of CPUs. The infrastructure is designed for AI workloads, ranging from supporting inference for ChatGPT to training next-generation models. The compute clusters are optimized for low-latency, high-throughput performance, critical for the demands of generative AI and agentic workloads. The deployment is already underway, with plans to have all capacity operational by the end of 2026, and further expansion possible into 2027 and beyond.

This partnership will primarily impact OpenAI's millions of users, including enterprise clients leveraging AI for coding, data analysis, scientific research, and more. The offering is available immediately, with OpenAI's open-weight models already accessible through Amazon Bedrock, AWS's managed platform for foundation models (see the sketch below).
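For readers wondering what "accessible through Amazon Bedrock" looks like in practice, here is a minimal sketch that calls a Bedrock-hosted model through the AWS SDK for Python (boto3) and the Bedrock Converse API. The model identifier and region are illustrative assumptions, not values confirmed by the announcement; the IDs actually available depend on the models and regions enabled in your AWS account.

```python
# Minimal sketch: querying an OpenAI open-weight model hosted on Amazon Bedrock
# using boto3's Converse API. Requires AWS credentials with Bedrock access.
import boto3

# Region is an assumption for illustration; use one where the model is enabled.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

response = client.converse(
    modelId="openai.gpt-oss-120b-1:0",  # assumed/illustrative model identifier
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the OpenAI-AWS partnership in one sentence."}],
        }
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

# The Converse API returns the assistant reply as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```

Because the Converse API uses a uniform request shape across Bedrock-hosted foundation models, swapping between models is generally just a matter of changing the modelId.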

AWS’s role in this collaboration is to provide the secure, scalable, and reliable foundation necessary for OpenAI to continue developing and distributing advanced AI technologies. By consolidating workloads on AWS, OpenAI can accelerate model development and support broader adoption among organizations worldwide. This move positions both companies at the forefront of the rapidly evolving AI infrastructure landscape, setting new standards for performance and scalability in cloud-based AI services.

Source