Amazon and OpenAI agree to a $38B seven-year cloud plan that secures enough GPUs to keep ChatGPT and other AI services running with less dependence on Microsoft

11/11/2025 · 1 min read

OpenAI agreed to spend $38 billion on Amazon Web Services over seven years, ensuring the compute needed to run ChatGPT and future models while reducing reliance on a single provider. Amazon’s stock rose about 5% to a record high after the announcement, signaling investor confidence that fresh AI workloads will lift growth.

The agreement gives OpenAI access to hundreds of thousands of Nvidia GPUs inside AWS data centers. Capacity begins immediately, scales through the end of 2026, and includes options to expand in 2027 and beyond. The companies describe a multi-year partnership that prioritizes rapid access to cutting-edge accelerators as they come online.

Recent governance changes at OpenAI ended Microsoft’s right of first refusal for cloud capacity, clearing the way for multi-cloud procurement at scale. Reporting over the last day places OpenAI’s broader compute ambitions at more than $1 trillion in long-term infrastructure commitments across multiple partners, a figure that has fueled both optimism and caution about the pace of AI investment.

This is a capacity play at industrial scale. OpenAI secures diversified access to advanced chips on a defined timeline. AWS gains a flagship AI tenant and a multi-year stream of high-value compute demand. The common thread is simple and numeric: more GPUs online by 2026 means faster model training and more reliable service for end users, while multi-cloud becomes not a slogan but a signed commitment.