To further reduce dependence on Microsoft, OpenAI has listed Google as a cloud partner

Wallstreetcn
2025.07.17 00:41

This arrangement aims to meet OpenAI's growing demand for computing power and is a significant win for Google's cloud business, helping it compete with Amazon AWS and Microsoft Azure. Microsoft was previously OpenAI's exclusive infrastructure provider, but in January of this year the arrangement shifted to a priority supply model.

OpenAI officially included Google Cloud in its list of suppliers, marking a key step in diversifying its computing resource supply and further reducing its reliance on long-term partner Microsoft.

On July 17, media reports said that OpenAI had added Google Cloud to the supplier list published on its official website; the agreement was finalized in May this year after several months of negotiations between the two parties.

This collaboration aims to meet OpenAI's rising demand for computing power, and it significantly benefits Google Cloud's business, helping it gain an advantage in competition with Amazon AWS and Microsoft Azure. Meanwhile, this also reflects the enormous computing demands of large-scale AI model training and deployment, reshaping the competitive landscape of the artificial intelligence industry.

OpenAI's current cloud service suppliers also include Microsoft, Oracle, and CoreWeave. Analysts believe that this diversified supply strategy will enhance OpenAI's bargaining power and reduce supply chain risks.

Accelerating Diversification of Computing Resources

OpenAI's supplier diversification strategy was not achieved overnight. Earlier this year, the company launched the $500 billion Stargate infrastructure project in collaboration with SoftBank and Oracle, and signed a five-year cloud service agreement with CoreWeave worth nearly $12 billion.

According to media reports, informed sources revealed that Google and OpenAI had discussed collaboration arrangements for several months but could not reach a deal because of OpenAI's exclusive agreement with Microsoft. It was not until Microsoft adjusted its collaboration model with OpenAI in January this year that the talks achieved a breakthrough.

OpenAI stated that Google Cloud infrastructure will provide service support for ChatGPT and its application programming interface in regions such as the United States, Japan, the Netherlands, Norway, and the United Kingdom.

For Google, acquiring OpenAI as an important client is a significant victory for its cloud business. This collaboration is expected to help OpenAI better meet the enormous market demand for AI assistants like ChatGPT, while providing a more stable infrastructure guarantee for its future technological development.

Evolution of Relationship with Microsoft

Microsoft was the exclusive data center infrastructure provider for OpenAI until a significant adjustment in their relationship occurred in January this year.

Microsoft agreed to shift from an exclusive supplier model to a priority supply model, meaning that when OpenAI needs more computing resources, Microsoft has priority supply rights but is no longer the only option.

Notably, Microsoft listed OpenAI as a competitor last year, reflecting subtle changes in the relationship between the two companies.

Currently, both companies sell AI tools to developers and offer subscription services to enterprises. Nevertheless, Microsoft still retains exclusive rights to OpenAI's application programming interface (API).

Last year, Oracle announced a collaboration with Microsoft and OpenAI to "extend the Microsoft Azure AI platform to Oracle Cloud infrastructure," providing additional computing capacity for OpenAI.

OpenAI co-founder and CEO Sam Altman publicly stated in April this year that the company faces computing power constraints. He wrote on the social media platform X: "If anyone has 100,000 GPU capacity and can provide it immediately, please contact us!"

This plea highlights AI companies' urgent demand for high-end computing hardware such as NVIDIA graphics processors. As large language models continue to grow in scale, demand for computing resources is growing exponentially.