
Morgan Stanley: OpenAI collaboration highlights Google's AI chip strength

Morgan Stanley pointed out that OpenAI may use Google's Tensor Processing Units (TPUs) to handle inference for its AI workloads, a notable endorsement of Google's hardware technology. The move would let OpenAI diversify its suppliers and reduce its reliance on NVIDIA chips. Analysts said the agreement should boost Google's cloud business and strengthen market confidence in Google's AI chips. Although OpenAI will not have access to Google's most powerful TPUs, its decision to work with Google nonetheless underscores Google's strength in AI infrastructure.
According to Zhitong Finance APP, Morgan Stanley said that Microsoft (MSFT.US)-backed OpenAI may use Google's (GOOG.US, GOOGL.US) Tensor Processing Units (TPUs) to run inference for its artificial intelligence (AI) workloads, which would be a strong endorsement of Google's hardware technology.
Using Google's TPUs would mark a diversification of OpenAI's suppliers: OpenAI has so far relied on NVIDIA (NVDA.US) chips both to train its AI models and to run inference, the process of executing a model after training is complete.
The analyst team led by Brian Nowak at Morgan Stanley stated, "Earlier reports indicated that OpenAI and Google were finalizing an agreement under which OpenAI would use Google's cloud computing capacity. The latest reports suggest that, as part of the agreement, OpenAI will also rent Google's TPUs to support its inference workloads, as OpenAI works to meet rapidly growing inference demand while keeping inference costs as low as possible. It is worth noting that OpenAI will not have access to Google's most powerful TPUs, which Google has reserved for training its own Gemini models."
Morgan Stanley stated that this deal is expected to drive rapid growth in Google's cloud business and enhance market confidence in Google's AI chips.
Nowak stated, "We believe OpenAI is the most notable TPU customer to date, with other customers including Apple (AAPL.US), Safe Superintelligence, and Cohere. This agreement will be a significant recognition of Google's decade-long evolution of AI infrastructure capabilities. It is also noteworthy that this is the first time OpenAI has meaningfully used non-NVIDIA chips, especially considering that although OpenAI cannot use the cutting-edge Google TPUs, the company still chose to collaborate with Google, further demonstrating Google's leading position in the broader Application-Specific Integrated Circuit (ASIC) ecosystem."
However, Morgan Stanley noted that the limited supply of NVIDIA GPUs amid strong demand may be one reason for OpenAI's decision to use Google's TPUs. Morgan Stanley also pointed out that OpenAI's decision is not good news for Amazon (AMZN.US) Web Services (AWS) and its custom Trainium chips.
Nowak stated, "If OpenAI partners with Google, OpenAI will run AI workloads across most major cloud service providers, including Google Cloud, Microsoft Azure, Oracle (ORCL.US), and CoreWeave (CRWV.US)... while Amazon is the only significant participant absent from the list."
He added, "Most notably, reports suggest that OpenAI chose to use the previous generation TPU instead of Trainium."