
OpenAI's shift to TPU: What does it mean for Google, NVIDIA, and Amazon?

Morgan Stanley stated that for Google, the deal is an important endorsement by OpenAI of its AI infrastructure capabilities, one that will drive growth in the Google Cloud business and consolidate its leading position in the ASIC ecosystem; for NVIDIA, despite capacity constraints, it will maintain its dominance in the GPU market; and for Amazon, AWS's absence from OpenAI's partner list may expose its capacity limitations and raise questions about the competitiveness of its Trainium chips.
OpenAI's shift to Google's TPU chips marks an important turning point in the AI infrastructure landscape.
According to news from the Chasing Wind Trading Desk, Morgan Stanley stated in a report on June 30 that for Google, this is an important endorsement by OpenAI of Google's AI infrastructure capabilities, which will drive growth in Google Cloud's business and consolidate its leading position in the ASIC ecosystem.
For NVIDIA, although facing capacity constraint challenges, it will still maintain its dominant position in the GPU market. For Amazon, the absence of AWS from OpenAI's partner list may expose its capacity limitations and insufficient competitiveness of Trainium chips.
OpenAI's first large-scale adoption of non-NVIDIA chips
Wallstreetcn previously reported that OpenAI has recently begun renting Google’s TPU chips to provide computing power support for products like ChatGPT. This is the company's first large-scale use of non-NVIDIA chips. This collaboration arrangement allows OpenAI to reduce its dependence on Microsoft's data centers while providing Google’s TPU with an opportunity to challenge NVIDIA's GPU market dominance. OpenAI hopes to lower inference computing costs through the TPU chips rented from Google Cloud.
Morgan Stanley believes that OpenAI is currently the most important TPU customer (other customers include Apple, Safe Superintelligence, and Cohere), and this agreement is a significant recognition of Google’s AI infrastructure capabilities. Google’s TPU technology has been developed for ten years, with the first TPU released in 2015.
Analysts believe this collaboration has two major positive implications for Google: first, it could become a driver of accelerated growth in Google Cloud revenue, which has not yet been reflected in GOOGL's stock price; second, Google's willingness to supply compute to a direct competitor reflects its confidence in its position in search.
Morgan Stanley's chip model shows that spending on NVIDIA GPUs is expected to reach $243 billion and $258 billion in 2027 and 2028, respectively, while spending on TPUs will only be about $21 billion and $24 billion (most of which is used internally by Google).
This indicates a significant opportunity for Google, whether through market-share shift or expansion of the total addressable market (TAM). If OpenAI's move drives more customers to migrate, Google Cloud's compute TAM could be revised upward rapidly.
It is worth noting that although OpenAI cannot access the most advanced TPU versions, it still chose to use TPUs, further proving Google's leading position in the broader ASIC ecosystem. As developers become more familiar with TPUs, further adoption by companies outside Google may become an additional growth driver for Google Cloud's business.
NVIDIA's capacity constraints become a key factor
From NVIDIA's perspective, Morgan Stanley emphasizes that the company continues to make progress in its business with Google. NVIDIA's revenue from Google customers is expected to more than triple this year to over $20 billion, outpacing the growth of ASICs, with its processor share increasing to nearly 65%. However, NVIDIA is currently sold out, especially of rack-scale products.
Morgan Stanley believes that the strong demand for alternative architectures is at least partly driven by an inference capacity shortage that urgently needs to be addressed, rather than by a shift in competitive dynamics. Nevertheless, this in itself reflects a significant differentiating advantage for Google relative to its peers.
Amazon AWS faces competitive pressure
Notably, OpenAI will now run AI workloads on most major cloud service providers, including Google Cloud, Azure, Oracle, and CoreWeave, while Amazon AWS is noticeably absent.
Morgan Stanley's analysis suggests that while the exact reason OpenAI did not reach an agreement with AWS is unclear, it may indicate that Amazon's capacity constraints are greater than expected, leaving it unable to meet OpenAI's demands. More importantly, this reflects negatively on AWS's custom Trainium silicon, particularly given that OpenAI chose a previous-generation TPU over Trainium.
This dynamic is expected to keep investors highly focused on AWS's growth and on the anticipated acceleration in growth in the second half of the year.