
Google seeks to collaborate with MediaTek to develop low-cost AI chips, leaving Broadcom facing the prospect of sharing the TPU pie

Google is collaborating with MediaTek to develop low-cost AI chips, a move that may end Broadcom's exclusive role in Google's TPU business. The new TPU chips are expected to launch next year, and MediaTek's close partnership with TSMC is a significant factor in Google's decision. Google has previously relied exclusively on Broadcom for TPU design and development.
According to The Information, American tech giant Google (GOOGL.US) is turning to Taiwanese chip design giant MediaTek to help develop Google's in-house AI chip, the TPU. The collaboration aims to design and develop the next generation of tensor processing units (TPUs) tailored for artificial intelligence training and inference systems. The move suggests that Broadcom (AVGO.US), the leader in the AI ASIC chip field, may lose its exclusive role in designing and developing Google's TPU chips.
The report cited two people familiar with the matter as saying that MediaTek could officially launch these new TPU chips as early as next year. It added that, according to a TSMC insider and a MediaTek representative, the close, long-standing relationship between MediaTek and TSMC was one of the factors behind Google's decision.
Both MediaTek and TSMC are headquartered in Taiwan, and TSMC (TSM.US), one of the world's largest wafer foundries, has long been a key partner for MediaTek: many of MediaTek's flagship smartphone chips are mass-produced on TSMC's advanced process nodes. MediaTek is a chip design company focused on wireless communications, AI computing hardware, and other cutting-edge technologies. As a leading fabless designer, it has no manufacturing capacity of its own and relies on TSMC for production. This deep, years-long cooperation between the two companies may be a significant factor in Google's decision to partner with MediaTek on TPUs.
Google, headquartered in Mountain View, California, previously unveiled its sixth-generation TPU, named Trillium, at its annual developer conference in May 2024.
Google has long worked exclusively with Broadcom's chip design teams to co-develop its TPU product line. Although Google is unlikely to stop drawing on Broadcom's design expertise for its continuously evolving dedicated AI chips, the TPUs, altogether, the move means Broadcom will have to share this immensely profitable chip business with MediaTek. JP Morgan previously estimated that the TPU partnership could bring Broadcom more than $10 billion in revenue in 2025.
People familiar with the matter said part of the reason Google chose MediaTek is that MediaTek's chip pricing is lower than Broadcom's, which helps optimize supply chain costs. Broadcom, however, will continue to benefit from Google's substantial TPU orders and will remain a core TPU partner for the foreseeable future.
Broadcom, the dominant player in AI ASIC chips led by CEO Hock Tan, announced stronger-than-expected quarterly results earlier this month. The company anticipates that by 2027 its addressable market for Ethernet switch chips and AI ASIC chips could range between $60 billion and $90 billion, with the vast majority concentrated in customized dedicated AI chips, namely AI ASICs. Management added that this figure does not include newly signed major clients, specifically two unnamed hyperscale customers the company added during the quarter covered by the earnings report. On a conference call with analysts, Hock Tan said Broadcom is accelerating delivery of AI ASIC chips for "hyperscale customers": operators of large-scale data centers such as Meta, Google, and OpenAI, as well as tech giants like Apple. He noted that in certain AI application scenarios, Broadcom's customized semiconductors hold a performance advantage over the general-purpose AI accelerators NVIDIA sells, namely its Blackwell- and Hopper-architecture AI GPUs.
Hock Tan also revealed on the call that the company is actively expanding its group of "hyperscale customers." It currently has three such customers, with four more in the pipeline, two of which are about to become major revenue-generating clients. "Our hyperscale partners are still actively investing," he emphasized. He also expects to complete the tape-out of customized processors (XPUs) for two hyperscale customers this year.
Notably, revenue from these potential new hyperscale customers is not yet reflected in the company's current AI revenue forecast of $60 billion to $90 billion by 2027, meaning Broadcom's market opportunity may exceed existing market expectations.
Google, Broadcom, MediaTek, and TSMC did not immediately respond to media requests for comments on the TPU chip business.
Weighed down by the news of Google's collaboration with MediaTek on TPU chips, Broadcom's stock fell nearly 3% during Monday's U.S. trading session before paring losses to close down 0.53%.
Broadcom is a core supporter of Google's TPU chip development
Broadcom currently serves as a core supplier of Ethernet switch chips for large AI data centers worldwide, as well as customized AI chips that are crucial for AI training/inference.
With its technological leadership in inter-chip communication and high-speed data transmission between chips, Broadcom has become the most important participant in AI ASIC customization in recent years. For Google's self-developed server AI accelerator, the TPU, Broadcom is the core partner, collaborating closely with the Google team on the chip's development.
In addition to assisting Google with TPU chip design, Broadcom has provided critical inter-chip communication intellectual property and, in collaboration with TSMC, handled the manufacturing, testing, and packaging of new chips, supporting the expansion of Google's new AI data centers. The MediaTek arrangement is structured differently: Google will lead most of the design work on the next-generation TPU, including the processor itself, while MediaTek's main responsibility will be the input/output (I/O) module that provides high-speed communication between the main processor and peripheral core components. By contrast, Broadcom has been involved in the TPU's entire design, packaging, and testing process.
Backed by its unique inter-chip communication technology and numerous patents covering data transmission between chips, Broadcom has become the most important player in the AI ASIC market within AI hardware. Not only does Google continue to work with Broadcom to design and develop customized AI ASICs, but giants like Apple and Meta, along with more data center service operators, are expected to partner with Broadcom over the long term on high-performance AI ASICs.
Cloud giants focus on massive expansion of inference AI computing resources
There is no doubt that with the "new paradigm of low-cost computing power" led by DeepSeek sweeping the globe, the costs of AI training and inference are steadily declining. DeepSeek trained an open-source AI model with performance comparable to OpenAI's o1 for an extremely low investment of less than $6 million, using 2,048 H800 chips, whose performance falls far short of NVIDIA's H100 and Blackwell GPUs.
The latest financial reports and guidance show that American tech giants such as Amazon, Microsoft, and Google are holding to their massive AI spending plans. The core logic is that they are betting the new paradigm of low-cost computing power will accelerate the penetration of AI applications across industries worldwide, driving exponential growth in demand for inference computing power. This is also why lithography giant ASML emphasized on its earnings call that falling AI costs should significantly expand the scope of AI applications.
As cutting-edge AI applications such as generative AI software and AI agents become widely adopted, demand for cloud-side inference computing power will grow enormously. Combined with the cost-cutting paradigm pioneered by DeepSeek, the self-developed AI ASICs that cloud computing giants build in partnership with Broadcom, Marvell, or MediaTek are expected to hold substantial advantages over NVIDIA's AI GPUs in hardware performance, cost, and energy consumption, particularly for efficient, massive-scale neural network parallel computing in AI inference.
DeepSeek has set off an "efficiency revolution" in AI training and inference, pushing the development of future large AI models toward two core goals: low cost and high performance. With training engineering significantly optimized and demand for cloud AI inference computing power surging, AI ASICs are entering a demand expansion trajectory even stronger than during the 2023-2024 AI boom. Broadcom's latest results indicate that major clients such as Google, OpenAI, and Meta are expected to keep investing heavily in co-developing AI ASIC chips with Broadcom.
Previously, Meta co-designed its first- and second-generation AI training/inference accelerators with Broadcom, and the two are expected to accelerate development of Meta's next-generation AI chip, MTIA 3, in 2025. OpenAI, which has received major investment from Microsoft and established deep cooperation with it, said last October that it would work with Broadcom to develop OpenAI's first AI ASIC chip.