
Meta expands its self-developed chip lineup: four new products by the end of 2027 to strengthen computing-power autonomy

Meta is accelerating its self-developed chip efforts, planning to launch four AI chips by the end of 2027 covering scenarios such as content recommendation and generative AI inference. The latest model, the MTIA 300, is already in mass production; the MTIA 400 is about to be deployed; and the remaining two are scheduled to come online in 2027. Meanwhile, Meta continues to sign procurement agreements worth tens of billions of dollars with NVIDIA and AMD, forming a dual-track computing-power supply system of "self-developed + external procurement" that maintains supply-chain flexibility while controlling costs.
Meta is responding to the rising cost pressures in the AI arms race through its self-developed chip strategy, while maintaining large-scale purchases from NVIDIA and AMD to balance technological independence with supply chain stability.
On March 11, Bloomberg reported that Meta plans to deploy four new generations of self-developed AI chips by the end of 2027 to meet its rapidly expanding AI computing demands. Currently, the latest generation of chips has been put into training tasks for content ranking and recommendation systems; the second chip has completed laboratory testing and is advancing towards deployment; two more chips are scheduled for mass production in 2027.
Although self-developed chips help reduce dependence on external suppliers and lower long-term costs, Meta has not scaled back its external procurement. The company recently signed procurement agreements worth tens of billions of dollars with NVIDIA and AMD to secure several gigawatts of AI computing capacity for the coming years, forming a dual-track supply system of "self-developed + external procurement."
A clear roadmap for four generations of chips, on a tight deployment schedule
The self-developed AI chip roadmap disclosed by Meta on Wednesday shows that four products are being advanced in parallel. Among them, MTIA 300 has entered mass production, primarily used for training tasks in content ranking and recommendation systems; MTIA 400 (codenamed Iris) has completed laboratory testing and is about to enter the deployment process.
The subsequent two chips are scheduled for large-scale deployment in 2027: the MTIA 450, codenamed Arke, is expected to go live in early 2027, while the MTIA 500, codenamed Astrid, will follow about six months later.
Meta's Vice President of Engineering, Yee Jiun Song, stated, "The pace of AI development in the past two to three months has left everyone astonished. The chip projects must keep up with the evolution of workloads, and we are continuously reviewing the roadmap to ensure that the products we develop have the highest practical value."
Specialized over general-purpose: trading versatility for cost savings
Meta's self-developed chip program, the Meta Training and Inference Accelerator (MTIA), focuses on building computing architectures customized for internal needs, with application scenarios covering Instagram content ranking and recommendation systems as well as large-scale generative AI inference tasks.
Yee Jiun Song explained the cost logic of customized chips: "Since we are not targeting the general market, we can trim unnecessary functional modules, directly converting the savings into cost advantages. Our chips do not need to be all-encompassing, which gives us the space to truly achieve cost reduction."
This strategy reflects Meta's dual-track layout: on one hand, continuing to procure traditional GPUs from partners like NVIDIA and AMD to support large-scale general AI training; on the other hand, continuously investing in customized chips, focusing on specialized tasks that are more aligned with the characteristics of the Meta platform, seeking a balance between computing power independence and cost control.
Acquisitions fill the chip talent gap
The key to advancing Meta's chip roadmap lies in its recently expanded self-developed chip team. According to Bloomberg, Meta CEO Mark Zuckerberg, dissatisfied with internal progress last year, attempted to acquire the South Korean chip startup FuriosaAI for $800 million but was rebuffed. Meta subsequently acquired the California-based startup Rivos Inc., bringing in over 400 employees and significantly strengthening the MTIA team's R&D capabilities, which allowed it to advance multiple chip projects in parallel.
External procurement remains strong: NVIDIA and AMD orders lock in years of capacity
Despite accelerating the pace of its self-developed chips, Meta's investment in external procurement has not diminished. The company recently signed procurement agreements with NVIDIA and AMD, each worth tens of billions of dollars, securing large-scale GPU computing supply for Meta over the coming years.
This positioning reflects the logic of Meta's chip strategy: not to replace external procurement with self-development, but to use self-development to cover specific scenarios that external procurement cannot serve efficiently. Against the backdrop of continued investment in AI infrastructure, Meta is attempting to strike a balance between controlling long-term costs and maintaining computing-power flexibility to meet the rapidly evolving demands of AI workloads.
