
Broadcom conference call: Demand from hyperscale customers is strong, with the AI market expected to reach $60 billion to $90 billion by fiscal 2027

Broadcom also stated that three hyperscale customers are placing large XPU orders, which it expects to represent a $60 billion to $90 billion market in fiscal year 2027. In addition, four partners are developing custom accelerators; these are not yet included in the above market estimate.
After the U.S. stock market closed on March 6, Eastern Time, Broadcom released its first-quarter results, with both first-quarter performance and second-quarter revenue guidance exceeding expectations. The company also expects AI semiconductor revenue of $4.4 billion in the second quarter, a strong signal to investors that AI computing spending remains robust, which drove the stock up as much as 17% in after-hours trading.
In the first quarter, Broadcom's revenue was $14.92 billion, a year-on-year increase of 25%, surpassing analysts' expectations of $14.61 billion. Among them, semiconductor revenue was $8.2 billion, a year-on-year increase of 11%, exceeding the market expectation of $8.1 billion.
The following is a summary of key points from Broadcom's earnings call:
- The company's total revenue for the first quarter of fiscal year 2025 reached $14.9 billion, a year-on-year increase of 25%; adjusted EBITDA was $10.1 billion, a year-on-year increase of 41%. Semiconductor revenue was $8.2 billion, a year-on-year increase of 11%; infrastructure software revenue was $6.7 billion, a year-on-year increase of 47%.
- AI revenue in the first quarter was $4.1 billion, a year-on-year increase of 77%, due to higher-than-expected shipments of network solutions. AI revenue in the second quarter is expected to be $4.4 billion, a year-on-year increase of 44%.
- Broadcom's chips address both training workloads and inference (the latter as a separate product line); together, the two make up the estimated $60 billion to $90 billion market, though training currently accounts for the larger share of revenue.
- The company will increase AI R&D, advancing tape-out of a 2-nanometer AI XPU in 3.5D packaging; it will also enhance the performance of its "Tomahawk" switches and deliver samples of the next-generation 100-terabit Tomahawk 6 switch to customers.
- The company has an advantage in its connectivity portfolio: many hyperscale deployments are greenfield builds, creating opportunities in optical connectivity, including multimode and single-mode laser products. Beyond Ethernet, it also has offerings in protocols such as PCI Express and in architectures such as intelligent switches; these connectivity products average about 30% of AI revenue.
- Three hyperscale customers are placing large XPU orders, expected to represent a $60 billion to $90 billion market in fiscal year 2027; four additional partners are developing custom accelerators that are not yet included in that estimate.
- Broadcom counts a design win only when the product reaches large-scale production and deployment; it chooses to work with customers that have sustained, high-volume demand and does not partner with startups.
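As a quick illustration of the growth arithmetic in the bullets above, the year-ago AI revenue implied by the reported figures can be backed out from the growth rates (a reader's back-of-envelope sketch; the year-ago values are implied, not reported):

```python
# Back out the implied year-ago AI revenue from current revenue and YoY growth.
# Note: the year-ago values are implied by the quoted growth rates, not reported.
def implied_year_ago(current_billion, yoy_growth_pct):
    """Revenue one year ago implied by current revenue and YoY growth."""
    return current_billion / (1 + yoy_growth_pct / 100)

q1_fy24 = implied_year_ago(4.1, 77)  # Q1 FY2025 AI revenue: $4.1B, +77% YoY
q2_fy24 = implied_year_ago(4.4, 44)  # Q2 FY2025 AI guidance: $4.4B, +44% YoY
print(f"Implied Q1 FY2024 AI revenue: ${q1_fy24:.2f}B")  # ~$2.32B
print(f"Implied Q2 FY2024 AI revenue: ${q2_fy24:.2f}B")  # ~$3.06B
```

This is why guidance of $4.4 billion can represent "only" 44% growth despite being sequentially higher: the year-ago comparison base was already ramping.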
The full transcript of the call is as follows (translated by AI):
Operator: Welcome to Broadcom's fiscal year 2025 first-quarter financial performance conference call. Now, I would like to invite Broadcom's Head of Investor Relations, Ji Yoo, to give the opening remarks and introduction.
Head of Investor Relations Ji Yoo: Thank you, Cherie. Good afternoon, everyone. Joining me on the call today are President and CEO Hock Tan, Chief Financial Officer Kirsten Spears, and President of the Semiconductor Solutions Group Charlie Kawwas. Broadcom issued a press release and financial statements after the market closed, detailing our financial performance for the first quarter of fiscal year 2025. If you did not receive a copy, you can find the relevant information in the investor section of Broadcom's website. This conference call is being webcast live, and a recording will be available in the investor section of Broadcom's website for one year. In their prepared remarks, Hock and Kirsten will provide details on our first-quarter fiscal 2025 performance, our outlook for the second quarter of fiscal 2025, and comments on the business environment. After the prepared remarks, we will take questions. Please refer to today's press release and our recent filings with the U.S. Securities and Exchange Commission (SEC) for the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call.
In addition to reporting in accordance with U.S. Generally Accepted Accounting Principles (US GAAP), Broadcom also reports certain financial metrics on a non-GAAP basis. The reconciliation between GAAP and non-GAAP metrics is included in the tables attached to today's press release. Comments during today's conference call will primarily refer to our non-GAAP financial performance. Now I will hand the call over to Hock E. Tan.
President and CEO Hock Tan: Thank you, Ji Yoo, and thank you all for joining the call today. In our first quarter of fiscal year 2025, total revenue reached a record $14.9 billion, up 25% year-over-year, and adjusted EBITDA also set a record at $10.1 billion, up 41% year-over-year. Let me start with our semiconductor business. Semiconductor revenue for the first quarter was $8.2 billion, up 11% year-over-year. Growth was driven by artificial intelligence, with AI revenue reaching $4.1 billion, up 77% year-over-year. Thanks to increased shipments of AI networking solutions to hyperscale data center customers, AI revenue exceeded our expectation of $3.8 billion. Our hyperscale customers continue to invest heavily in their next-generation frontier models, which do require high-performance accelerators and ever-larger AI data center clusters.
In line with this, we have increased our R&D investment in two areas. First, we are pushing the limits of technology to create next-generation accelerators. We are taping out the industry's first 2-nanometer AI XPU in 3.5D packaging, targeting XPU performance of 100 trillion floating-point operations per second. Second, we are working to scale clusters of 500,000 accelerators for hyperscale customers. We have doubled the capacity of the existing sites (voice unclear). In addition, to enable AI clusters to scale to 1 million XPUs over Ethernet, we have completed the tape-out of our next-generation 100-terabit Tomahawk 6 switch, which delivers 1.6-terabit bandwidth using 200G SerDes. We will deliver samples to customers in the coming months. These R&D investments align closely with the roadmaps of our three hyperscale customers, as they each plan to scale their XPU clusters to 1 million by the end of 2027. We therefore reiterate what we said last quarter: we expect these three hyperscale customers to represent a serviceable addressable market (SAM) of $60 billion to $90 billion in fiscal year 2027.
In addition to these three customers, we have previously mentioned that we are working closely with two other hyperscale data center customers to help them build their own custom AI accelerators. We are on track to complete their XPU tape-outs this year. During our collaboration with these hyperscale data center customers, it has become clear that while they excel in software, Broadcom is the best in hardware.
Collaboration is essential to optimize large language models. Since our last earnings call, two more hyperscale data center customers have chosen Broadcom to develop custom accelerators to train their next-generation cutting-edge models, which is not surprising for us. So, even though we currently have three hyperscale data center customers making large purchases of our XPUs, there are now four additional customers working closely with us to build their own accelerators. It is important to clarify that, of course, these four customers are not included in our estimate of the $60 billion to $90 billion serviceable addressable market for 2027.
So we are indeed seeing an exciting trend. New cutting-edge models and technologies are putting unexpected pressure on AI systems. It is challenging to meet the demands of all model clusters with a single system design. Therefore, it is hard to imagine a universal accelerator that can be configured and optimized across multiple cutting-edge models.
As I mentioned earlier, the development trend of XPUs is a multi-year process. So going back to 2025, we see steady progress in the deployment of our XPUs and networking products. In the first quarter, AI business revenue was $4.1 billion, and we expect AI business revenue to grow to $4.4 billion in the second quarter, a year-over-year increase of 44%. Looking at the non-AI semiconductor business, revenue was $4.1 billion, which declined 9% quarter-over-quarter due to seasonal declines in the wireless business. Overall, the recovery of the non-AI semiconductor business remained slow in the first quarter. The broadband business rebounded with double-digit quarter-over-quarter growth after hitting bottom in the fourth quarter of 2024, and similar growth is expected in the second quarter as service providers and telecom operators increase spending.
The server storage business declined by a single-digit percentage quarter-over-quarter in the first quarter but is expected to grow in the high single digits quarter-over-quarter in the second quarter. Meanwhile, the enterprise networking business will remain flat in the first half of fiscal 2025 as customers continue to work through channel inventory. Although the wireless business declined quarter-over-quarter due to seasonality, it was flat year-over-year, and it is expected to remain flat year-over-year in the second quarter as well. The industrial resale business declined by double digits in the first quarter and is expected to decline again in the second quarter. In summary, we expect non-AI semiconductor revenue to be flat quarter-over-quarter in the second quarter, even though bookings continue to grow year-over-year. Putting it together, for the second quarter we expect total semiconductor revenue to grow 2% quarter-over-quarter and 17% year-over-year to $8.4 billion. Now let's turn to the infrastructure software segment. First-quarter infrastructure software revenue was $6.7 billion, up 47% year-over-year and 15% quarter-over-quarter, though this was somewhat inflated by deals that were pulled forward from the second quarter into the first. The first quarter of fiscal 2025 is also the first quarter in which the year-over-year comparison includes VMware in both periods. We have seen significant growth in the software segment for two reasons. First, we are converting from a largely perpetual-license model to a full subscription model; to date, we are more than 60% of the way through that conversion. Second, these perpetual licenses were previously used mainly for compute virtualization, which is (inaudible).
We are upselling customers to the full VCF (VMware Cloud Foundation) stack, enabling virtualization of the entire data center so customers can build their own on-premises private cloud environments. As of the end of the first quarter, roughly 70% of our 10,000 largest customers had adopted VCF, and as these customers consume VCF we still see significant growth opportunities ahead. As large enterprises adopt artificial intelligence, they must run AI workloads in their on-premises data centers, which will include both GPU servers and traditional CPU servers. Just as VCF virtualizes traditional CPU-based data centers, VCF will also virtualize GPUs on a unified platform, enabling enterprises to import AI models and run them on their own data locally. This GPU-virtualization platform is called VMware Private AI Foundation. To date, through our partnership with NVIDIA, we have 39 enterprise customers using VMware Private AI Foundation. Customer demand is driven by our open ecosystem, excellent low latency, and automation capabilities, which allow customers to intelligently allocate and run workloads across GPU and CPU infrastructure, significantly reducing costs. Turning to the software outlook for the second quarter, we expect revenue of $6.5 billion, up 23% year-over-year. In total, we expect consolidated second-quarter revenue of approximately $14.9 billion, up 19% year-over-year, and we expect this to drive second-quarter adjusted EBITDA of about 66% of revenue. With that, I will hand the call over to Kirsten.
Chief Financial Officer Kirsten Spears: Thank you, Hock. Let me provide more detail on our first-quarter financial performance. For year-over-year comparisons, please note that the first quarter of fiscal 2024 was a 14-week quarter, while the first quarter of fiscal 2025 is a 13-week quarter. Consolidated revenue for the quarter was $14.9 billion, up 25% from the same period last year. Gross margin for the quarter was 79.1% of revenue, above our initial expectations due to higher infrastructure software revenue and a more favorable revenue mix in the semiconductor business.
Consolidated operating expenses were $2 billion, of which $1.4 billion was for R&D. The operating profit for the first quarter was $9.8 billion, a 44% increase year-over-year, with an operating margin of 66% of revenue. Adjusted EBITDA reached a record $10.1 billion, accounting for 68% of revenue, higher than our expectation of 66%. This figure does not include $142 million in depreciation.
Now let's review the income statements of our two business segments, starting with the semiconductor business. Our semiconductor solutions segment generated $8.2 billion in revenue, accounting for 55% of total revenue this quarter, with an 11% year-over-year growth. The gross margin for our semiconductor solutions segment was approximately 68%, an increase of 70 basis points year-over-year, driven by the revenue mix. Operating expenses increased by 3% year-over-year to $890 million, due to increased R&D investments in leading AI semiconductors, resulting in an operating margin of 57% for the semiconductor business. Now, looking at the infrastructure software business. The infrastructure software business generated $6.7 billion in revenue, accounting for 45% of total revenue, with a 47% year-over-year growth, primarily due to increased revenue from VMware.
This quarter, the gross margin for the infrastructure software business was 92.5%, compared to 88% in the same period last year. Operating expenses for this quarter were approximately $1.1 billion, resulting in an operating margin of 76% for the infrastructure software business. In comparison, the operating margin for the same period last year was 59%. This year-over-year improvement reflects our orderly integration of VMware and our strong focus on deploying our VCF strategy.
Next, let's look at cash flow. Free cash flow for the quarter was $6 billion, or 40% of revenue. Free cash flow as a percentage of revenue continues to be affected by cash interest expense on the debt from the VMware acquisition, as well as by cash taxes arising from the mix of U.S. taxable income, the continued delay in reinstating Section 174 expensing, and the impact of the Alternative Minimum Tax (AMT).
We spent $100 million on capital expenditures. The accounts receivable turnover days for the first quarter were 30 days, compared to 41 days in the same period last year. At the end of the first quarter, our inventory was $1.9 billion, an 8% increase quarter-over-quarter, to support revenue in the coming quarters. Our inventory days for the first quarter were 65 days, as we maintain strict control over inventory management across the ecosystem.
At the end of the first quarter, we had $9.3 billion in cash and $68.8 billion in total principal debt. During the quarter, we repaid $495 million of fixed-rate debt and $7.6 billion of floating-rate debt using new senior notes, commercial paper, and cash on hand, reducing net debt by $1.1 billion. Following these actions, our $58.8 billion of fixed-rate debt carried a weighted average coupon of 3.8% and an average remaining maturity of 7.3 years. Our $6 billion of floating-rate debt carries a weighted average coupon of 5.4% with a remaining maturity of 3.8 years, and our $4 billion of commercial paper carries an average rate of 4.6%. Turning to capital allocation, in the first quarter we paid shareholders $2.8 billion in cash dividends, based on a quarterly common stock cash dividend of $0.59 per share.
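The tranche figures above imply a blended borrowing cost across the total debt, which can be cross-checked with a principal-weighted average (a reader's illustration; this blended figure is not reported on the call):

```python
# Principal-weighted average coupon across the reported debt tranches.
def weighted_avg_rate(tranches):
    """tranches: list of (principal in $B, annual rate in %)."""
    total = sum(p for p, _ in tranches)
    return sum(p * r for p, r in tranches) / total

tranches = [
    (58.8, 3.8),  # fixed-rate debt: $58.8B at a 3.8% weighted average coupon
    (6.0, 5.4),   # floating-rate debt: $6B at 5.4%
    (4.0, 4.6),   # commercial paper: $4B at an average 4.6% rate
]
blended = weighted_avg_rate(tranches)
print(f"Blended rate on ${sum(p for p, _ in tranches):.1f}B of debt: {blended:.2f}%")  # ~3.99%
```

The tranche principals also sum to the $68.8 billion of total principal debt stated above, confirming the figures are internally consistent.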
We spent $2 billion to repurchase 8.7 million shares of Broadcom (AVGO) stock from employees to cover withholding taxes due upon vesting. We expect the non-GAAP diluted share count to be approximately 4.95 billion shares in the second quarter. Now, to the outlook. For the second quarter, we expect consolidated revenue of $14.9 billion, with semiconductor revenue of approximately $8.4 billion, up 17% year-over-year. We expect second-quarter AI revenue of $4.4 billion, up 44% year-over-year, and non-AI semiconductor revenue of $4 billion. We expect second-quarter infrastructure software revenue of approximately $6.5 billion, up 23% year-over-year, and second-quarter adjusted EBITDA of about 66% of revenue. For modeling purposes, we anticipate a sequential decline of approximately 20 basis points in consolidated gross margin, driven by the revenue mix within infrastructure software and the product mix within semiconductors. As Hock discussed earlier, we are increasing our R&D investment in leading-edge AI in the second quarter, which is why we expect adjusted EBITDA of about 66% of revenue. We expect the non-GAAP tax rate for the second quarter and for fiscal 2025 to be approximately 14%. That concludes my prepared remarks. Operator, please open the line for questions.
Q&A Session
Operator: Our first question comes from Ben Reitzes of Melius. Please go ahead.
Analyst Ben Reitzes: Hello everyone, thank you very much, and congratulations on the great results. Hock, you mentioned that four new partners have come on board. Can you elaborate on the trend you are seeing? Could any of these reach the scale of your current three customers? More broadly, what does this mean for the custom chip trend, and what is your long-term outlook for the growth potential of this business? Thank you.
President and CEO Hock Tan: Very interesting question, Ben, and thank you for the congratulations. However, by the way, these four cannot yet be considered customers as we define them. As I have always said, when developing and building an XPU, to be honest, we are not the true creators of these XPUs. We help each of our hyperscale data center customers build that chip, and essentially that computing system, so to speak.
It includes the model, that is, the software model, tightly integrated with the compute engine (the XPU) and the network that connects clusters of XPUs into a whole to train those large frontier models. So what I mean is that while we build the hardware, it still needs to work in conjunction with our partners' software models and algorithms before it can be fully deployed at scale. This is why we define customers as those we know have deployed at scale and are taking mass-produced product for operation.
In this regard, I want to reiterate that we currently only have three (voice unclear). These four, I refer to as partners, are working hard to do the same thing as the first three customers, which is to run their own cutting-edge models and train their own cutting-edge models. And as I have mentioned, this is not something that happens overnight; building the first chip can typically take a year and a half, and that is already a very fast pace. We are able to accelerate this because we basically already have a viable framework and approach.
This approach works for the three customers, and there is no reason it wouldn't work for these four customers. But we still need these four partners to develop the software, which is not something we do, and only then can the entire system operate. To answer your question, there is no reason to believe that these four customers will not generate demand on a scale similar to the first three customers, although it may be a bit later. They are starting later, so it may take them a bit longer to reach that level.
Analyst Ben Reitzes: Thank you very much.
Operator: Thank you. Please hold, the next question will come from Harlan Sur of JP Morgan. Please go ahead.
Analyst Harlan Sur: Good afternoon, Hock and team, excellent execution this quarter. It's great to see the continued growth momentum in your AI business in the first half of your fiscal year, and your customer base for AI application-specific integrated circuits (ASICs) is also expanding. Hock, I know you said on the last earnings call that you expect strong growth in the second half of the fiscal year driven by the ramp of the new 3-nanometer AI accelerator programs. Can you, qualitatively or quantitatively, share how the second half will improve compared to the first half?
Is the picture more favorable or less favorable than what you expected 90 days ago? Frankly, a lot has happened since the last earnings call, right? For example, DeepSeek's focus on improving the efficiency of AI models on the one hand, and on the other hand the significant capital expenditure plans of your cloud and hyperscale data center customers. So any color on the AI business in the second half would be very helpful for us.
President and CEO Hock Tan: You're asking me to speculate about what is in customers' minds, and I won't claim that they share all their thinking with me. That said, on one hand, our first-quarter performance exceeded expectations and the second quarter looks strong, partly due to increased shipments of networking products which, as I mentioned, tie together those XPUs and AI accelerators, and in some cases even GPUs, for hyperscale data center customers, and that is a good thing. In addition, we also believe there will be some pull-forward and acceleration of shipments in fiscal year 2025, so to speak.
Analyst Harlan Sur: You mentioned 90 days ago that the 3-nanometer programs would ramp in the second half of the year; is that still on track?
President and CEO Hock Tan: Harlan, thank you. I'm sorry, I can only say: let's not speculate about the second half for now.
Analyst Harlan Sur: Okay. Thank you, Hock.
President and CEO Hock Tan: Thank you.
Operator: Thank you. Please hold on, the next question will come from William Stein of Truist Securities. Please go ahead.
Analyst William Stein: Great. Thank you for answering my question. Congratulations on such outstanding performance. From the news headlines regarding tariffs and DeepSeek, there may be some disruptive factors; some customers and other complementary suppliers seem a bit hesitant and may be struggling with tough decisions. For excellent companies, these times are often a good opportunity to stand out and become stronger and better than before. Over the past decade, you have grown this company in an amazing way.
Today, your performance is also outstanding, especially in the field of artificial intelligence. But I wonder, based on what we infer from news headlines about other companies, have you noticed these kinds of disruptive factors? Besides adding these customers in the AI field, I believe there are other great advancements, but can we expect Broadcom to undergo some significant changes as a result?
President and CEO Hock Tan: You've raised a series of very interesting questions, all very relevant and intriguing. The only issue is that, I have to say, it is still too early to judge where we will ultimately end up. What I mean is that the threat of tariffs, and the discussion of tariffs on chips in particular, has not yet materialized, and we are also unclear about what the specific implementation framework will look like.
So we are not sure. But we are indeed experiencing and adapting to a transformation, and I must add that it is a positive one: the change generative artificial intelligence is bringing to the semiconductor industry. There is no doubt that generative AI, as I have said before and will repeat now, is accelerating the development of semiconductor technology more profoundly than ever, whether in process, packaging, or design, toward higher-performance accelerators and networking functions. We believe that as we face interesting new challenges, innovation and upgrades are happening every month.

Especially with XPUs, we are striving, and being asked, to optimize for the frontier models of our partners, our customers, our hyperscale data center clients. For us, being able to participate and try to optimize is almost an honor. And when I say optimize: when you look at an accelerator, even at a simple, high level, what you want to achieve is not just a single compute metric such as how many trillions of floating-point operations per second; it goes far beyond that. This is also a distributed computing problem. It is not just the compute power of a single XPU or GPU, but also the network bandwidth between it and the adjacent XPU or GPU. So that has an impact, and when you optimize, you have to make trade-offs across these dimensions. Then you have to decide whether you are optimizing for training, or for prefill and post-training fine-tuning. Next, you need to consider how much memory to balance against that, and at the same time how much latency you can tolerate, which is memory bandwidth. So you are juggling at least four variables, and perhaps five if, for direct inference, you consider memory bandwidth in addition to memory capacity. We need to handle all these variables and strive to optimize them.
For our engineers, this is a fantastic experience that lets them push the limits and think about how to build all these chips. So this is the biggest transformation we are seeing right now, simply because we are trying to innovate and break through in generative AI, attempting to create the best hardware infrastructure to run it.

Beyond that, as I pointed out, other factors are at play, because artificial intelligence is not only driving enterprise demand for hardware but also influencing how enterprises build data centers. For enterprises, controlling data privacy has become crucial. So the trend of pushing workloads to the public cloud may slow slightly, especially for large enterprises, which must recognize that if they want to run AI workloads, they may need to seriously consider running them on-premises. Then you suddenly realize you have to upgrade your data center to manage and run your data locally, which has driven a trend we have seen over the past 12 months, and it is why I mentioned the VMware Private AI Foundation platform. Indeed, enterprises, especially those transforming quickly, are realizing how and where to run their AI workloads. So these are the trends we are seeing today, many stemming from artificial intelligence and many from sensitive regulation around cloud sovereignty and data sovereignty. As for the tariff issues you mentioned, I think it is still too early for us to settle on a response strategy; perhaps in three to six months we will have a clearer idea of what to do.
Analyst William Stein: Thank you.
Operator: Thank you. Please hold on, the next question will come from Ross Seymore of Deutsche Bank. Please go ahead.
Analyst Ross Seymore: Great, thank you for the opportunity to ask a question. Hock, I want to return to the XPU topic. From the four new partners you mentioned last quarter (who have not yet become formal customers) to the two additional ones announced today, I want to discuss how you assess the path from winning a design to actual deployment. There is some controversy around cases where many designs are won but deployments never materialize, or deployment volumes never reach the initial commitment. How do you view this conversion rate? Does it fluctuate widely? Or can you help us understand how the process works?
President and CEO Hock Tan: Well, Ross, that's an interesting question. Let me take this opportunity to clarify that our approach to design wins may be quite different from many of our peers'. First, we believe we have won a design only when we know our product will be produced at scale and actually deployed in real production.
This requires a long lead time because, based on our experience, it can easily take a year from tape-out to getting the product, and from receiving the product from our partners to achieving mass production. Typically, it takes six months to a year from product delivery to mass production. That's the first point. Secondly, I mean, producing and deploying 5,000 units is simply a joke. In our view, that doesn't count as real production.
Therefore, we also have limitations when selecting partners, only choosing those customers who genuinely need large quantities of products. From our perspective, these customers currently have a significant demand for large-scale training, that is, for continuously training large language models and cutting-edge models. So we will filter the existing or potential customer base (voice unclear), and from the very beginning, we are very cautious in selecting partners.
So when we say we win design orders, it really is on a large scale. This is not the kind of situation that fizzles out after six months or fails after a year. Essentially, this is a kind of filtering of customers. This has been our consistent approach in running the Application-Specific Integrated Circuit (ASIC) business for the past 15 years. We select customers because we understand them, and we will work with these customers to develop multi-year roadmaps because we know these customers are sustainable partners.
Frankly, we do not work with startups.
Analyst Ross Seymore: Thank you.
Operator: Please hold on for the next question. This question will come from Stacy Rasgon of Bernstein Research. Please go ahead.
Analyst Stacy Rasgon: Hello, everyone. Thank you for taking my question. I want to ask about the three customers where you currently have significant business. Are you concerned that new regulations, or the AI diffusion rules said to take effect in May, might impact the designs you have already won or your product shipments? It sounds like you currently believe these three customers' business is unaffected, but it would be great if you could share any concerns about how new regulations or the AI diffusion rules might affect these orders.
President and CEO Chen Fuyang: Thank you. In this era of geopolitical tension and frequent government actions, yes, everyone has some concerns to varying degrees. But to directly answer your question, no, we are not worried.
Analyst Stacy Rasgon: Okay, that's very helpful. Thank you.
Operator: Please hold on, the next question will come from Vivek Arya of Bank of America. Please go ahead.
Analyst Vivek Arya: Thank you for taking my question. Chen Fuyang, whenever you describe the AI opportunity, you emphasize training workloads. However, it is widely believed that the AI market may come to be dominated by inference workloads, especially with the emergence of these new reasoning models. So if the market focus shifts more toward inference, what impact would that have on your opportunity and market share? Would it push your serviceable available market (SAM) beyond $60 billion to $90 billion? Or would it remain unchanged, just with a different product mix? Or, next year, would a market with a larger share of inference favor GPUs more? Thank you.
President and CEO Chen Fuyang: That's a great question, a very interesting one. By the way, I do often talk about training, but our chip business also addresses inference as a separate product line. That is indeed the case, and that is why I can say the architecture of those chips is quite different from that of training chips. So I should clarify: training and inference together constitute the $60 billion to $90 billion market size. If I wasn't clear about that before, I apologize. It's a combination of both.
That said, a larger portion of the revenue from the serviceable available market we've discussed so far comes from training rather than inference.
Analyst Vivek Arya: Thank you.
Operator: Please hold on, the next question will come from Harsh Kumar of Piper Sandler. Please go ahead.
Analyst Harsh Kumar: Thank you, Broadcom team. Congratulations again on your outstanding performance. I have a brief question. We have been hearing that almost all large clusters of over 100,000 accelerators have related demands. Could you help us understand how customers weigh the choice between a supplier like you, with the best switch ASIC, and a supplier that may have an advantage in compute? Can you talk about how customers think about this, and what factors they ultimately value most when choosing a network interface card (NIC)?
President and CEO Chen Fuyang: Okay, I understand. For hyperscale data center customers, this is indeed the case, and it is largely performance-driven: the performance of connecting, scaling up, and scaling out those AI accelerators (whether XPUs or GPUs) in hyperscale data centers. In most cases, for the hyperscale customers we work with, performance is what matters most when connecting these clusters. What I mean is that if you are striving to get the best performance from the hardware while training and continuously training frontier models, then performance matters more than any other factor. So what they pursue first is proven products, meaning proven hardware; in our case, systems, or subsystems, that have been verified to work properly.
In this context, we often have a significant advantage because, for at least the past decade, networking, switching, and routing have been our specialties (voice unclear). Moreover, the emergence of artificial intelligence has given our engineers more interesting work. But this is fundamentally built on proven technology and experience, and we keep pushing the technology forward, evolving from 800 gigabits per second with our Weirui (voice unclear) technology to 1.6 terabits, and then moving toward 3.2 terabits. That is precisely why we keep increasing our investment and launching new products. We even doubled our production capacity to meet the demands of one hyperscale data center customer, who wanted to build larger clusters while operating at lower bandwidth, but that has not stopped us from developing the next-generation Tomahawk 6 switch. And I must say, we are now even planning Tomahawk 7 and Tomahawk 8, and we are accelerating the R&D pace. By the way, this is largely for those few customers. So we are investing heavily for a handful of customers in the hope of capturing a very large potential market. Even setting other factors aside, this is a significant bet we are making. Thank you, Harsh.
Operator: Thank you. Please hold on, the next question will come from Timothy Arcuri of UBS. Please go ahead.
Analyst Timothy Arcuri: Thank you very much. Chen Fuyang, you mentioned in the past that XPU shipments are expected to grow from about 2 million units last year to about 7 million units during 2027 and 2028. My question is, will these four new customers increase that 7 million unit shipment? I know you have mentioned in the past that the average selling price (ASP) for each XPU will be around $20,000 by then. So it’s clear that the first three customers are part of that 7 million unit shipment. Will these four new partners make that number (7 million units) higher? Or are they just filling in the gaps to reach the 7 million unit shipment? Thank you.
President and CEO Chen Fuyang: Thank you, Tim, for the question. To clarify, I think I made it very clear in my remarks that no, the market we are talking about, when you convert it into shipment volume, only involves the three customers we currently have. The other four, we refer to them as partners. We do not currently consider them customers, so they are not included in our serviceable market.
Analyst Timothy Arcuri: Okay, so they would increase that number. Alright. Thank you, Chen Fuyang.
President and CEO Chen Fuyang: Thank you.
Operator: Please hold on, the next question will come from CJ Muse of Cantor Fitzgerald. Please go ahead.
Analyst CJ Muse: Good afternoon. Thank you for taking my question. I would like to follow up on your earlier comments about optimizing your best hardware with excellent software for hyperscale customers. I'm curious how expanding your product portfolio to target six hyperscale frontier models will benefit you. At first glance, this may involve sharing a lot of information across customers, yet these six customers do want to differentiate themselves. Clearly, all these participants aim for exaflops per unit of capital expenditure and per watt. I wonder how much help you give them in this regard, and where there might be boundaries, such as their wanting to differentiate without sharing some of the work they are doing with you. Thank you.
President and CEO Chen Fuyang: We only provide the fundamental core semiconductor technologies that enable these customers to take what we offer and optimize it for their specific models and the algorithms associated with those models. That's it. That's all we do. So the optimization we perform for each customer is at that level. As I mentioned earlier, we may be dealing with five degrees of freedom and adjusting them, and even with five degrees of freedom, what we can do at that level is limited. How we really optimize ultimately depends on what the partners tell us they want to do, so the information we can gather is limited as well. But this is what we are doing now with the XPU model: translating optimization into performance, which also concerns power consumption. How they balance this is very important; it is not just a cost issue, because power consumption ultimately translates into total cost of ownership. This involves how to design for power, how we balance it against the scale of the cluster, and whether they are using it for training, pre-training, post-training, inference, or testing, each of which has its own characteristics. This is the advantage of adopting XPUs and working closely with them to develop the relevant products. As for your question about China and other aspects, frankly, I have no opinion on that. For us, it is a technical contest.
Analyst CJ Muse: Thank you very much.
Operator: Please hold on, the next question will come from Christopher Rowland of Susquehanna. Please go ahead.
Analyst Christopher Rowland: Hey, thank you very much for the opportunity to ask a question. This question might be for both Chen Fuyang and Kirsten. Since you have a complete connectivity product portfolio, how do you view the expansion opportunities in new greenfield projects, whether in fiber optics, copper cabling, or other areas? What additional benefit will this bring your company? Then, Kirsten, I noticed that operating expenses have increased. Perhaps you could talk about how those operating expenses are being applied to the AI opportunity, and how the two are connected? Thank you very much.
President and CEO Chen Fuyang: Your question involves our product portfolio, which is quite broad. Yes, we have such advantages, and many of the hyperscale data center customers we work with are discussing large-scale expansion plans—almost all are new construction projects. Renovation projects (brownfield projects) are relatively few. Basically, they are all new constructions, all expanding, and what we often do are next-generation projects, which is very exciting. So there are a lot of opportunities.
In terms of deployment, what I mean is that we can use copper-cable solutions, but we see many opportunities when you provide network connectivity over fiber optics. There are many active components involved, including multimode lasers (VCSELs) and edge-emitting lasers for single-mode fiber. We are involved in both. So in terms of expansion, whether scaling up or scaling out, there are many opportunities.
In addition to Ethernet, we have also done a lot of work on other protocols, past and present, such as PCI Express (the high-speed serial computer expansion bus standard), where we are a leader. In terms of architecture for networking and switching, we provide both approaches: a very intelligent switch, like our Jericho series, paired with simple network interface cards, or a very intelligent network card paired with a simple switch.
So yes, we have many opportunities in this area. In summary, all these connectivity products combined account for roughly 20% to 30% of our total AI revenue. Although last quarter they reached almost 40%, that is not the norm. Usually these other product lines still bring in quite considerable revenue, but within the AI business they average close to 30%, while XPU accelerators account for about 70%. If that is what you wanted to understand, I hope this clarifies the relative weight of the two. But we do have a wide range of products in the connectivity and networking area, and together they account for about 30%.
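As a minimal arithmetic sketch of the mix described above, using only figures quoted on this call ($4.1 billion of first-quarter AI revenue; networking near 40% last quarter versus the roughly 30% described as normal):

```python
# Illustrative revenue-mix arithmetic from figures quoted on the call.
q1_ai_revenue_b = 4.1           # Q1 FY2025 AI revenue, in $ billions
networking_share_q1 = 0.40      # networking "almost reached 40%" last quarter
networking_share_normal = 0.30  # the ~30% described as the norm

networking_q1_b = q1_ai_revenue_b * networking_share_q1
xpu_q1_b = q1_ai_revenue_b - networking_q1_b

print(f"Q1 networking: ~${networking_q1_b:.2f}B, XPU: ~${xpu_q1_b:.2f}B")
print(f"At a normalized 70/30 split, networking: ~${q1_ai_revenue_b * networking_share_normal:.2f}B")
```

At the elevated 40% share, networking contributed roughly $1.6 billion of the quarter's AI revenue; at the normalized 30% share it would be closer to $1.2 billion, which is the "temporary fluctuation" Tan refers to later in the call.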
Analyst Christopher Rowland: Thank you very much, Chen Fuyang.
CFO Kirsten Spears: Then in terms of R&D, as I outlined earlier, from a consolidated perspective our R&D expenditure in the first quarter was $1.4 billion, and I mentioned that it will increase in the second quarter. Chen Fuyang clearly pointed out in his remarks the two areas we are focusing on. As a company, we emphasize R&D across all product lines so that we can stay ahead in the competition for next-generation products. But he did mention that we are focused on the industry's first 2-nanometer AI XPU in 3D packaging. This is one aspect he mentioned in his remarks and one of the areas we are focusing on.
He also mentioned that we have doubled the capacity of the existing Tomahawk 5 switch so that our AI customers can scale clusters up to one million XPUs over Ethernet. So, I mean, this is a key focus for the company.
Analyst Christopher Rowland: Okay. Thank you very much, Kirsten.
Operator: Please hold on, the next question will come from Vijay Rakesh of Mizuho Securities. Please go ahead.
Analyst Vijay Rakesh: Okay. Hi, Chen Fuyang. Thank you. I have a brief question regarding the networking business. I would like to know how much the networking business has grown quarter-over-quarter in the AI segment? Also, do you have any thoughts on future mergers and acquisitions? I've seen a lot of headlines about Intel's product throughput and such. Thank you.
President and CEO Chen Fuyang: Okay. In terms of the networking business, as you mentioned, there was some growth in the first quarter, but I don't think a 60/40 split between compute and networking is the norm. I think the normal split is closer to 70/30, with networking at most 30%. So who knows; we feel the second quarter will continue this way, but in my view this is just a temporary fluctuation, and the normal ratio is 70/30 if you look at it over a six-month or one-year period. That's the answer. As for mergers and acquisitions: no, I'm too busy. We are currently swamped with the AI and VMware businesses and are not considering M&A at this time.
Analyst Vijay Rakesh: Thank you, Chen Fuyang.
Operator: Thank you. The Q&A session has ended. Now I would like to hand the call back to Investor Relations Director Liu Zhi for closing remarks.
Investor Relations Director Liu Zhi: Thank you, Cherie. Broadcom currently plans to announce its fiscal year 2025 second-quarter financial results after the market closes on Thursday, June 5, 2025. A public webcast of Broadcom's earnings conference call will follow at 2 PM Pacific Time. This concludes today's earnings conference call. Thank you all for participating. Cherie, you may end the call.
Operator: Thank you. Ladies and gentlemen, thank you for your participation. Today's agenda has concluded. You may now disconnect your lines.