
Broadcom Conference Call: Secured New Orders for AI Chips Worth $10 Billion, Significantly Raised Growth Expectations for 2026

Broadcom's conference call announced that it has won a major new client with an AI chip order exceeding $10 billion, and the company significantly raised growth expectations for its AI business in 2026, anticipating a growth rate above that of 2025. The CEO will remain in office until 2030, providing a "sense of security" for the company's development. Although the AI and VMware software businesses are performing strongly, the recovery of the traditional non-AI semiconductor business remains slow, showing a "U-shaped" trend.
Broadcom is consolidating its core position in the customized AI chip market, significantly raising its key long-term growth forecast due to a new order from a newly acquired major client worth over $10 billion, indicating that its already booming AI business will accelerate further.
According to news from the Chasing Wind Trading Desk, on September 4 local time, during the latest earnings call, the company's President and CEO Hock Tan announced two major pieces of news: First, he will continue to lead the company at least until 2030; second, the company has transformed a potential client into its fourth official customized AI accelerator (XPU) client, securing a production order worth over $10 billion.
An article from Wall Street Insight mentioned that this $10 billion order client is reported to be OpenAI.
This massive new order has pushed the company's total backlog to a record $110 billion. Coupled with the continued strong demand from three existing hyperscale clients, Broadcom announced that its AI revenue growth rate for fiscal year 2026 will "significantly improve" and exceed the growth level of fiscal year 2025.
Driven by strong performance in its AI business and the VMware software division, Broadcom reported record third-quarter revenue of $16 billion and issued fourth-quarter guidance above market expectations, forecasting revenue of approximately $17.4 billion. Its AI revenue outlook for fiscal year 2026 has also improved significantly from the previous quarter's forecast. However, demand in the non-AI semiconductor business is recovering slowly and is expected to follow a "U-shaped" recovery path.
Key points from the call:
Upgraded long-term guidance: The AI revenue outlook for fiscal year 2026 has significantly improved compared to the previous quarter's forecast, with strong shipments expected starting in 2026. The growth rate for 2026 is anticipated to be higher than the growth rate seen in 2025.
Secured new client order: The company confirmed it has acquired a fourth major client for customized AI chips (XPU), with an order exceeding $10 billion, set to begin deliveries in the second half of fiscal year 2026.
CEO tenure extension: President and CEO Hock Tan announced he will remain in position until 2030, providing leadership stability for the company during this critical growth period.
Strong AI business: The AI semiconductor business continues to show strong growth momentum, achieving growth for ten consecutive quarters. In the third quarter, this business generated $5.2 billion in revenue, a year-on-year increase of 63%. The company expects fourth-quarter AI semiconductor revenue to reach $6.2 billion, a year-on-year increase of 66%.
Slow recovery in traditional business: The demand for non-AI semiconductor business is recovering slowly, showing a "U-shaped" recovery trend, with meaningful recovery expected only in the latter half of 2026.
VMware Cloud Foundation 9.0 release: Broadcom released VMware Cloud Foundation 9.0, aimed at providing enterprises with a comprehensive private cloud platform for building and managing on-premises and cloud-based AI workloads.
AI Network Solutions: Broadcom continues to launch leading Ethernet solutions, including an open Ethernet solution in collaboration with OpenAI, a 102 terabits per second switch, and Jericho 4 Ethernet optical routers to address the networking challenges of large-scale AI clusters.
New $10 Billion Client, AI Growth Expectations Significantly Upgraded
The core driver behind the upward revision of the performance outlook comes from the confirmation of a new client. Hock Tan revealed during the conference call that a previously potential client has placed a production order with Broadcom, becoming the company's fourth major XPU client. Unlike the "GPUs" designed by suppliers such as NVIDIA and AMD, Broadcom's custom AI chips are referred to as "XPUs."
"We have secured over $10 billion in AI rack orders based on our XPUs," Hock Tan stated. He added that this new "direct and substantial demand" combined with the growing orders from the existing three clients "has indeed changed our outlook for the fiscal year 2026."
According to Hock Tan, this $10 billion order is expected to begin delivery in the second half of the fiscal year 2026, possibly in the third quarter. In addition to the contribution from the new client, Broadcom's share among its original three XPU clients is also "gradually" increasing, as these clients are increasingly turning to customized solutions with each generation of products on their respective paths to computing self-sufficiency. As a result, Hock Tan expects that by 2026, the share of XPUs in AI revenue will continue to rise.
CEO Hock Tan to Remain Until 2030, Stabilizing Market Confidence
While discussing the strong business outlook, Hock Tan announced a crucial piece of news for market confidence: he has reached an agreement with the board to continue serving as CEO at least until 2030.
"For Broadcom, this is an exciting time, and I am very passionate about continuing to create value for our shareholders," Hock Tan said. As the company fully seizes the historic opportunities brought by AI, this statement alleviates external uncertainties regarding potential leadership changes, providing critical stability for the execution of the company's long-term strategy.
Traditional Semiconductor Business Recovers Slowly, Showing a "U-Shaped" Trend
In stark contrast to the booming AI business, Broadcom's non-AI semiconductor business remains sluggish. In the third fiscal quarter, this segment's revenue was $4 billion, flat compared to the previous quarter, indicating weak demand recovery.
Hock Tan described this recovery as "U-shaped," rather than the "V-shaped" rebound expected by the market. He pointed out that although the guidance for the fourth fiscal quarter indicates that the non-AI semiconductor business is expected to achieve low double-digit sequential growth, this is primarily driven by seasonal factors in the wireless and server storage businesses. Broadband is the only area showing "sustained strong growth."
"I expect the non-AI business to experience a recovery more aligned with a U-shape, and perhaps we won't start seeing any meaningful recovery until the middle to late 2026," Hock Tan admitted.
Ethernet Dominates AI Networks, New Products Address Scalability Challenges
As the scale of AI clusters surpasses 100,000 nodes, networking is becoming a bottleneck. Hock Tan emphasized that Broadcom, with its decades of experience in the Ethernet field, is well-positioned to address this challenge. The company recently launched new generation switches and routers such as Tomahawk 6 and Jericho 4, aimed at supporting "scale-across" hyperscale clusters across data centers by reducing network layers and lowering latency.
Hock Tan expressed great confidence in open Ethernet standards, believing they are the inevitable choice for addressing AI network challenges. "Ethernet is the way forward," he stated, "there's no need to create new protocols that you now have to get people to accept." He believes that the openness of Ethernet and its mature ecosystem are significant advantages over proprietary protocols.
VMware Integration Shows Results, Software Business Steady Growth
Since acquiring VMware, Broadcom's software business integration has continued to make progress. In the third fiscal quarter, the infrastructure software division achieved revenue of $6.8 billion, a year-on-year increase of 17%.
An important milestone was the release of VMware Cloud Foundation (VCF) version 9.0, a fully integrated cloud platform designed to provide enterprises with a true alternative to public cloud. Hock Tan stated that the current focus is on successfully deploying and operating the platform for the top 10,000 large customers who have purchased VCF licenses, and on that basis, selling advanced services such as security and disaster recovery.
The transcript of Broadcom's Q3 FY2025 earnings call is as follows, translated by AI tools:
Event Date: September 4, 2025
Company Name: Broadcom
Event Description: Q3 FY2025 Earnings Call
Source: Broadcom
Operator:
Welcome to Broadcom's Q3 FY2025 financial performance earnings call. Now, for the opening remarks and introductions, I would like to turn the call over to Broadcom's Head of Investor Relations, Ji Yoo. Please go ahead.
Head of Investor Relations Ji Yoo:
Thank you, Shree. Good afternoon, everyone. Joining me on the call today are President and CEO Hock Tan, Chief Financial Officer and Chief Accounting Officer Kirsten Spears, and President of the Semiconductor Solutions Division Charlie Kawwas.
Broadcom has released a press release and financial tables after the market closed, detailing our financial performance for Q3 FY2025. If you did not receive a copy, you can obtain the relevant information from the Investor Relations section of Broadcom's website, broadcom.com. This call is being webcast live, and an audio replay of the meeting will be available for one year through the Investor Relations section of Broadcom's website.
In the prepared remarks, Hock and Kirsten will provide a detailed overview of our performance for the third quarter of fiscal year 2025, guidance for the fourth quarter of fiscal year 2025, and comments on the business environment. After the prepared remarks, we will answer questions. Please refer to our press release today and the documents recently submitted to the U.S. Securities and Exchange Commission (SEC) for specific risk factors that could cause our actual performance to differ materially from the forward-looking statements made during this conference call.
In addition to U.S. Generally Accepted Accounting Principles (US GAAP) reporting, Broadcom also reports certain financial metrics on a non-GAAP basis. The reconciliation table between GAAP and non-GAAP metrics is included in the tables attached to today's press release. Comments during today's conference call will primarily reference our non-GAAP financial performance.
I will now turn the call over to Hock.
President and CEO Hock E. Tan:
Thank you, Ji. And thank you all for joining our call today.
In our third quarter of fiscal year 2025, total revenue reached a record $16 billion, an increase of 22% year-over-year. The revenue growth was driven by stronger-than-expected AI semiconductor strength and our continued growth in VMware. Adjusted consolidated EBITDA (Earnings Before Interest, Taxes, Depreciation, and Amortization) for the third quarter reached a record $10.7 billion, an increase of 30% year-over-year.
Now, looking beyond our quarterly report, due to strong AI demand, order volumes are extremely large. Our company's current consolidated backlog has reached a record $110 billion.
Third-quarter semiconductor revenue was $9.2 billion, with year-over-year growth accelerating to 26%. This accelerated growth was driven by $5.2 billion in AI semiconductor revenue, which grew 63% year-over-year and has continued its strong growth trajectory for the tenth consecutive quarter.
Now, let me provide you with more information about our XPU business, which accelerated growth this quarter and accounted for 65% of our AI revenue. Our three customers have a growing demand for custom AI accelerators as they are progressing towards computing self-sufficiency at their own pace. Moreover, we are gradually gaining more share from these customers.
Besides these three customers, as we mentioned earlier, we have been collaborating with other potential customers on their own AI accelerators. Last quarter, one of these prospects placed a production order with Broadcom, so we have qualified them as an XPU customer, and in fact, we have received over $10 billion in orders for AI racks based on our XPUs. Reflecting this, we now expect a significantly improved outlook for AI revenue in fiscal year 2026 compared to what we indicated last quarter.
Speaking of AI networks, demand remains strong, as computing clusters must grow larger with the continuous evolution of large language models (LLMs) in intelligence, making networks critical. The network is the computer, and our customers are facing challenges as they expand to clusters with over 100,000 computing nodes.
For example, we all know that when you try to create large bandwidth to share memory among multiple GPUs or XPUs within a single rack, vertical scaling is a daunting challenge. Today's AI accelerators use proprietary NVLink, which can only vertically scale 72 GPUs at a bandwidth of 28.8 terabits per second. On the other hand, earlier this year, we partnered with OpenAI to launch an open Ethernet-based solution that can vertically scale 512 computing nodes for customers using XPUs.
Let's talk about cross-rack horizontal scaling (scale-out). Currently, existing architectures using 51.2 terabits per second require three layers of network switches. In June, we released a 102 terabits per second Ethernet switch, Tomahawk 6, that flattens the network to two layers, reducing latency and significantly decreasing power consumption.
When you scale out to clusters beyond a single data center, you now need to scale computing across data centers. Over the past two years, we have deployed our Jericho 3 Ethernet routers to hyperscale customers to achieve this. Today, we are releasing the next-generation Jericho 4 Ethernet architecture router, providing 51.2 terabits per second of deep bandwidth and intelligent congestion control to handle clusters of over 200,000 computing nodes across multiple data centers.
We know that the biggest challenge in deploying larger generative AI computing clusters will be the network. And the technologies that Broadcom has developed for Ethernet networks over the past 20 years are perfectly suited to the challenges of vertical scaling, horizontal scaling, and scale across in generative AI.
Speaking of our forecasts, as I mentioned earlier, we continue to make steady progress in growing AI revenue. For the fourth quarter of 2025, we forecast AI semiconductor revenue to be approximately $6.2 billion, a year-over-year increase of 66%.
Now, let's talk about non-AI semiconductors. The demand recovery continues to be slow, with third-quarter revenue at $4 billion, flat quarter-over-quarter. While the broadband business showed strong quarter-over-quarter growth, enterprise networking and server storage declined quarter-over-quarter. Wireless and industrial businesses remained flat quarter-over-quarter, as we expected.
In contrast, in the fourth quarter, driven by seasonal factors, we forecast non-AI semiconductor revenue to grow low double digits quarter-over-quarter, reaching approximately $4.6 billion. Broadband, server storage, and wireless businesses are expected to improve, while enterprise networking is still expected to decline year-over-year.
Now let me talk about our infrastructure software division. Third-quarter infrastructure software revenue was $6.8 billion, a year-over-year increase of 17%, exceeding our outlook of $6.7 billion, as bookings remained strong this quarter. In fact, we booked a total contract value of over $8.4 billion in the third quarter.
But here's the point that excites me the most. After two years of engineering development by over 5,000 developers, the commitment we made when we acquired VMware has been fulfilled. We released VMware Cloud Foundation version 9.0, a fully integrated cloud platform that enterprise customers can deploy on-premises or migrate to the cloud. It enables businesses to run any application workload, including AI workloads, on virtual machines and modern containers. This provides a true public cloud alternative.
In the fourth quarter, we expect infrastructure software revenue to be approximately $6.7 billion, a year-over-year increase of 15%.
In summary, the continued strength of AI and VMware will drive our guidance for consolidated revenue in the fourth quarter to approximately $17.4 billion, a year-over-year increase of 24%. We expect adjusted EBITDA in the fourth quarter to reach 67% of revenue.
With that, let me hand the call over to Kirsten.
Chief Financial Officer and Chief Accounting Officer Kirsten Spears:
Thank you, Hock. Now let me provide more details about our financial performance in the third quarter.
Consolidated revenue for the quarter reached a record $16 billion, a 22% increase year-over-year. The gross margin for the quarter was 78.4% of revenue, exceeding our initial guidance due to increased software revenue and improvements in the semiconductor product mix. Consolidated operating expenses were $2 billion, of which $1.5 billion was R&D expenses.
Operating income for the third quarter reached a record $10.5 billion, a 32% increase year-over-year. On a sequential basis, although the gross margin declined by 100 basis points due to revenue mix, the operating margin increased by 20 basis points sequentially to 65.5% due to operating leverage. Adjusted EBITDA was $10.7 billion, accounting for 67% of revenue, exceeding our guidance of 66%. This figure does not include $142 million in depreciation.
Now, looking back at the income statements of our two segments, starting with semiconductors. Our semiconductor solutions segment revenue was $9.2 billion, accelerating to a 26% year-over-year growth driven by AI. Semiconductor revenue accounted for 57% of total revenue for the quarter. The gross margin for our semiconductor solutions segment was approximately 67%, a year-over-year decline of 30 basis points due to product mix. Operating expenses increased by 9% year-over-year to $961 million, due to increased R&D investments in cutting-edge AI semiconductors. The operating margin for semiconductors was 57%, up 130 basis points year-over-year and flat sequentially.
Now let's talk about infrastructure software. Infrastructure software revenue was $6.8 billion, a 17% year-over-year increase, accounting for 43% of revenue. The gross margin for infrastructure software this quarter was 93%, compared to 90% a year ago. Operating expenses for the quarter were $1.1 billion, resulting in an operating margin for infrastructure software of approximately 77%. In comparison, the operating margin a year ago was 67%, reflecting the completion of VMware integration.
Next, let's discuss cash flow. Free cash flow for the quarter was $7 billion, accounting for 44% of revenue. We spent $142 million on capital expenditures. Days sales outstanding (DSO) for the third quarter was 37 days, compared to 32 days a year ago. Our inventory at the end of the third quarter was $2.2 billion, an 8% sequential increase, in anticipation of revenue growth in the next quarter. Our days inventory outstanding (DIO) for the third quarter was 66 days, compared to 69 days in the second quarter, as we continue to maintain strict inventory management across the ecosystem.
At the end of the third quarter, we had $10.7 billion in cash and $66.3 billion in total debt principal. The weighted average coupon rate and maturity of our $65.8 billion fixed-rate debt are 3.9% and 6.9 years, respectively. The weighted average rate and maturity of our $500 million floating-rate debt are 4.7% and 0.2 years, respectively.
Speaking of capital allocation, in the third quarter, we paid $2.8 billion in cash dividends to shareholders, based on a quarterly cash dividend of $0.59 per common share. In the fourth quarter, we expect the non-GAAP diluted share count to be approximately 4.97 billion shares, excluding any potential impact from stock buybacks.
Now looking at the guidance. Our guidance for the fourth quarter is consolidated revenue of $17.4 billion, a year-over-year increase of 24%. We forecast semiconductor revenue of approximately $10.7 billion, a year-over-year increase of 30%. Among them, we expect AI semiconductor revenue in the fourth quarter to be $6.2 billion, a year-over-year increase of 66%. We expect infrastructure software revenue to be approximately $6.7 billion, a year-over-year increase of 15%.
For your modeling convenience, we expect the consolidated gross margin in the fourth quarter to decline by approximately 70 basis points quarter-over-quarter, primarily reflecting the increase in the proportion of XPU and wireless revenue. Just a reminder, the full-year consolidated gross margin will be influenced by the revenue mix of infrastructure software and semiconductors, as well as the internal product mix of semiconductors.
We expect the adjusted EBITDA for the fourth quarter to be 67%. We expect the non-GAAP tax rate for the fourth quarter and fiscal year 2025 to remain at 14%.
I will now hand the call back to Hock to share some more exciting news.
President and CEO Hock E. Tan:
I'm not as excited as Kirsten, but I am excited. I think before we enter the Q&A session, I should share some recent news. The board and I have agreed that I will continue to serve as CEO of Broadcom at least until 2030. This is an exciting time for Broadcom, and I am very eager to continue driving value for our shareholders.
Operator, please begin the Q&A session.
Q&A Session
Operator:
Thank you. (Operator instructions)
Our first question comes from Ross Seymore of Deutsche Bank. Your line is open.
Analyst Ross Seymore:
Hi, everyone. Thank you for taking my question. Hock, thank you for staying a few more years. I just wanted to talk about the AI business, particularly XPU. When you say the growth rate will significantly exceed what you indicated last quarter, what has changed? Is it just that impressive potential customers have turned into defined customers? Is it the $10 billion backlog you mentioned? Or is the demand from the existing three customers stronger? Any details would be helpful.
President and CEO Hock E. Tan:
I want both, Ross, but to a large extent, it’s the new addition to our list (of clients) now. We will start shipping in large quantities from early 2026. So, the increase in demand from the existing three clients (we are making steady progress on this), combined with the addition of the fourth client and its direct and quite substantial demand, has indeed changed our outlook on what the start of 2026 will look like.
Analyst Ross Seymore:
Thank you.
President and CEO Hock E. Tan:
Thank you.
Operator:
Please hold for the next question. The next question comes from Harlan Sur of JP Morgan. Your line is open.
Analyst Harlan Sur:
Hi, good afternoon. Congratulations on an outstanding quarter and strong free cash flow. I know everyone will be asking a lot of questions about AI, Hock. I want to ask about the non-AI semiconductor business. If I look at your guidance for the fourth quarter, it seems that if you hit the midpoint of the fourth quarter guidance, the non-AI business will decline year-over-year by about 7%-8% in fiscal year 2025. The good news is that the trend of negative year-over-year growth has been improving throughout this year. In fact, I think you will achieve year-over-year positive growth in the fourth quarter. You described it as relatively close to the bottom of the cycle, with a slow recovery.
However, we have seen some green shoots, right? Broadband, server storage, enterprise networking: you are still pushing the DOCSIS 4 upgrades for broadband cable, there are next-generation PON upgrades ahead in China and the U.S., and enterprise spending on network upgrades is accelerating. So, from the recent bottom of the cycle, how should we think about the magnitude of the cyclical recovery? Given your 30 to 40-week delivery cycle, are you seeing continued order improvement in the non-AI space that would point to a sustained cyclical recovery in the next fiscal year?
President and CEO Hock E. Tan:
Well, if you look at that non-AI space, you are right: on a year-over-year basis, the fourth-quarter guidance is actually up slightly versus the same period last year, by 1% or 2%. There's really nothing to write home about at this point. The biggest issue is that there are both puts and takes. Setting aside year-over-year comparisons and looking quarter-over-quarter, we are starting to see some seasonality in areas like wireless, and even now in server storage. So far, these factors seem to have offset each other.
The only consistently rising trend we have seen over the past three or four quarters is broadband. No other area has maintained an upward trend from a cyclical perspective so far. As a whole, as you pointed out, Harlan, these businesses haven't gotten worse, but they also haven't shown the V-shaped recovery we hope and expect to see in a semiconductor cycle. The only thing that currently gives us some hope is broadband, which is recovering very strongly. But it was also the business hit hardest by the sharp decline in early '24 and '25. So, again, we should be cautious about this.
But to give you the best answer, the recovery of non-AI semiconductors is slow. As I mentioned, the best description for the fourth quarter year-on-year might be low single-digit growth. So I expect more of a U-shaped recovery for non-AI, perhaps by mid-'26 or late '26, we will start to see meaningful recovery. But for now, it is still unclear.
Analyst Harlan Sur:
Yes. Are you starting to see this in order trends and the order book, just because your delivery cycle is around 40 weeks, right?
President and CEO Hock E. Tan:
We have been misled before, but we are indeed seeing it. Bookings are up, with a year-on-year increase of over 20%. While not as much as AI bookings, 23% is still quite good, right?
Operator:
Thank you. Please hold for the next question. The next question comes from Vivek Arya of Bank of America. Your line is open.
Analyst Vivek Arya:
Thank you for answering my question, and I wish you all the best in your next term. My question is about— can you help us quantify the new AI guidance for fiscal year 2026? Because I think in the last conference call you mentioned that fiscal year 2026 could grow at a rate of 60%. So what is the updated number? Is it 60% plus the $10 billion you mentioned? Related to this, do you expect the mix of custom chips versus networking products to remain roughly at the same level as the past year, or will it lean more towards custom chips? Any quantifiable information regarding the mix of networking and custom chips for fiscal year 2026 would be very helpful.
President and CEO Hock E. Tan:
Okay, let me first address the first part. If I may: during our last quarterly earnings report, I hinted that the growth trend for '26 would mirror the trend of '25, which implies year-on-year growth of 50%-60%. That's really all I said; it comes out as 50%-60% because that's what '25 is. To put it another way, perhaps more accurately, we are seeing the growth rate accelerating, not just stabilizing at 50%-60%. We expect, and see, that the growth rate in 2026 will be higher than the growth rate we saw in 2025.
I know you want me to give you a number, but you know, we shouldn’t provide you with a forecast for '26, but the best way to describe it is that it will be a quite significant improvement.
Analyst Vivek Arya:
What about networking and custom chips?
President and CEO Hock E. Tan:
Good question. Thank you for the reminder. As we see it, the main driver of this growth will be XPUs. As for why, repeating what I said: we continue to gain share from our initial three customers. They are on a journey, and with each new generation of products they turn more towards XPUs, so we are gaining share from these three customers. We are now also benefiting from the addition of a fourth, very important customer. This combination will mean more XPU.
As I said, as we gain more business with these four customers, we will also win networking business with them, but networking revenue from customers outside these four will be diluted into a smaller share. So I actually expect that, entering '26, the percentage of networking in the total AI pool will decline.
Analyst Vivek Arya:
Thank you.
Operator:
Please hold for the next question. The next question comes from Stacy Rasgon of Bernstein Research. Your line is open.
Analyst Stacy Rasgon:
Hi, everyone. Thank you for answering my question. I was wondering if you could help me break down this $110 billion backlog. I didn't mishear that number, did I? Can you give us an overview of its composition? For example, what is the time frame it covers? Also, how much of this $110 billion is AI vs non-AI vs software?
President and CEO Hock E. Tan:
Well, I think, Stacy, we typically do not break down the backlog. I give a total number to give you a sense of how strong our business is overall, and that strength is primarily driven by AI growth; software continues to increase on a stable basis, and non-AI, as I pointed out, has grown double digits, but that is nothing compared to the very strong growth of AI. Perhaps to give you a sense: at least 50% of it is semiconductor.
Analyst Stacy Rasgon:
Okay. So it can be said that within that semiconductor portion, AI will far exceed non-AI.
President and CEO Hock E. Tan: Correct.
Analyst Stacy Rasgon:
Okay, got it. That’s very helpful. Thank you.
Operator:
Please hold for the next question. The next question comes from Ben Reitzes of Melius Research. Your line is open.
Analyst Ben Reitzes:
Hey everyone, thank you very much. Hock, congratulations on guiding AI revenue to grow significantly above 60% next year. So I want to be a bit greedy and ask you about maybe 2027 (fiscal year 2027) and the situation with the other three customers. Besides these four customers, how is the dialogue progressing with other customers? In the past, you mentioned there were seven, and now we have added the fourth into production. So there are still three. Have you heard anything from other customers, how is the trend with the other three, and perhaps beyond '26 into '27 and beyond, how do you think this momentum will form? Thank you very much.
President and CEO Hock E. Tan:
You are indeed very greedy, and you are definitely getting ahead of me. Thank you. But when it comes to qualitative color on prospects, to be honest, I'd rather not provide that, because sometimes our timeline to production can be unexpectedly fast, and it can also slip. So I would prefer not to give you more information about potential customers, other than to tell you that these potential customers are real, that they continue to be very closely engaged in developing their respective XPUs, and that each intends to go into mass production like the four custom customers we already have today.
Analyst Ben Reitzes:
Yes, do you still believe that the million-unit target set for these seven is still intact?
President and CEO Hock E. Tan:
That's for those three—sorry, now it's four. That's just for the customers only; on prospects, no comment, I'm in no position to judge. But for our three, now four, customers, yes.
Analyst Ben Reitzes:
Okay. Thank you very much. Congratulations.
Operator:
Please hold for the next question. The next question comes from Jim Schneider of Goldman Sachs. Your line is open.
Analyst Jim Schneider:
Good afternoon, thank you for taking my question. Hock, I was wondering if you could give us a bit more color—not necessarily on the prospects remaining in your pipeline, but on how you view the universe of potential customers beyond the seven already identified customers and prospects. Do you still believe there are additional candidates worth building custom chips for? I know you have been relatively cautious and selective regarding the number of customers in this space, the volume they can provide, and the opportunities you are interested in. So perhaps frame for us the additional prospects you see beyond the seven. Thank you.
President and CEO Hock E. Tan:
This is a very good question—let me answer it on a broader basis. Well, as I mentioned before, perhaps to repeat a bit, we see this market as two large segments. One is the vendors developing their own LLMs; the rest of the market, collectively, is enterprises. That enterprise market runs AI workloads, whether on-prem or through GPUs or XPUs consumed in some form as a service. Honestly, we are not targeting that market. We are not targeting it because it is a market that is difficult for us to address, and we are not set up to address it. Instead, we are targeting the LLM market.
As I have said multiple times, this is a very narrow market, with only a few players driving frontier models on a very accelerated path toward superintelligence—or, in others' words, the pursuit of happiness, but you understand what I mean. Those players need to invest a lot of capital up front for training—in my view, training on ever-larger clusters with ever more powerful accelerators. But these companies also have to be accountable to shareholders, or at least generate cash flow that can sustain their growth path, so they are also starting to invest heavily in inference to monetize their models. These are the participants we collaborate with.
These are the players spending a lot of money on significant computing capacity, and there are very few of them. As I have pointed out, I have identified seven, four of which are now our customers, and three continue to be prospects we are engaged with. We are very selective—cautious, I should say—in determining who qualifies. I have pointed that out: they are building a platform, or have a platform, and are investing heavily in leading LLM models. I think that's about it.
We may also see one more as a prospect. But again, we are very thoughtful and cautious even in this qualification process. What can be said for sure is that we have seven. For now, that's basically what we have.
Analyst Jim Schneider:
Thank you.
Operator:
Please hold for the next question. The next question comes from Tom O'Malley of Barclays. Your line is open.
Analyst Thomas O'Malley:
Hi, everyone. Thank you for taking my question, and congratulations on a very strong performance. I wanted to ask about your comments on Jericho 4. NVIDIA has talked about its XGS switch, and now you're talking about scale-across; you're discussing Jericho 4. It sounds like this market is really starting to develop. Perhaps you could talk about when you expect revenue there to see a substantial uplift, and why it's important to start thinking about those types of switches as we shift more towards inference? Thank you, Hock.
President and CEO Hock E. Tan:
Very good. Well, thank you for noticing that. Yes, scale-across is the new term now, right, as distinct from scale-up (within the rack, in-rack computing) and scale-out (across racks, but within the data center). But now you reach cluster scale—I'm not 100% sure where the boundary is, but say above 100,000 GPUs or XPUs—and in many cases, due to power constraints, you wouldn't place more than 100,000 such XPUs within a single data center footprint. Power may not be readily available, and land may not be either. So what we're seeing is that many of our customers are now creating multiple data center sites that are close together—not far apart, within a 100-kilometer range. That's still a fair distance, but you can then place homogeneous XPUs or GPUs across multiple sites (three or four) and network them together so they actually operate like a single cluster. That's the coolest part.
And this technology, because of the distances involved, requires deep buffering and very intelligent congestion control—technology that telecom companies like AT&T and Verizon have used for network routing for years, just for even trickier workloads, but the principle is the same. Over the past two years, we have been shipping Jericho 3 to some hyperscale customers to address this cluster scale and the bandwidth expansion needed for AI training. We are now releasing Jericho 4, at 51.2 terabits per second, to handle more bandwidth, but it uses the same technology that we have tested and validated over the past 10, 20 years—nothing new. There's no need to create something new for this. It runs on Ethernet, very mature, very stable. As I said, over the past two years we have been selling Jericho 3 to several hyperscale customers.
Operator:
Please hold for the next question. The next question comes from Carl Ackerman of BNP Paribas. Your line is open.
Analyst Karl Ackerman:
Yes, thank you. Hock, have you fully transitioned your top 10,000 accounts from vSphere to the full VMware Cloud Foundation (VCF) virtualization stack? I ask because I saw that 87% of accounts had adopted it last quarter, which is certainly a significant increase compared to the less than 10% of customers who purchased the entire suite before the transaction. And I guess, as you answer this question, how do you see the interest in adopting VCF among the longer tail of enterprise customers? Also, as these customers adopt VMware, do you see tangible cross-selling benefits in your commercial semiconductor, storage, and networking businesses? Thank you.
President and CEO Hock E. Tan:
Okay. So to answer the first part of your question, yes, almost over 90% have already purchased VCF. Now I like—I'm being very cautious with my wording. Because we have sold it to them, and they have purchased the licenses to deploy it, but that does not mean they have fully deployed it. This leads us to another part of our work, which is to take these 10,000 customers or a large portion of them who have accepted—their vision of building a private cloud locally—and work with them to enable them to successfully deploy and run it on their local infrastructure. This is the hard work we see happening over the next two years.
As we do this, we see VCF expanding within their IT footprint, with the private cloud running in their data centers. This is a key part of it, and we see it continuing. This is the second phase of my VMware story. The first phase was convincing people to transition from perpetual licenses to purchasing VCF. The second phase now is to get them to deploy VCF in a way that truly creates the private cloud value they seek on-prem, in their IT data centers. That is what is happening.
And—this will continue for quite some time, because on this basis, we will start selling premium services, security, disaster recovery, and even running AI workloads on top of it. All of this is very exciting.
Your second question is whether this lets me sell more hardware. No—that is completely independent. In fact, as they virtualize their data centers, we consciously accept the fact that we are commoditizing the underlying data center hardware: commoditizing servers, commoditizing storage, even commoditizing networking. That's okay. By commoditizing it, we actually lower enterprises' investment costs in data center hardware.
Now, beyond the top 10,000, have we seen a lot of success? We have seen some. But again, there are two reasons we do not expect it to be as successful. One is that the TCO (total cost of ownership) value they derive will be much smaller; but more importantly, not only does deployment require skills (they can get services and our own help for that), the skills required to operate it on an ongoing basis may not be something they can afford. We will wait and see. This is an area where we are still learning, and it will be interesting to watch. Although VMware has 300,000 customers, we believe the top 10,000 are where deploying private clouds with VCF is very meaningful and can derive a lot of value.
We are now observing whether the next 20,000, 30,000 medium-sized companies see it this way as well. Stay tuned. I will let you know.
Analyst Karl Ackerman:
Very clear. Thank you.
Operator:
Please hold for the next question. The next question comes from C.J. Muse of Cantor Fitzgerald. Your line is open.
Analyst C.J. Muse:
Yes, good afternoon. Thank you for answering this question. I would like to focus on the gross margin. I understand the guidance is down 70 basis points, particularly with the sequential decline in software revenue and the increased contributions from wireless and XPUs. But to reach that 77-point something (77%+), I either have to model semiconductor gross margins flat (which I thought would be lower), or software gross margins reaching 95%, up 200 basis points. So could you help me better understand the moving parts here, how it only allows for a 70 basis point decline?
Chief Financial Officer and Chief Accounting Officer Kirsten Spears:
Yes. I mean, TPUs (likely a slip for XPUs) will increase, and wireless will also increase; as I mentioned on the call, our software revenue will also increase a bit. That's what we guided, yes. Wireless is typically our heaviest quarter of the year, right? So you have wireless and XPUs, which typically carry lower gross margins, and then our software revenue is increasing.
Operator:
Please hold for the next question. The next question comes from Jo Moore (Joseph Moore) of Morgan Stanley. Your line is open.
Analyst Joseph Moore:
Okay. Thank you. Regarding the fourth customer, I believe you mentioned in the past that potential customers 4 and 5 are more like hyperscale enterprises, while 6 and 7 are more like LLM manufacturers themselves. Could you provide us with some information, if possible, to help us categorize them? If it's inconvenient, that's fine too. And then regarding that $10 billion order, could you give us a timeframe? Thank you.
President and CEO Hock E. Tan:
Okay. Yes—no, ultimately, all seven are doing LLMs. Not all of them currently have what we call a massive platform, but you can imagine that eventually they will all have or create a platform. So it's hard to distinguish between the two. But back to your question on the $10 billion delivery: it will likely fall in a quarter in the second half of our fiscal year 2026. More precisely, it will likely be in the third quarter of our fiscal year 2026.
Analyst Joseph Moore:
Okay—does it start in the third quarter, or... how long does it take to ship the $10 billion order?
President and CEO Hock E. Tan:
Our (delivery) ends in the third quarter.
Analyst Joseph Moore:
Okay. Okay. Thank you.
Operator:
Please hold for the next question. The next question comes from Joshua Buchalter of TD Cowen. Your line is open.
Analyst Joshua Buchalter:
Hey everyone. Thanks for taking my question, and congratulations on the results. I hope you can comment on the momentum of your first scale-up Ethernet solution and how it compares with the UALink and PCIe solutions. How significant is having the lower-latency Tomahawk Ultra product? And considering your AI networking business, how big do you think the scale-up Ethernet opportunity will be in the coming year? Thank you.
President and CEO Hock E. Tan:
Well, that's a good question. We think about this ourselves, because, first of all, Ethernet—our Ethernet solutions are very much separate from any AI accelerators anyone else is doing. They are separate; we treat them as independent, even though, you're right, networking is compute. We have always believed that Ethernet is open source. Anyone should be able to have a choice, and we keep it separate from our XPUs. But the fact is, for customers using our XPUs, we have developed and optimized our network switches and the other components tied to networking signals in any cluster to integrate closely with them.
In fact, all of these XPUs have interfaces developed for handling Ethernet, very much so. So to some extent, for our customers using XPUs, we very openly make Ethernet the preferred network protocol. It may not be our Ethernet switches—it could be another company's, as long as it's Ethernet. It just so happens that we are the leader in this field, so we get the orders.
But beyond that, especially in closed systems involving GPUs, we see less of that, except at hyperscalers that can separate the GPU cluster architecture from the networking side, especially in scale-out. In that case, we sell a lot of these Ethernet switches for scale-out to those hyperscalers. We suspect that as things develop toward scale-across, there will be even more Ethernet deployed disaggregated from the GPUs. As for XPUs, it can be said that it is all Ethernet.
Analyst Joshua Buchalter:
Thank you.
Operator:
Please hold for the next question. The next question comes from Christopher Rolland of Susquehanna. Your line is open.
Analyst Christopher Rolland:
Hi, thank you for the question, and congratulations on the contract extension, Hock. So yes, my question is about competition, both in networking and in ASICs. You already answered some of this in the last question, but how do you view competition in the ASIC space, particularly from U.S. or Asian suppliers? Or do you think it is diminishing? On the networking side, do you think UALink or PCIe has a chance to displace SUE (Scale-Up Ethernet) when they are expected to ramp in 2027? Thank you.
President and CEO Hock E. Tan:
Thank you for calling it SUE (Scale-Up Ethernet). Thank you—I didn't expect to hear that, and I appreciate it. Well, you know I have a bias, to be honest, but it's so obvious that I can't help but have a bias, because Ethernet is tried and true. Ethernet is so familiar to the engineers and architects designing AI data centers and AI infrastructure at all these hyperscale companies. It makes sense for them to use it, and they are using it and focusing on it. As for developing separate, proprietary protocols—to be honest, I can't imagine why they would bother. Ethernet is there, it's widely used, and it has proven durable. The only issue people raise might be latency, especially in scale-up, which is where NVLink comes in.
Even then, as I pointed out, it's not hard for us—and we are not the only ones who can do it; in the Ethernet switch space, there are quite a few other companies that can. You just tune the switch to make the latency very good—better than NVLink, better than InfiniBand, easily below 250 nanoseconds. That's what we do. So it's not that hard. Maybe I say that because we've been doing it for the last 25 years, and Ethernet has been around. So it's there, the technology is there, and there's no need to create some new protocol that now has to win acceptance. Ethernet is the direction, and competition is fierce because it's an open-source system. So I think Ethernet is the direction, and certainly, when we develop accelerators (XPUs) for our customers, all these accelerators are made Ethernet-compatible with the customers' consent, rather than using some quirky interface that has to keep up with bandwidth increases.
And I assure you, we have competition—that's one of the reasons hyperscale companies like Ethernet. It's not just us: if for some reason they don't like us, they can go to someone else, and we are open to that. It's always good. It's an open-source system, and there are other participants in that market, that ecosystem.
Moving to XPU competition: yes, you hear about it, we hear about competition, and so on. This is simply a space where we always see competition, and the only way we secure our position is to out-invest and out-innovate everyone else in this game. We are fortunate to be the first company to create this silicon ASIC XPU model. We are also fortunate to be perhaps one of the largest semiconductor IP developers out there—things like serializers/deserializers (SerDes)—able to develop the best packaging and to design very low-power parts. So we just need to keep investing, and we have indeed done so, to stay ahead of the competition in this field. And I believe we are doing quite well in that regard at the moment.
Analyst Christopher Rolland:
Very clear. Thank you, Hock.
President and CEO Hock E. Tan: Of course.
Operator:
Thank you. We have time for one last question, and the last question comes from Harsh Kumar of Piper Sandler. Your line is open.
Analyst Harsh Kumar:
Hey everyone. Thanks for having me. Hock, congratulations on all the exciting AI metrics, and thank you for everything you do for Broadcom and for staying. Hock, my question is, you have three to four existing customers ramping up. As AI clusters in data centers grow larger, having differentiation, efficiency, etc., makes sense. Therefore, the case for XPU stands. Why shouldn't I think that your XPU share with these three or four existing customers will be greater than your GPU share in the long run?
President and CEO Hock E. Tan:
Yes. It will. That is a logical conclusion. Yes, Harsh, you are right. We are gradually seeing this. As I said, it is a journey, a multi-year journey, because it is multi-generational, and these accelerators (XPU) are not static either. We are doing multiple versions for each customer, at least two versions, two generations of products. With each new generation of products, they increase their consumption and usage of XPU as they gain confidence, and as the models improve, they deploy even more.
So this is a logical trend, among our few customers, XPU will continue to grow, and as they successfully deploy and their software stabilizes, the software specifications and libraries on top of these chips will stabilize and prove themselves. They will have the confidence to continue using an increasingly higher percentage of their own XPU within their own computing coverage, that is for sure, and we are seeing that as well. That is why I say we are gradually gaining share.
Analyst Harsh Kumar:
Thank you, Hock.
Operator:
Thank you. Now I would like to turn the call back to Ji Yoo, the head of investor relations, for any closing remarks.
Head of Investor Relations Ji Yoo:
Thank you, Shree. This quarter, Broadcom will participate in the Goldman Sachs Communacopia and Technology Conference in San Francisco on Tuesday, September 9, and will attend the JPMorgan U.S. All-Star Conference in London on Tuesday, September 16.
Broadcom currently plans to report its Q4 and full-year earnings for fiscal year 2025 after the market closes on Thursday, December 11, 2025. The public webcast of the Broadcom earnings conference call will take place at 2:00 PM Pacific Time.
This will conclude our earnings conference call for today. Thank you all for participating. Shree, you may disconnect the call now.
Operator:
Today's program has come to an end. Thank you all for participating. You may now disconnect.
---
Disclaimer:
This transcript may not be 100% accurate and may contain spelling errors and other inaccuracies. This transcript is provided "as is" without any express or implied warranties.