
Track Hyper | NVIDIA's stock price has dropped 27% this year

The underlying logic of AI demand has changed
Author: Zhou Yuan / Wall Street Insight
The global AI chip leader NVIDIA (NVDA) has been facing a severe test since the beginning of 2025.
As of the March 11 close, NVIDIA's stock price stood at $108.76, down 27.22% from its January 7 record high of $153.13, with more than $1.2 trillion in market value wiped out.
The significant drop in NVIDIA's stock price is not due to any issues with its performance.
On the contrary, on February 26 the AI giant reported impressive results for the fourth quarter of fiscal year 2025 (November 1, 2024 - January 26, 2025): revenue soared 78% year-on-year to a record $39.3 billion, and net profit reached $22.1 billion, up more than 80% year-on-year; both figures beat expectations, thanks to the strong performance of the company's data center division.
In the fourth quarter of fiscal year 2025, NVIDIA's data center revenue accounted for 90.5% of total revenue, with a year-on-year growth of 93.3%, reaching $35.58 billion.
Is this performance sustainable?
According to supply-chain information obtained by Wall Street Insight, demand for NVIDIA's AI accelerator cards in the Chinese market is very strong; CEO Jensen Huang has likewise said that market demand for NVIDIA chips remains frenzied.
However, geopolitical risks are rapidly accumulating, and the market focus has shifted from "how much can it grow" to "how long can it grow."
Bloomberg data shows that as of March 10, NVIDIA's forward P/E ratio has fallen from 45 times in January to 28 times, below the five-year average of 37.6 times.
Since its release on January 20, DeepSeek's R1 model has taken the world by storm.
R1 is notable for cutting computing power requirements while emphasizing reasoning capability. That combination has gradually convinced the market that the global demand structure for AI computing power has changed.
DeepSeek's R1 model adopts a "chain of thought" architecture: the computing power consumed by a single inference request rises 3-5 times compared with traditional models, but algorithmic optimization cuts hardware costs by 70%.
Seven days after DeepSeek's public release, NVIDIA's stock price plummeted 17%.
Morgan Stanley research shows that inference's share of computing power demand in U.S. data centers surged from 40% in 2023 to 78% in Q1 2025.
As individuals and businesses increasingly demand applications that go beyond today's popular chatbots (such as ChatGPT or xAI's Grok), inference is expected to become a major component of demand for the technology.
Barclays analysts estimate that within the next two years, capital expenditure on inference for "frontier AI" (the largest, most advanced systems) will exceed spending on training, jumping from $122.6 billion in 2025 to $208.2 billion in 2026.
More importantly, DeepSeek "is not fighting alone": other AI chip companies, such as the U.S. firm Cerebras Systems, are also shaking the logic behind NVIDIA's premium valuation.
This company, founded only nine years ago, has made breakthroughs in the inference chip market with its wafer-scale chip technology: its latest WSE-3 chip is reportedly 70 times faster than NVIDIA's GPU solution on the Llama 3.3 model, at roughly one-tenth the cost. Cerebras breaks the traditional chip-manufacturing paradigm by treating an entire wafer as a single chip. This "All-in-One Wafer" design eliminates inter-chip communication losses, delivering order-of-magnitude gains in memory bandwidth and computing density.
This architectural change brings substantial commercial value: in the deployment of the G42 AI supercomputer, the Cerebras system reduced the training time of Llama-3 from 3 months to 9 days, with energy consumption reduced by 83%.
The company launched the Cerebras Inference service in September 2024, running the Llama 3.1 70B model at 450 tokens per second for just $0.60 per million tokens, which it claims is 20 times faster and 100 times cheaper than GPU solutions. Its ecosystem now extends to Chinese companies as well, with API-compatible integrations built alongside SiliconFlow, LangChain, and others.
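Taking Cerebras' published figures at face value, the implied cost of continuous generation is easy to sanity-check. A minimal back-of-the-envelope sketch, using only the throughput and price quoted above (the claimed numbers, not independently verified):

```python
# Back-of-the-envelope check of the inference cost figures quoted above.
# Constants are the claimed numbers, not independently verified.

TOKENS_PER_SECOND = 450      # claimed Llama 3.1 70B throughput
PRICE_PER_MILLION = 0.60     # claimed price, USD per 1M tokens

tokens_per_hour = TOKENS_PER_SECOND * 3600
cost_per_hour = tokens_per_hour / 1_000_000 * PRICE_PER_MILLION

print(f"tokens generated per hour: {tokens_per_hour:,}")
print(f"cost per hour of continuous generation: ${cost_per_hour:.2f}")
```

At the quoted rates this works out to roughly 1.6 million tokens and about $0.97 per hour of continuous generation; if GPU solutions really are 100 times more expensive per token, as claimed, the same hour of output would cost on the order of $100.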
Cerebras' latest move is to add six AI data centers in North America and Europe, boosting its inference capacity 20-fold to more than 40 million tokens per second.
Beyond giving the global AI industry a low-cost, high-performance direction for inference, DeepSeek is also advancing the trend of software-defined hardware: together with SiliconFlow it has launched "dynamic operator compilation" technology, enabling mid-range GPUs to run high-end models and directly eroding the H100's premium.
Against this backdrop, data from the Blackberry Research Institute show that the global AI server delivery cycle shortened from 42 weeks in December 2024 to 28 weeks in February 2025, which the market interprets as the first sign of AI server capacity oversupply.
Microsoft, long a firm supporter of NVIDIA's AI story, also moved "suddenly" in February, canceling leases on some data centers in the United States. Microsoft CEO Satya Nadella stated publicly, "The current ROI of AI applications does not support the existing investment intensity."
Oracle has also joined the fray, shifting 15% of the inference capacity in its $130 billion AI order backlog to Cerebras.
Meta, for its part, appears to favor self-sufficiency: although its capital expenditure budget has been raised to $42 billion - $45 billion, a year-on-year increase of more than 40%, the share of self-developed chips has climbed to 30%.
These factors are compounding the pressure on NVIDIA's stock price.
Goldman Sachs' model suggests that if NVIDIA's share of the inference chip market falls to 50%, its forward P/E could compress to 22 times.
Cathie Wood's ARK fund has reduced its holdings in NVIDIA for three consecutive weeks, selling a total of $420 million; net short positions from hedge funds surged from $3.7 billion at the end of January to $8.9 billion in early March.
Some remain bullish, however: Tianfeng International estimates that if TSMC's CoWoS capacity expands as scheduled, NVIDIA's data center revenue could still reach $173.4 billion in 2025, a year-on-year increase of more than 53%.
Jensen Huang stated, "The demand for Blackwell is astonishing, because inference AI adds another scaling law: increasing the computation used for training makes models smarter, and increasing the computation used for long thinking makes answers smarter." Oracle's newly disclosed $130 billion order backlog, along with Elon Musk's xAI launching a million-GPU super data center project in Memphis, continues to inject confidence into the industry.
Needham analyst Charles Shi pointed out, "Short-term pain cannot hide the long-term trend; the global AI infrastructure investment cycle will last at least until 2028."
The ongoing stock price adjustment of NVIDIA is unique in that it is the first valuation reconstruction driven by an algorithm revolution rather than merely hardware iterations.
DeepSeek's "software-defined computing power" model is rewriting the old paradigm of "Moore's Law-driven growth."
In the short term, NVIDIA still holds three trump cards: ecological barriers, generational advantages, and abundant cash flow.
Among them, the CUDA developer community has more than 5 million members, 23 times the size of second-place ROCm; NVIDIA's 3nm process lead is at least 18 months, backed by more than 6,000 patents in photonic computing; and quarterly free cash flow of $18.4 billion supports roughly $20 billion in annual R&D investment.
However, whether NVIDIA's advantages can withstand this paradigm revolution depends on whether Jensen Huang can transform the company from a "hardware arms dealer" into a "computing power service provider."
In the long run, this industry earthquake triggered by the evaporation of trillions in market value may reshape the global AI power landscape.
Morgan Stanley noted in its latest research report that "the AI chip war has entered its second half, and victory no longer belongs to the ruler of a single architecture but to the integrator of ecosystems." That stance still appears to favor NVIDIA.