Under the AI boom, the cloud computing power leasing sector is booming! AI cloud giant CoreWeave's valuation has skyrocketed to $35 billion and is about to land on the US stock market

Zhitong
2025.02.27 07:40

AI cloud computing company CoreWeave plans to go public on the US stock market at a valuation of over $35 billion, expecting to raise approximately $4 billion. The company is backed by tech giants such as NVIDIA, and its valuation continues to rise as demand for AI computing power surges. Although details of the IPO plan may still be adjusted, market enthusiasm for newly listed AI-related companies is high.

According to media reports citing informed sources, CoreWeave, a cloud computing service provider focused on AI computing power leasing, is considering submitting its initial public offering (IPO) application to U.S. securities regulators within a week. This hot AI startup, often referred to as "NVIDIA's favorite," plans to raise approximately $4 billion through its U.S. stock market listing. Informed sources indicate that CoreWeave's latest valuation has significantly expanded to over $35 billion.

However, sources emphasize that some details of this highly anticipated AI cloud computing leasing giant's U.S. IPO plan may still be adjusted, and there is a possibility of a delay in the application to the U.S. Securities and Exchange Commission (SEC). A CoreWeave spokesperson did not immediately respond to media requests for comments regarding the IPO.

Media reports last October indicated that this cloud computing company focused on AI computing power leasing had previously received substantial investments from U.S. tech giants such as Cisco and NVIDIA, with a total valuation of up to $23 billion at the time of investment.

Subsequently, as global enterprises and even some core government departments have seen a surge in demand for AI computing infrastructure, particularly for cloud-based AI computing resources, CoreWeave's valuation has continued to expand. Analysts suggest that following the wave of AI large models integrating into various industries, the demand for cloud-based AI inference computing power is expected to grow exponentially, indicating that there may still be significant room for valuation increases for CoreWeave.

It is understood that in the aforementioned investment round with a valuation of $23 billion, the lineup of institutional investors is exceptionally strong, including Cisco, NVIDIA, Magnetar Capital, Coatue Management, Jane Street, Fidelity Investments, and Lykos Global Management.

What exactly is CoreWeave, known as "NVIDIA's favorite"?

As one of the earliest adopters of NVIDIA graphics processing units (GPUs) in the data center field, CoreWeave rode the wave of demand for AI computing resources and won backing from NVIDIA's venture capital arm. It even enjoys priority access to the highly sought-after NVIDIA H100/H200 and Blackwell series AI GPUs, earning it the title of "NVIDIA's favorite."

Amid a global rush to purchase NVIDIA AI GPUs, with demand far outpacing supply and significant premiums appearing in the second-hand market, NVIDIA's supply of AI GPUs to CoreWeave has been described as "ample." With NVIDIA's strong support, CoreWeave continues to expand large data centers built on NVIDIA's Hopper and newly launched Blackwell architecture AI GPUs, providing cloud-based AI training/inference computing services.

CoreWeave is a U.S.-headquartered cloud service provider focused on high-performance computing (HPC) and AI workloads. Founded in 2017 by three co-founders, the company initially ventured into cryptocurrency mining, then, on the strength of market trends and its own technological accumulation, transformed into a cloud computing platform centered on AI GPU computing resources.

CoreWeave provides large-scale infrastructure for data-intensive artificial intelligence workloads, focusing on powerful cloud-based computing resources for AI training and inference. The company offers a range of AI computing leasing services, including cloud-based AI computing solutions and AI object storage, both aimed at supporting the full workflow of machine learning and deep learning models.

In August 2024, CoreWeave became the first cloud computing service company to deploy NVIDIA's high-performance H200 Tensor Core GPUs, enabling it to provide exceptional computing power to its customers. Driven by the AI wave, especially in 2023, CoreWeave's prominence in the cloud AI GPU computing market rose rapidly thanks to large-scale procurement of high-end NVIDIA AI GPUs (such as the H100/H200) and comprehensive cooperation with NVIDIA across the CUDA software and hardware ecosystem.

The most notable feature of CoreWeave's AI cloud computing leasing service is its focus on offering large clusters of the highest-end AI GPUs (especially NVIDIA's), allowing users to access high-performance AI GPU computing resources on demand in the cloud for machine learning, deep learning, and inference workloads.

CoreWeave supports large-scale elastic deployment, allowing users to quickly increase or decrease the number of AI GPUs based on project needs, suitable for AI model training (such as large language models, computer vision systems, etc.) and large inference workloads that require real-time processing. In addition to AI, CoreWeave's NVIDIA AI GPU resources can also be used for traditional HPC scenarios (scientific computing, molecular simulation, financial risk analysis, etc.).

According to media reports disclosed last November, CoreWeave had appointed three major Wall Street financial giants—Morgan Stanley, Goldman Sachs, and JP Morgan—as the lead underwriters for its U.S. IPO at that time. The media also reported that the company announced in October last year that it had secured a $650 million credit facility led by JP Morgan, Goldman Sachs, and Morgan Stanley.

Investors are currently closely monitoring CoreWeave's U.S. stock listing process, and the market has high hopes for potential large-scale U.S. IPO candidates such as Klarna Group and Genesys Cloud Services, expecting their stock prices to soar after going public. It is worth noting that most of the best-performing newly listed companies in the U.S. market over the past two years have benefited from the global AI boom, which is why the market is eager for CoreWeave to quickly land on the U.S. stock market.

DeepSeek Sparks an "Efficiency Revolution," but Demand for AI Computing Resources Remains Strong

As Jensen Huang, CEO of NVIDIA, noted on the company's latest earnings conference call, demand for AI chips remains robust: "DeepSeek-R1 has ignited global enthusiasm, and the company is excited about the potential demand brought by AI inference. This is an outstanding innovation, but more importantly, it has open-sourced a world-class reasoning AI model. Models like OpenAI's, Grok 3, and DeepSeek R1 are all reasoning models that apply inference-time scaling. Reasoning models can consume over 100 times the computing power."

Microsoft CEO Satya Nadella previously mentioned the "Jevons Paradox"—when technological innovations significantly improve efficiency, resource consumption not only does not decrease but instead surges. This applies to the field of AI computing power, where the explosive growth of AI large model applications will lead to unprecedented demand for AI inference computing power.

Wall Street financial giant Morgan Stanley, in its latest research report, reiterated strongly bullish expectations for its core stock picks across the two main AI chip technology routes, AI GPUs and AI ASICs. It emphasized that the significant expansion of AI capital expenditures by large tech companies such as Amazon, Google, and Microsoft is likewise premised on an anticipated surge in application-side AI computing demand, especially for cloud-based AI inference computing power.

Although DeepSeek has sparked an "efficiency revolution" in AI training and inference, steering future AI large model development toward "low cost" and "high performance" rather than burning money on brute-force training, the massive demand for AI inference computing power across industries worldwide means that future demand for AI chips will remain vast. This indicates that cloud-based AI computing demand will continue to grow explosively, and CoreWeave's "valuation expansion path" may be far from over.

DeepSeek's newly launched DeepSeek R1 continues to gain popularity worldwide, and the company's latest research shows that its NSA (Native Sparse Attention) mechanism achieves dramatic improvements in AI large model training and inference efficiency at the Transformer level, prompting global AI model developers to follow this "ultra-low-cost AI large model computing paradigm." This, in turn, will accelerate the adoption of AI application software (especially generative AI software and AI agents) across industries, fundamentally improving efficiency in various business scenarios and driving sales significantly higher. As a result, global demand for AI computing resources, centered on the AI inference side, may grow exponentially in the future, rather than suffering the cliff-like decline in AI computing demand that some anticipated from the "DeepSeek shockwave."