
The ultimate battleground for breakthroughs in computing power is not Silicon Valley, but Earth orbit

As the demand for AI computing power approaches physical limits on the ground, space computing has become a new battleground for breakthroughs, thanks to energy costs roughly 1/70 of terrestrial levels and the natural deep-cold environment available for cooling. The United States, represented by Google, SpaceX, and Starcloud, has taken the lead in engineering implementation through vertical integration; China, guided by national strategy, is advancing a dual-track approach of "dedicated computing constellations" and "intelligent remote sensing."
As the power demand of ground data centers approaches physical limits, tech giants have realized that the next trillion-dollar computing goldmine has shifted from crowded power grids to the silent orbits of space.
This concept, once confined to science fiction, has recently become a market focus thanks to a flurry of public statements and strategic moves from heavyweight figures such as SpaceX founder Elon Musk, Amazon founder Jeff Bezos, and NVIDIA CEO Jensen Huang.
According to an in-depth research report released on December 25 by the analyst team led by Zhou Tianle at Cathay Securities and Haitong Securities' Industry Research Center, space computing is not simply a matter of sending servers into space, but a paradigm shift from "sky sensing, ground computing" to "sky sensing, sky computing." Faced with the twin hard constraints of surging electricity demand and difficult cooling on the ground, tapping space's virtually unlimited solar energy and natural cooling environment has become the key to breaking the computing-power deadlock.
In the latest industry developments, this enthusiasm has turned into concrete action. Wallstreetcn reported that Google plans to build a distributed satellite cluster around its TPU system, while the startup Starcloud announced that it had successfully trained a large language model in orbit on a satellite equipped with NVIDIA GPUs.
The logic behind this trend is not only a technological vision but also a reshaping of capital-expenditure expectations: rather than contending with ever-rising electricity costs and regulatory hurdles on the ground, it is better to tap the resource advantages of space.

Physical Bottlenecks: Why Space?
The expansion of ground-based computing faces two hard physical constraints: energy and cooling.
According to the International Energy Agency (IEA), global data centers consumed roughly 415 terawatt-hours of electricity in 2024, a figure expected to double by 2030.

With the surge in demand for training large AI models, the build-out of ground power grids suffers from a "generational gap," and dispatchable green energy takes years to bring online, making it difficult to match the pace of AI demand. A Morgan Stanley report pointed out that the power gap for U.S. data centers could reach 20% in the coming years.
At the same time, the cooling costs imposed by high-density chips are exorbitant. The heat flux of next-generation chips such as NVIDIA's GB200 keeps climbing; traditional air cooling has reached its limits, and although liquid cooling helps, it brings challenges of water consumption and system complexity.

In contrast, the space environment offers a near-perfect solution. Solar irradiance in orbit reaches 1360 W/m² and is unaffected by day-night cycles or weather, providing continuous power around the clock. More importantly, the cosmic background temperature is about 3 K (roughly -270℃), offering a virtually infinite "heat sink" for passive radiative cooling, with zero water consumption and zero energy spent on heat rejection.
"The abundant solar energy unique to space can support continuous power generation for on-orbit data centers 24 hours a day, and the deep cold environment of -270℃ in space is an ideal environment for passive cooling, capable of simultaneously resolving the two major bottlenecks of energy and heat dissipation on the ground."
Beyond that, what truly captures the capital market's attention is the enormous cost gap between the ground and space.
According to the Lumen Orbit white paper, the ten-year energy cost for a 40 MW data center cluster is about $140 million on the ground but only about $2 million in space (the cost of the solar arrays). This fundamental change in cost structure gives space computing an overwhelming long-term economic advantage: the ground-to-space energy cost ratio works out to roughly 70 to 1.

On the ground, cooling systems often mean huge water resource consumption and electricity waste; whereas in space, "passive radiative cooling technology is a zero-energy, zero-carbon emission passive cooling method that directly dissipates heat into the deep space of the universe through full-spectrum infrared radiation."
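To make the physics concrete, here is a minimal back-of-the-envelope sketch in Python. It uses the article's figures of 1360 W/m² solar irradiance and a roughly 3 K deep-space background, together with assumed values for radiator temperature (300 K), emissivity (0.9), and solar-cell efficiency (30%) that are illustrative choices, not figures from the report.

```python
# Back-of-the-envelope orbital power and cooling estimate.
# From the article: ~1360 W/m^2 solar irradiance, ~3 K deep-space background.
# Assumed (not from the report): radiator at 300 K, emissivity 0.9, 30% cell efficiency.

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W/(m^2 K^4)
SOLAR_FLUX = 1360.0     # solar irradiance in Earth orbit, W/m^2 (from the article)
T_SPACE = 3.0           # deep-space background temperature, K (from the article)

T_RADIATOR = 300.0      # assumed radiator surface temperature, K
EMISSIVITY = 0.9        # assumed infrared emissivity of the radiator
CELL_EFFICIENCY = 0.30  # assumed solar-cell conversion efficiency

# Net heat rejected per square metre of radiator (Stefan-Boltzmann law).
q_radiated = EMISSIVITY * SIGMA * (T_RADIATOR**4 - T_SPACE**4)   # ~413 W/m^2

# Electrical power harvested per square metre of solar array.
p_solar = CELL_EFFICIENCY * SOLAR_FLUX                           # ~408 W/m^2

# Rough solar-array area needed for a 40 MW facility at these assumptions.
target_power_w = 40e6
print(f"Heat rejected:   {q_radiated:.0f} W per m^2 of radiator")
print(f"Power harvested: {p_solar:.0f} W per m^2 of solar array")
print(f"~{target_power_w / p_solar / 1e3:.0f} thousand m^2 of array for a 40 MW facility")
```

At these assumed figures, each square metre of array yields and each square metre of radiator sheds roughly the same ~400 W, which is why the report can treat sunlight and the 3 K background as a matched power-and-cooling pair.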
Differentiated Exploration Led by Giants
In the U.S. market, the development of space computing is distinctly giant-led. The report notes: "Early exploration and capability building in space computing, dominated by global leaders, is gradually giving way to large-scale commercial diffusion."

Starcloud is the first to explore "on-orbit computing as a service."
As a pioneer, Starcloud focuses squarely on providing on-orbit AI computing services. Its test satellite Starcloud-1, equipped with an NVIDIA H100 GPU, has completed on-orbit training of a lightweight large language model and verification of remote sensing image preprocessing. The company's goal is to build a 5 GW space data center, with a 40 MW facility completed by 2030.

Google is extending its cloud computing system into orbit.
Its "Suncatcher" project is not merely about launching satellites; it plans to use self-developed TPUs to build a distributed satellite cluster, with an emphasis on software scheduling and inter-satellite networking. The report argues that Google aims to "define the future standards of space computing," replicating its vast cloud computing and AI ecosystem in orbit.
SpaceX serves as the infrastructure backbone.
Relying on the Starlink constellation, SpaceX has built the world's only infrastructure capable of large-scale on-orbit computing power. Although its computing power is currently mainly used for inter-satellite link management and traffic scheduling, its high-power satellite platform (Starlink V3) and low-cost launch capability (Falcon 9 and Starship) lay the physical foundation for future large-scale computing power deployment.

Vertically Integrated Industrial System
The United States has established a vertically integrated industrial system in the field of space computing power, led by major enterprises, covering everything from underlying chips to top-level services.
At the chip level, the United States has taken the lead in achieving stable on-orbit operation of commercial off-the-shelf (COTS) AI chips. NVIDIA's Jetson series and HPE's Spaceborne Computer project have demonstrated that commercial GPUs, with software redundancy and protective design, can cope with the space radiation environment. This allows the mature CUDA ecosystem and AI models developed on the ground to be migrated directly into orbit, forming a hardware-and-software ecosystem barrier that is hard to replicate.
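The report does not detail what "software redundancy" means here. One common generic technique for tolerating radiation-induced bit flips is triple modular redundancy: run a computation several times and majority-vote the results. The sketch below is a hypothetical illustration of that idea only, not NVIDIA's or HPE's actual fault-tolerance scheme.

```python
# Illustrative triple modular redundancy (TMR) in software: run a computation
# three times and majority-vote the results, so a single radiation-induced
# upset in one run does not corrupt the final output.
# Generic sketch for illustration; not the actual NVIDIA/HPE protection scheme.
from collections import Counter
from typing import Callable, TypeVar

T = TypeVar("T")

def tmr(compute: Callable[[], T]) -> T:
    """Run `compute` three times and return the majority-voted result."""
    results = [compute() for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        # No two runs agree: treat as an unrecoverable upset and let the caller retry.
        raise RuntimeError("TMR voting failed: all three results differ")
    return value

if __name__ == "__main__":
    # Example: protect a stand-in computation step.
    checksum = tmr(lambda: sum(i * i for i in range(1000)))
    print("majority-voted result:", checksum)
```

In practice, on-orbit systems typically combine this kind of software voting with checkpointing and error-correcting memory; the point is only that protection can live in software, which is what lets off-the-shelf GPUs and the existing CUDA toolchain fly largely unmodified.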

At the infrastructure level, SpaceX has solved the challenges of "sending computing power to space" and "networking" by controlling high-power satellite platforms, reusable launch systems, and ultra-large-scale constellation networks. The high-frequency, low-cost launch capability makes the deployment of higher power and heavier computing payloads (such as server-level equipment) economically feasible.

In addition, the U.S. government provides continuous funding and market support for industrial development through risk-sharing mechanisms (such as NASA's procurement contracts) and diversified commercial demands (commercial remote sensing, cloud services).

China's Path: Systematic Development Led by National Strategy
Unlike the U.S. model led by commercial giants, China's development of space computing power is clearly guided by national strategy, following a dual-track approach of "dedicated computing constellations + intelligent remote sensing constellations."
The dedicated computing constellation aims to build a purely space-based computing power network. Its flagship, the "Three-Body Computing Constellation," completed the launch of its first 12 satellites in May 2025. Each satellite delivers up to 744 TOPS of computing power and connects to its peers through 100 Gbps laser links, running a space-based distributed operating system, with the goal of solving the twin challenges of onboard high-performance computing and high-speed inter-satellite connectivity.
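For a rough sense of scale, the sketch below combines the per-satellite figures quoted above. The aggregate compute is an upper bound that assumes every one of the 12 satellites reaches the quoted 744 TOPS peak, and the 1 TB data batch used for the link-transfer estimate is an illustrative assumption, not a figure from the report.

```python
# Rough scale estimates from the quoted per-satellite figures.
# Assumptions (not from the report): all 12 satellites hit the 744 TOPS peak,
# and a 1 TB remote-sensing data batch is used for the link-time estimate.

NUM_SATELLITES = 12
PEAK_TOPS_PER_SAT = 744      # "up to 744 TOPS" per satellite (from the article)
LASER_LINK_GBPS = 100        # inter-satellite laser link rate (from the article)

# Upper-bound aggregate compute if every satellite matched the quoted peak.
aggregate_tops = NUM_SATELLITES * PEAK_TOPS_PER_SAT          # 8928 TOPS ~= 8.9 POPS

# Time to move a hypothetical 1 TB batch over one 100 Gbps laser link.
batch_bits = 1e12 * 8                                        # 1 TB in bits
transfer_seconds = batch_bits / (LASER_LINK_GBPS * 1e9)      # ~80 s

print(f"Aggregate peak compute (upper bound): {aggregate_tops} TOPS "
      f"(~{aggregate_tops / 1000:.1f} POPS)")
print(f"1 TB over a single 100 Gbps laser link: ~{transfer_seconds:.0f} s")
```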


Intelligent remote sensing constellations are the mainstream path to large-scale application. With the "Eastern Eye" constellation as a demonstration, intelligent processing units mounted on remote sensing satellites enable "on-orbit perception and real-time assessment." In disaster monitoring, for example, satellites can process data on board and downlink results directly, compressing response time from hours to minutes.


At the policy level, from the "14th Five-Year Plan" to the "Action Plan for Promoting High-Quality and Safe Development of Commercial Space (2025-2027)," China is promoting the evolution of space computing power from technological verification to systematic deployment through top-level design and local industrial collaboration (such as the planning of the Beijing Space Data Center).

