
Track Hyper | Stargate Expansion: The Roles of OpenAI and Oracle

What are the considerations beyond business?
Author: Zhou Yuan / Wall Street Insight
In late July, OpenAI and Oracle announced a partnership to develop an additional 4.5GW of "Stargate" data center capacity, bringing the total capacity to over 5GW when combined with the Texas project, capable of operating approximately 2 million chips.
This event may seem like a simple business collaboration between two companies, but it actually reflects a new change in the underlying operational logic of the artificial intelligence industry.
More importantly, this is not just a straightforward business partnership.
Paradigm Shift in Computing Power Supply Model
American technology thinker and futurist Kevin Kelly wrote in "The Inevitable": "Future technologies will become ubiquitous infrastructures, like water and electricity."
This collaboration is a real-world confirmation of that judgment.
Traditionally, AI companies have sourced computing power from self-built data centers or a single cloud vendor. Constrained by capital scale and regional resources, that model struggles to keep pace with explosive growth in computing demand.
The partnership between OpenAI and Oracle represents a deep binding model of "technology company + infrastructure service provider."
With its layout of 44 cloud regions in over 20 countries, Oracle can provide cross-regional power allocation, network redundancy, and disaster recovery capabilities for the "Stargate."
This collaboration breaks down the regional barriers of computing power supply, allowing AI companies to transform computing power procurement from fixed asset investment into flexible services, similar to the transformation in the industrial era from self-built generators to accessing the public power grid.
According to data from Oracle's official website, its data center PUE (Power Usage Effectiveness) has consistently stayed below 1.2, well under the industry average of about 1.5.
Through the collaboration, that engineering advantage translates directly into cost advantages for OpenAI, such as a lower operating cost per chip.
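PUE is total facility power divided by IT equipment power, so its cost impact is simple arithmetic. The sketch below uses an assumed chip power draw and electricity price (neither figure comes from the article) to show how running at PUE 1.2 rather than the 1.5 industry average changes the annual energy bill per chip:

```python
# Illustrative only: how a lower PUE translates into per-chip energy cost.
# Chip power and electricity price are assumptions for the sketch.

def annual_energy_cost_per_chip(chip_power_kw, pue, price_per_kwh, hours=8760):
    """Facility energy cost attributable to one chip for a year.

    PUE = total facility power / IT equipment power, so each kW of chip
    load draws `pue` kW at the facility meter.
    """
    return chip_power_kw * pue * hours * price_per_kwh

chip_kw = 0.7   # assumed ~700 W accelerator
price = 0.08    # assumed industrial rate, $/kWh

cost_low = annual_energy_cost_per_chip(chip_kw, 1.2, price)
cost_avg = annual_energy_cost_per_chip(chip_kw, 1.5, price)
print(f"PUE 1.2: ${cost_low:,.0f}/chip/yr; PUE 1.5: ${cost_avg:,.0f}/chip/yr")
print(f"saving: ${cost_avg - cost_low:,.0f} per chip per year")
```

Under these assumed numbers the gap is roughly $147 per chip per year; across a fleet of 2 million chips, differences of this kind compound into hundreds of millions of dollars annually.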
As leading companies begin to adopt this light asset computing model, small and medium-sized AI companies will face the pressure of "cost disadvantages if not cooperating," thereby driving the reconstruction of the computing power supply model across the industry.
The scale of "operating over 2 million chips" means that this collaboration will enhance both parties' influence in the chip supply chain.
In the past, the AI chip market was dominated by NVIDIA, and OpenAI relied primarily on NVIDIA's A100/H100 series. Oracle, however, has long pursued a "multi-vendor strategy" in data center construction, with cloud infrastructure compatible with chip solutions from AMD, Intel, and others.
In this collaboration, Oracle's data center in Ashburn, Virginia, has begun deploying AMD MI300X chips, which offer better cost-performance in FP16 precision computing compared to equivalent NVIDIA products.
FP16 (half-precision floating point) is a binary floating-point format that occupies 16 bits of storage, half the 32 bits of single-precision FP32.
On GPUs that support it (such as NVIDIA's A100 and H100), FP16 throughput is typically two to four times that of FP32, making it particularly well suited to matrix multiplication, the core operation behind deep learning's convolution and fully connected layers. This significantly shortens model training and inference times, allows more data to be processed in the same amount of time, and fits large-scale parallel tasks such as AI inference services and image or video processing.
The core value of FP16 is that it halves storage usage, raises computational throughput, and reduces energy consumption while preserving the accuracy AI tasks require, substantially improving the efficiency and cost of large-scale computing. This is a key reason for its wide adoption in modern AI chips (NVIDIA GPUs, AMD's MI series) and data centers.
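The storage and range trade-offs of FP16 can be seen with nothing more than Python's standard struct module, whose "e" format is IEEE 754 half precision and "f" is single precision:

```python
# A minimal stdlib sketch of the FP16 vs FP32 trade-off described above.
import struct

value = 3.140625  # exactly representable in FP16

fp32_bytes = struct.pack('<f', value)
fp16_bytes = struct.pack('<e', value)
print(len(fp32_bytes), len(fp16_bytes))  # 4 2 -> FP16 halves storage

# Round-tripping shows the lower precision (only 10 mantissa bits):
(roundtrip,) = struct.unpack('<e', struct.pack('<e', 3.14159))
print(roundtrip)  # 3.140625, not 3.14159

# And the narrower range: FP16 tops out near 65504.
try:
    struct.pack('<e', 70000.0)
except (OverflowError, struct.error):
    print("70000.0 overflows FP16")
```

The overflow behavior in the last step is why large-scale training typically uses mixed precision, keeping an FP32 master copy of weights while doing the bulk of the arithmetic in FP16.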
The diversified procurement strategy gives Oracle greater bargaining power in chip selection. More importantly, the customized chip program that Oracle has developed with several chip design companies for OCI (Oracle Cloud Infrastructure) can open its interfaces to OpenAI through the collaboration, providing a technical foundation for breaking a single vendor's monopoly.
When procurement volumes reach millions, downstream companies can demand chip manufacturers to open more underlying control permissions and even participate in customized design.
This is also the mainstream strategy and common practice in the current chip industry. For example, Chinese smartphone companies can implement joint customization and targeted optimization of SoC chips with upstream chip design companies.
Coupling Reconstruction of Energy and Computing Power
The 4.5GW IDC capacity demands electricity equivalent to the annual electricity consumption of 3.15 million households (data source: derived from the 2024 U.S. Department of Energy's "2024 U.S. Data Center Energy Usage Report" and the 2020 U.S. Energy Information Administration's Residential Energy Consumption Survey (RECS) data).
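The household figure can be sanity-checked with back-of-envelope arithmetic. The sketch assumes the capacity runs continuously at nameplate; the per-household consumption used is the value implied by the article's numbers (EIA surveys put the U.S. average in the 10,000-13,000 kWh/yr range):

```python
# Back-of-envelope check of the "3.15 million households" figure.
# Assumption: the 4.5GW load runs continuously, year-round.

capacity_gw = 4.5
hours_per_year = 8760
annual_twh = capacity_gw * hours_per_year / 1000   # GWh -> TWh
print(f"{annual_twh:.1f} TWh/yr")                  # ~39.4 TWh

kwh_per_household = 12_500   # implied per-household annual consumption
households = annual_twh * 1e9 / kwh_per_household
print(f"~{households / 1e6:.2f} million households")
```

At roughly 39.4 TWh per year, the implied figure of about 12,500 kWh per household does reproduce the 3.15 million number, so the claim is internally consistent.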
According to "Energy and Computing Synergy - Energy Challenges and Responses for Data Centers," an insight report Schneider Electric released at WAIC 2025 on July 26, the traditional energy usage model, a rigid power supply with little dynamic coordination between computing and electricity, can no longer meet modern data centers' demands for efficient, green, and reliable operation.
The Schneider report states that only by integrating the entire chain of power supply, distribution, computing, and cooling, and achieving flexible allocation of all elements, can deep synergy between the power system and computing power system be promoted, achieving a win-win situation in energy utilization, economic benefits, and carbon emissions.
To this end, Schneider Electric's Commercial Value Research Institute proposed a three-layer architecture for "Energy and Computing Synergy" in the report, connecting power supply, computing load, and synergy mechanisms from the bottom up to promote the deep integration of computing resources and power resources.
The "bottom layer - power supply infrastructure" mainly addresses the power quality management of sudden increases and decreases in intelligent computing loads and the access, application, and management of various energy sources (such as wind and solar), providing a stable power foundation for data centers.
The "middle layer - computing load" explores the flexibility adjustment space of IT loads and determines non-IT loads based on changes in IT loads to match electricity usage signals.
The "top layer - energy and computing synergy mechanism" establishes a decision-making framework for bidirectional adjustment of energy and computing, constructing a power-computing joint optimization model through the integration of data, algorithms, and incentive mechanisms to achieve efficient collaborative optimization of energy and computing power.
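The bidirectional adjustment idea in the top layer can be illustrated with a toy scheduler. This is not Schneider's actual model; it is a minimal greedy sketch with invented hourly numbers, showing the basic move of shifting deferrable compute (e.g. batch training) toward hours with the most renewable headroom:

```python
# Toy illustration of "energy and computing synergy": place deferrable
# compute in the hours with the largest renewable surplus.
# All figures below are invented for the sketch.

renewable_mw = [300, 500, 650, 400, 200, 100]  # hypothetical hourly forecast
baseline_it_mw = 250    # inflexible load (inference services, etc.)
deferrable_mwh = 600    # batch work we may place in any hour
max_extra_mw = 300      # per-hour cap on added flexible load

# Greedy: fill the greenest hours first.
schedule = [0.0] * len(renewable_mw)
remaining = deferrable_mwh
for hour in sorted(range(len(renewable_mw)),
                   key=lambda h: renewable_mw[h], reverse=True):
    headroom = max(0.0, renewable_mw[hour] - baseline_it_mw)
    placed = min(remaining, headroom, max_extra_mw)
    schedule[hour] = placed
    remaining -= placed
    if remaining == 0:
        break

print(schedule)   # flexible MW placed per hour
print(remaining)  # MWh that renewables could not cover
```

A production version would add price signals, cooling constraints, and job deadlines, which is exactly the data-algorithm-incentive integration the report's joint optimization model describes.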
This compels the energy supply model and the computing power layout to become deeply bound. According to the U.S. Energy Information Administration (EIA), Texas accounted for 28% of the nation's installed wind power capacity in 2023, and Oracle's data center in Odessa, Texas already runs entirely on renewable energy.
The site selection strategy for the "Stargate" clearly leans towards energy-rich areas: the Texas project is close to wind power bases, and new capacity may be located in Pennsylvania (rich in shale gas resources) or Washington State (where hydropower accounts for 80%).
It is evident that computing power is beginning to follow energy, breaking the traditional logic of data centers being located near users, forming a new value chain of "energy hub - computing power hub - user terminal."
Notably, the virtual power plant system that Oracle is testing can interconnect the backup power of data centers with the regional grid, allowing for reverse power output during peak electricity usage to generate revenue.
This "computing power-energy" bidirectional flow model transforms data centers from mere energy consumers into grid regulators. Once this model is scaled, artificial intelligence infrastructure will be deeply embedded in the energy internet, becoming an important participant in the new electricity market.
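The economics of that bidirectional flow reduce to a simple decision rule. The sketch below is purely illustrative, with invented prices and capacities, showing a data center exporting spare backup capacity to the grid only when the spot price clears a threshold:

```python
# Toy sketch of the "virtual power plant" idea: sell spare backup
# capacity to the grid during price peaks. All numbers are invented.

def export_decision(spot_price, price_floor, backup_mw, reserve_mw):
    """Return MW to export: spare backup above a safety reserve,
    but only when the price makes the export worthwhile."""
    if spot_price < price_floor:
        return 0.0
    return max(0.0, backup_mw - reserve_mw)

prices = [45, 60, 180, 420, 75]   # hypothetical $/MWh over five hours
exports = [export_decision(p, price_floor=150, backup_mw=50.0, reserve_mw=20.0)
           for p in prices]
revenue = sum(p * mw for p, mw in zip(prices, exports))

print(exports)             # [0.0, 0.0, 30.0, 30.0, 0.0]
print(f"${revenue:,.0f}")  # 30*180 + 30*420 = $18,000
```

Even in this toy version, the data center only earns during scarcity hours, which is precisely when the grid most needs the capacity; that alignment of incentives is what makes the regulator role credible.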
Xiong Yi, Senior Vice President of Schneider Electric, Head of Strategic and Business Development for China, and Director of the Commercial Value Research Institute, believes that "in the dual context of the rapid development of the AI industry and the construction of new power systems, reshaping the energy paradigm through the synergy of computing and electricity is essential to provide a solid and reliable foundation for the AI wave."
The Hidden Picture Beyond Commercial Considerations
More importantly, the collaboration between OpenAI and Oracle is not merely of a commercial nature.
This relates to Oracle's position in the U.S. AI strategy.
In January and July of this year, the U.S. government launched the "Stargate" project and the "Artificial Intelligence (AI) Action Plan," both aiming to "ensure U.S. leadership in advanced computing infrastructure."
The U.S. AI strategy is a complete system, and its strategic significance far exceeds the commercial realm. The content of the collaboration between OpenAI and Oracle—developing an additional 4.5GW of "Stargate" data center capacity, along with the previously under-construction first "Stargate" base in Abilene, Texas (planned capacity of 1.2GW)—is not simply a commercial project.
For example, the UAE has also announced plans to build the "Stargate UAE" project: the total plan includes five data centers with a combined capacity of 5GW, far exceeding the capacity of the "Stargate 1" data center in Texas.
The planning and construction of this project adopt U.S. technology systems, standards, and governance rules, which is part of the U.S. strategy to shape the technological development trajectory and digital ecosystem of Middle Eastern countries—relying on U.S. technology systems.
Oracle's data center in Utah shares physical security architecture with the U.S. National Security Agency (NSA) cloud computing project, possessing significant "dual-use" infrastructure characteristics. Therefore, "Stargate" may become the hardware carrier for the U.S. government's "Trustworthy AI" program.
According to U.S. media reports, the Department of Defense is evaluating moving some AI training workloads to data centers that meet "security certification," and Oracle is one of the few service providers to have passed FedRAMP High certification. FedRAMP (the Federal Risk and Authorization Management Program) is a standardized framework launched by the U.S. federal government to assess, authorize, and monitor the security of cloud service providers (CSPs), ensuring they comply with federal data security requirements.
Within that framework, FedRAMP High is the highest level of certification, aimed at cloud services that handle "High-Impact" data.
This intertwining of commercial cooperation and national strategy is blurring the public-private boundaries of technological infrastructure.
As computing power becomes a key strategic resource, the expansion of the "Stargate" is not only a corporate action but also reflects the geopolitical dimension of global AI computing power competition: the U.S. is rapidly integrating computing resources through corporate cooperation, essentially building a "digital homeland defense system" for the era of artificial intelligence.
The deeper significance of this cooperation lies in its revelation that the artificial intelligence industry is transitioning from an "algorithm-driven" to an "infrastructure-driven" new phase.
When computing power, energy, chips, and geopolitics intersect in the physical space of data centers, the underlying logic of the industry is no longer merely about technological breakthroughs, but rather the systemic collaborative capabilities of these elements.
Every move made by OpenAI and Oracle is writing new rules for the "new infrastructure" of the artificial intelligence industry, and those rules will shape the basic pattern of global AI competition over the next decade.