Doing the math on the Stargate Project; Yann LeCun predicts new AI paradigm in five years; revenue is booming for AI startups; game developers push back against AI projects
Immigrant AI workers worry about the new US administration; Huawei aims to replace NVIDIA in China; the second wave of AI coding is here
During a press conference held on Tuesday at the White House, Donald Trump, together with Larry Ellison from Oracle, Masayoshi Son from SoftBank and Sam Altman from OpenAI, announced the Stargate Project, a new company set to invest $500 billion over the next four years to build AI infrastructure in the United States.
A press release posted shortly after on OpenAI’s website included a few more details:
The equity funders are SoftBank, OpenAI, Oracle, and MGX (an Emirati company specializing in AI-related investments). SoftBank and OpenAI are the lead partners, with SoftBank overseeing financial responsibilities and OpenAI handling operational aspects. Masayoshi Son, CEO of SoftBank, will serve as chairman of Stargate.
Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the technology partners. The collaboration builds upon existing partnerships, including OpenAI's relationship with NVIDIA since 2016, and a newer partnership with Oracle. (OpenAI will also continue to utilize Microsoft's Azure platform for training models and delivering products.)
The first $100 billion will be deployed immediately, with construction of data centers starting in Texas and evaluations underway for additional sites across the country. The project covers the entire “data center infrastructure landscape, from power and land to construction to equipment, and everything in between”, including the construction of new off-grid energy facilities such as modular nuclear reactors.
So what exactly does $100 billion get you in terms of real-world infrastructure? The exact number of data centers will vary based on design objectives, location, and scale. However, a useful rule of thumb is that a modern hyperscale data center costs anywhere from $1 billion to $2 billion per site when factoring in land, building construction, power and cooling infrastructure, servers, and networking gear.
In reality, the investment will probably be split among several large “flagship” hyperscale data centers and a network of smaller edge or regional facilities costing in the hundreds of millions. We should therefore expect a range of 50–100 data centers, some very large, and some mid-sized or specialized. That aligns well with Larry Ellison’s remarks about the first project underway in Abilene, Texas, where 10 buildings are already under construction (each half a million square feet), with 10 more planned.
What kind of hardware should we expect to see inside a flagship Stargate data center? Most likely racks of the new Blackwell GPUs from NVIDIA, while the smaller facilities will probably get the older-generation Hopper-class GPUs. The most popular Blackwell-based system right now is the GB200 NVL72, an exascale, liquid-cooled computer designed for data centers and powered by Arm-based Grace CPUs (hence the inclusion of Arm as a technology partner). A GB200-based OEM system has been rumored to cost about $3 million; therefore, assuming 40% of a $2 billion data center budget goes to AI hardware, that $800 million buys roughly 270 NVL72 systems, or about 20,000 GPUs per data center.
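The per-site arithmetic can be sketched as a quick back-of-envelope calculation; the $2 billion site cost, 40% hardware share, and $3 million rack price are the assumptions from above:

```python
# Back-of-envelope: GPUs per flagship Stargate data center.
# Assumptions from the text: $2B per site, 40% of capex on AI hardware,
# ~$3M per GB200 NVL72 system, 72 GPUs per rack.
SITE_CAPEX = 2_000_000_000
HARDWARE_SHARE = 0.40
RACK_PRICE = 3_000_000
GPUS_PER_RACK = 72

hardware_budget = int(SITE_CAPEX * HARDWARE_SHARE)  # $800M for AI hardware
racks = hardware_budget // RACK_PRICE               # ~266 NVL72 racks
gpus = racks * GPUS_PER_RACK                        # ~19,000 GPUs

print(f"{racks} racks, {gpus:,} GPUs per data center")
```

which lands just above 19,000 GPUs, close to the round 20,000 figure.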
The NVL72 rack packs 72 B200 GPUs and draws a thermal design power (TDP) of 132 kilowatts. So if these data centers are grouped together at sites similar to the one in Texas, they will require several gigawatts of power, some of which they’ll be able to get from the grid (likely from gas-powered plants or more carbon-friendly sources such as wind or solar).
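To put the gigawatt claim in perspective, here is a rough power estimate under the same assumptions, plus a hypothetical facility overhead factor (PUE) of 1.3 to account for cooling and other non-IT loads:

```python
# Rough power draw for a Stargate campus.
# Assumptions: ~266 NVL72 racks per building at their full 132 kW TDP,
# a 20-building campus (as planned in Abilene, TX), and a hypothetical
# PUE of 1.3 covering cooling and other facility overhead.
RACKS_PER_BUILDING = 266
RACK_POWER_KW = 132
PUE = 1.3

building_it_mw = RACKS_PER_BUILDING * RACK_POWER_KW / 1000  # ~35 MW IT load
campus_total_mw = 20 * building_it_mw * PUE                 # full campus

print(f"~{building_it_mw:.0f} MW per building, ~{campus_total_mw:.0f} MW per campus")
```

A single full campus thus approaches a gigawatt, and several such sites together reach the multi-gigawatt range.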
However, Trump hinted at relaxing regulations so Stargate can also get off-grid energy, either by building new small modular nuclear reactors or by refurbishing reactors that have been shut down or abandoned. The latter might be the more straightforward choice in the short term, because new small modular reactors take time to build and generate only about 300 megawatts of power each, which is insufficient for a large-scale site but may be enough for the smaller, edge data centers.
Given the power constraints and current limitations in the semiconductor manufacturing industry, we should see up to one million GPUs in total across 50 large data centers, which aligns with remarks Dario Amodei of Anthropic made at the World Economic Forum gathering in Davos this week.
In practice, some data centers will have fewer or more GPUs depending on workload specialization and whether they are built for AI training, inference, or general cloud services. It could also be that NVIDIA is not the only hardware game in town, and that some sites will go for Cerebras or AMD-based systems.
Additionally, the $100 billion outlay must cover land, buildings, power infrastructure, cooling, and other equipment, so GPU spending could be lower or higher depending on strategic choices. Cooling in particular is expected to be a challenge: power usage is among the largest line items in a data center’s operating costs, and cooling represents about 40 percent of that total, on average. And whereas H100 GPUs could be air cooled, Blackwell requires liquid cooling to handle the increased thermal load, which in turn means higher capex due to the added complexity of plumbing and liquid cooling components. These water cooling systems also consume a lot of water, which can stress overtaxed water supplies.
Nonetheless, a range of 500,000 to over a million GPUs in aggregate is a reasonable estimate for this level of investment. And with an additional $400 billion anticipated to follow, we could see up to five million GPUs deployed by the Stargate Project by 2030, as economies of scale kick in for NVIDIA and its OEMs.
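Scaling the same per-site numbers to the full program gives a sanity check on those aggregate figures (a sketch, assuming $2 billion sites with roughly 19,000 GPUs each, as estimated above):

```python
# Program-level estimate: $100B first tranche, $500B total by 2030.
# Assumes the same per-site economics as above: $2B sites, ~19,000 GPUs each.
GPUS_PER_SITE = 19_000
SITE_COST_B = 2  # billions of dollars per site

tranche1_gpus = (100 // SITE_COST_B) * GPUS_PER_SITE  # first $100B: 50 sites
program_gpus = (500 // SITE_COST_B) * GPUS_PER_SITE   # full $500B: 250 sites

print(f"first tranche: ~{tranche1_gpus:,} GPUs; full program: ~{program_gpus:,} GPUs")
```

In practice some of that capital goes to smaller, non-flagship sites, which is why the realistic range sits between 500,000 and a million GPUs for the first tranche rather than at the ceiling.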
That’s, of course, just the investment from one venture; established hyperscalers (Amazon, Microsoft, Google) will want to have competitive offerings. Speaking in Davos this week, Satya Nadella from Microsoft said his company is also spending $80 billion this year on data center infrastructure, and we should expect similar investments from Amazon and Google.
The primary winners from Stargate will be NVIDIA and its data center OEMs (Dell, HPE, Supermicro), followed by power and utility companies. Construction and real estate sectors and enterprise IT service providers will also see a boost, which explains why Trump was keen to highlight that 100,000 jobs will be created “immediately.”
But here’s some additional math, courtesy of Chinese AI lab DeepSeek. Last Christmas, the company released its DeepSeek-V3 model, which outperformed equivalent models from OpenAI, Anthropic, and Google. The DeepSeek-V3 technical report claimed the model required a budget of $5.57 million in GPU-related costs, at least one order of magnitude less than what GPT-4 or Claude 3.5 were trained with. So the question is: do we really need one-million-GPU, one-gigawatt data centers for the next generation of models, or should we instead try to do more with less?
And now, here’s the week’s news:
❤️Computer loves
Our top news picks for the week - your essential reading from the world of AI
The Stargate Project
Business Insider: Trump announces an AI infrastructure investment of up to $500 billion involving OpenAI, Oracle, and SoftBank
Bloomberg: Stargate’s First Data Center Site is Size of Central Park, With At Least 57 Jobs
Fortune: OpenAI’s Stargate may be tech’s biggest gamble ever, but here’s what’s really at stake
The Information: Behind the OpenAI-Oracle Pact, an Elon Musk Threat Loomed
Bloomberg: AI’s $100 Billion Stargate Venture Touted by Trump Will Tap Solar Power
Sifted: Winning in AI will require millions more GPUs. Can Europe get there?
TechCrunch: Meta’s Yann LeCun predicts ‘new paradigm of AI architectures’ within 5 years and ‘decade of robotics’
The Information: Startups’ AI Revenue Is Booming. Some Investors Doubt It Will Last
Wired: Game Developers Are Getting Fed Up With Their Bosses’ AI Initiatives
Fortune: ‘A sense of panic’: Immigrant AI talent worry Trump could make an already broken visa system worse
FT: Huawei seeks to grab market share in AI chips from Nvidia in China
MIT Technology Review: The second wave of AI coding is here
TechCrunch: Here are the types of AI companies enterprise VCs want to back in 2025
Washington Post: Amazon AI deal leaves ‘zombie’ start-up in its wake, whistleblower says
Science News: Want your own AI double? There could be big benefits — and risks
Subscribe to Computerspeak by Alexandru Voica to keep reading this post and get 7 days of free access to the full post archives.