Why AI Companies Are Building Data Centers in Space Instead of on Earth

Starcloud has secured $170 million in Series A funding to build artificial intelligence data centers in low Earth orbit, becoming the fastest Y Combinator startup ever to reach a $1.1 billion valuation. The company plans to launch satellites equipped with NVIDIA Blackwell GPUs (graphics processing units) that will process AI workloads directly in space, avoiding the power grid bottlenecks that plague Earth-based facilities.

What's Driving AI Companies to Launch Data Centers Into Space?

The fundamental problem is simple: artificial intelligence requires enormous amounts of electricity, and Earth's power infrastructure cannot keep pace with demand. Building a new 100-megawatt energy project on land requires five to ten years just for permitting and environmental reviews, according to Starcloud CEO Philip Johnston. Add in local opposition, and the timeline stretches even longer.

Space eliminates these bottlenecks entirely. Satellites placed in sun-synchronous orbits receive near-continuous sunlight without needing battery backup systems, making space-based solar power roughly eight times more efficient than terrestrial solar installations. As launch costs decline and manufacturing scales up, the marginal cost of deploying AI infrastructure in orbit drops significantly, while Earth-based costs continue climbing.
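The rough factor-of-eight figure can be sanity-checked with back-of-envelope arithmetic. The constants below are illustrative assumptions for that sanity check, not Starcloud's figures:

```python
# Back-of-envelope comparison of annual energy yield per square meter of
# solar panel in a sun-synchronous orbit vs. on the ground.
# All constants are illustrative assumptions, not figures from the article.

SOLAR_CONSTANT_W_M2 = 1361      # solar irradiance above the atmosphere
SPACE_DUTY_CYCLE = 0.99         # near-continuous sunlight in a dawn-dusk orbit

GROUND_PEAK_W_M2 = 1000         # standard test-condition irradiance at sea level
GROUND_CAPACITY_FACTOR = 0.17   # assumed average for utility-scale solar
                                # (night, weather, sun angle); varies widely by site

HOURS_PER_YEAR = 8766

space_kwh = SOLAR_CONSTANT_W_M2 * SPACE_DUTY_CYCLE * HOURS_PER_YEAR / 1000
ground_kwh = GROUND_PEAK_W_M2 * GROUND_CAPACITY_FACTOR * HOURS_PER_YEAR / 1000

print(f"space:  {space_kwh:,.0f} kWh/m^2/yr")
print(f"ground: {ground_kwh:,.0f} kWh/m^2/yr")
print(f"ratio:  {space_kwh / ground_kwh:.1f}x")
```

Under these assumed numbers the ratio lands close to the eightfold figure cited above; the exact multiple depends heavily on the terrestrial site's capacity factor.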

"The marginal cost of building data centers on Earth continually rises, but the marginal cost in space declines as launch capacity scales and manufacturing rates increase," noted Philip Johnston, CEO of Starcloud.

SpaceX CEO Elon Musk has predicted that deploying AI in space will become cheaper than terrestrial deployment within just two to three years. He emphasized that as Earth's "easy spots" for power generation get used up, development becomes increasingly difficult and expensive due to local resistance.

How Are Companies Preparing for Orbital AI Infrastructure?

  • Hardware Development: NVIDIA launched its Space-1 Vera Rubin Module and IGX Thor platforms, specifically engineered to deliver data-center-class AI inference in size-, weight-, and power-constrained orbital environments where traditional cooling systems don't work.
  • Launch Capacity: SpaceX introduced the Terafab initiative, a strategic effort to build a terawatt of compute power in space using the Starship rocket's massive payload capacity, targeting 10 million tons to orbit per year.
  • Commercial Validation: Starcloud's Starcloud-2 satellite, launching later this year, will feature NVIDIA Blackwell B200 chips and run commercial workloads for customers including Crusoe, AWS, and Google Cloud.

The technology is not theoretical. Starcloud's Starcloud-1 module, launched in November 2025, successfully operated an NVIDIA H100 GPU in orbit with no restarts attributable to the chip itself, demonstrating that commercial off-the-shelf silicon can survive and thrive in space.

When Will Orbital Data Centers Become Cost-Competitive With Earth-Based Facilities?

The economics hinge on launch costs. Starcloud estimates that orbital facilities will become cost-competitive with terrestrial data centers as soon as SpaceX's Starship is flying frequently for commercial payloads, which is expected by mid-to-late 2028. The break-even point is around $500 per kilogram for GPU payloads, though as permitted land on Earth grows more expensive, that threshold is shifting upward toward $1,000 per kilogram.
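The logic behind a break-even launch price, and why rising land costs push it upward, can be sketched in a few lines. The mass-per-kilowatt and site-premium figures below are assumptions chosen for illustration, not Starcloud's actual numbers:

```python
# Illustrative break-even sketch: at what launch price per kilogram does an
# orbital data center's capex match a terrestrial one's? All inputs are
# assumptions for illustration, not figures from Starcloud.

KG_PER_KW = 10.0  # assumed mass of GPUs + solar + radiators per kW of compute

def breakeven_launch_price(site_premium_usd_per_kw, kg_per_kw=KG_PER_KW):
    """Launch price ($/kg) at which orbital and terrestrial capex are equal.

    site_premium_usd_per_kw: land, permitting, grid hookup, and cooling
    costs that a terrestrial build pays but an orbital one avoids.
    Hardware cost is assumed equal on both sides, so it cancels out.
    """
    return site_premium_usd_per_kw / kg_per_kw

# With an assumed $5,000/kW terrestrial site premium, break-even is $500/kg.
print(breakeven_launch_price(5_000))   # 500.0
# If permitted land and grid hookups double in cost, the tolerable launch
# price doubles too, moving the threshold toward $1,000/kg.
print(breakeven_launch_price(10_000))  # 1000.0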

The investment community is already betting on this transition. EQT Ventures, whose parent company owns over 70 terrestrial data centers, co-led Starcloud's Series A funding round. This signals that traditional infrastructure players are hedging their bets and preparing for a future where orbital compute becomes the dominant model.

"Intelligence must live wherever data is generated," said NVIDIA CEO Jensen Huang, specifically naming Starcloud as a partner in bringing hyperscale AI to orbit.

Starcloud's rapid ascent to unicorn status provides market validation for the ambitious space roadmaps recently laid out by NVIDIA and SpaceX. The round brings the company's total capital raised to $200 million, which it will use to establish a dedicated manufacturing facility, expand headcount, and procure future launch contracts.

Within the next decade, Johnston projects that close to a trillion dollars per year in capital expenditure will be deployed into space-based compute. The next era of AI scaling will not be defined by terrestrial real estate, but by early movers securing the best orbits and highest launch cadences for their orbital data centers. For hyperscalers and AI developers who ignore this transition, the risk is severe: they may become constrained by terrestrial power limits while competitors scale freely in orbit.