In a sense, this whole thing was inevitable. Elon Musk and his coterie have been talking about artificial intelligence in space for years — mainly in the context of Iain Banks’ sci-fi series about a far-future universe where sentient spaceships roam and rule the galaxy.
Now Musk sees an opportunity to realize a version of that vision. His company, SpaceX, has applied for regulatory approval to build solar-powered orbital data centers, deployed on more than a million satellites, that could move up to 100 GW of computing power off-planet. He reportedly suggested that some of his AI satellites would be built on the moon.
“By far the cheapest place to put AI will be space in 36 months or less,” Musk said last week on a podcast hosted by Stripe co-founder John Collison.
He is not alone. xAI’s head of computing has reportedly bet his counterpart at Anthropic that 1% of global computing will be in orbit by 2028. Google (which holds a significant ownership stake in SpaceX) announced a space-based artificial intelligence initiative called Project Suncatcher that will launch prototype vehicles in 2027. Starcloud, a startup that has raised $34 million and attracted Google’s interest, filed plans for its own 80,000-satellite constellation last week. Even Jeff Bezos has said this is the future.
But beyond the hype, what will it actually take to get data centers into space?
On a first analysis, today’s terrestrial data centers remain cheaper than their orbital counterparts. Andrew McCalip, an aerospace engineer, has created a useful calculator that compares the two models. Its baseline results show that a 1 GW orbital data center could cost $42.4 billion, nearly three times its terrestrial equivalent, thanks to the upfront cost of building the satellites and getting them into orbit.
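As a rough sketch of that comparison (the terrestrial figure here is an assumption backed out from the “nearly three times” ratio above, not a number from McCalip’s actual model):

```python
# Illustrative cost comparison for a 1 GW data center, orbital vs. terrestrial.
# The terrestrial capex is an assumed value, not from McCalip's calculator.

def orbital_cost_premium(orbital_capex: float, terrestrial_capex: float) -> float:
    """How many times more the orbital build-out costs than the terrestrial one."""
    return orbital_capex / terrestrial_capex

ORBITAL_1GW = 42.4e9       # article's baseline for a 1 GW orbital data center
TERRESTRIAL_1GW = 14.5e9   # assumed terrestrial equivalent (~1/3 of orbital)

premium = orbital_cost_premium(ORBITAL_1GW, TERRESTRIAL_1GW)
print(f"Orbital costs about {premium:.1f}x the terrestrial build")
```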
Changing that equation, experts say, will require technological progress in several areas, massive capital expenditure, and a great deal of supply-chain work on space-grade components. It also depends on terrestrial costs, which are rising as resources and supply chains are strained by surging demand.
Designing and launching satellites
A key factor in any space business model is how much it costs to get anything up there. Musk’s SpaceX is already pushing down the cost of a trip to orbit, but analysts weighing what it would take to make orbital data centers a reality need even lower prices to make the business case work. In other words, while AI data centers may look like a fresh business line ahead of SpaceX’s IPO, the plan hinges on the completion of the company’s longest-running unfinished project: Starship.
Consider that a reusable Falcon 9 today costs roughly $3,600/kg to orbit. To make space data centers feasible, according to the Project Suncatcher white paper, prices closer to $200/kg will be required, an 18-fold improvement that the paper projects should be available in the 2030s. At that price, the power delivered by a Starlink-class satellite would be cost-competitive with a terrestrial data center.
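The arithmetic behind that 18-fold figure is straightforward:

```python
# Launch-cost gap between today's reusable Falcon 9 and the price point
# the Project Suncatcher white paper assumes for break-even.

FALCON9_USD_PER_KG = 3600   # approximate cost to orbit today
TARGET_USD_PER_KG = 200     # Suncatcher's assumed future price

improvement_needed = FALCON9_USD_PER_KG / TARGET_USD_PER_KG
print(f"Launch prices must fall {improvement_needed:.0f}x")
```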
SpaceX’s next-generation Starship rocket is expected to deliver those improvements; no other vehicle in development promises equivalent savings. However, Starship has yet to become operational or even reach orbit. The third iteration of the vehicle is expected to launch for the first time sometime in the coming months.
Even if Starship were completely successful, the assumption that it would immediately deliver lower prices to customers may not pass the smell test. Economists at the consultancy Rational Futures make a compelling case that, as with the Falcon 9, SpaceX won’t want to charge much less than its best competitor; otherwise the company is leaving money on the table. For example, if Blue Origin’s New Glenn rocket were selling for $70 million, SpaceX would have little reason to fly Starship missions for outside customers at a much lower price, which would keep launch costs above the numbers publicly projected by space data center builders.
“There aren’t enough rockets to launch a million satellites yet, so we’re a long way from that,” Matt Garman, CEO of Amazon Web Services, said at a recent event. “If you think about the cost of getting a payload into space today, it’s enormous. It’s just not economical.”
Still, if launch is the bane of all space ventures, the other issue is production costs.
“At this point, we always take for granted that the cost of Starship will be hundreds of dollars per kilo,” McCalip told TechCrunch. “People don’t take into account that satellites are almost $1,000 a kilogram now.”
The cost of manufacturing the satellites is the largest part of that price tag, but if high-performance satellites can be produced for about half the cost of current Starlink satellites, the numbers start to make sense. SpaceX made big strides in satellite economics while building Starlink, its record-breaking communications network, and the company hopes to achieve more with scale. Part of the rationale for a million satellites is undoubtedly the cost savings that come from mass production.
However, the satellites that will be used for these missions must be large enough to meet the complex requirements of operating powerful GPUs, including large solar arrays, thermal management systems, and laser-based communications links for receiving and delivering data.
Project Suncatcher’s 2025 white paper offers one way to compare ground-based and space-based data centers: the cost of energy, the basic input needed to run chips. On the ground, data centers spend roughly $570 to $3,000 per kW per year, depending on local energy costs and the efficiency of their systems. SpaceX’s Starlink satellites get their power from onboard solar panels, but once the cost of acquiring, launching, and maintaining those spacecraft is factored in, that energy works out to about $14,700 per kW per year. Simply put, satellites and their components will need to get much cheaper to compete with metered power on the ground.
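Using those per-kW-per-year figures, the gap works out as follows:

```python
# How far orbital energy costs must fall to match terrestrial data centers,
# using the per-kW-per-year figures cited in the article.

ORBITAL_USD_PER_KW_YR = 14_700
TERRESTRIAL_USD_PER_KW_YR = (570, 3_000)  # cheap-power vs. expensive-power sites

for ground in TERRESTRIAL_USD_PER_KW_YR:
    factor = ORBITAL_USD_PER_KW_YR / ground
    print(f"vs. ${ground}/kW-yr on the ground: orbit must get {factor:.1f}x cheaper")
```

So orbital energy needs to get roughly 5x cheaper to beat the most expensive terrestrial sites, and about 26x cheaper to beat the cheapest ones.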
The space environment doesn’t fool around
Proponents of orbital data centers often say that temperature control is “free” in space, but this is an oversimplification. Without an atmosphere, it is actually more difficult to dissipate heat.
“You’re relying on very large radiators that are just going to be able to dissipate heat into the darkness of space, and that’s a lot of surface area and mass that you have to deal with,” said Mike Safyan, an executive at Planet Labs, which is building prototype satellites for Google’s Project Suncatcher ahead of their expected 2027 launch. “It’s seen as one of the key challenges.”
In addition to the vacuum of space, AI satellites will also have to deal with cosmic radiation. Cosmic rays degrade chips over time and can also cause “bit flip” errors that corrupt data. Chips can be shielded, built from radiation-hardened components, or run redundantly with error checking, but all of these options involve expensive trade-offs in mass. Google has used a particle beam to test the effects of radiation on its Tensor Processing Units (chips designed specifically for machine learning applications), and SpaceX management has said on social media that the company acquired a particle accelerator for the same purpose.
Another challenge is the solar panels themselves. The logic of the project is energy arbitrage: solar panels in space can produce five to eight times more energy than on Earth, and in the right orbit they can be in sight of the sun for 90% of the day or more, increasing their output. Electricity is the main fuel for chips, so more power = cheaper data centers. But even solar panels are more complicated in space.
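A back-of-envelope sketch of that arbitrage, with assumed capacity factors (the 20% terrestrial figure is an illustrative assumption, not from the article):

```python
# Rough annual energy yield per kW of panel, ground vs. orbit.
# Capacity factors are illustrative assumptions.

PANEL_KW = 1.0          # rated panel output
GROUND_FACTOR = 0.20    # assumed terrestrial capacity factor (night, clouds, angle)
ORBIT_FACTOR = 0.90     # the article's "in sight of the sun 90% of the day or more"
HOURS_PER_YEAR = 8760

ground_kwh = PANEL_KW * GROUND_FACTOR * HOURS_PER_YEAR
orbit_kwh = PANEL_KW * ORBIT_FACTOR * HOURS_PER_YEAR
print(f"Orbit yields {orbit_kwh / ground_kwh:.1f}x the annual energy")
```

Duty cycle alone gives about 4.5x under these assumptions; avoiding atmospheric losses pushes the real-world figure toward the five-to-eight-times range cited above.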
Traditional space-grade solar panels, made from exotic semiconductor materials, are durable but expensive. Solar panels made of silicon are cheap and increasingly prevalent in space (Starlink and Amazon’s Kuiper use them), but they degrade much more quickly under cosmic radiation. That could limit the lifespan of AI satellites to around five years, meaning they would need to generate a return on investment more quickly.
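A five-year life roughly doubles the capital a satellite must recover each year versus a ten-year asset. A minimal sketch, with an assumed per-satellite capex (the $10 million figure is hypothetical, not from any company’s filings):

```python
# Straight-line capex recovery per year of service. The satellite cost is
# a hypothetical placeholder, not a figure from any company's filings.

def annual_capex_recovery(capex_usd: float, lifetime_years: float) -> float:
    """Capex that must be earned back each year over the satellite's life."""
    return capex_usd / lifetime_years

SAT_CAPEX = 10e6  # assumed cost to build and launch one AI satellite

print(annual_capex_recovery(SAT_CAPEX, 5))   # 5-year (silicon-panel) life
print(annual_capex_recovery(SAT_CAPEX, 10))  # hypothetical 10-year life
```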
Still, some analysts think it’s not that big of a deal, based on how quickly new generations of chips are coming on the scene. “After five or six years, the dollars per kilowatt-hour are not coming back, and that’s because they’re not cutting edge,” Starcloud CEO Philip Johnston told TechCrunch.
Danny Field, chief executive of Solestial, a startup making silicon solar panels for space, says the industry sees orbital data centers as a key driver of growth. He is talking to several companies about potential data center projects and says “any player big enough to dream is at least thinking about it.” However, as a longtime spacecraft designer, he doesn’t take these models at face value.
“You can always extrapolate the physics to a larger size,” Field said. “I’m excited to see some of these companies get to the point where the economics make sense and the business case closes.”
How do space data centers fit into this?
One outstanding question about these data centers: What are we going to do with them? Are they for general purposes, or for inference, or for training? Based on existing use cases, they may not be completely interchangeable with data centers on the ground.
A key challenge for training new models is running thousands of GPUs together in concert. Most model training is not distributed across sites, but is done within individual data centers. Hyperscalers are working to change this to improve the performance of their models, but it hasn’t yet been achieved. Similarly, training in space would require coherence between GPUs on multiple satellites.
Google’s Project Suncatcher team notes that the company’s terrestrial data centers interconnect their TPU networks with throughput in the hundreds of gigabits per second. Today’s fastest commercial intersatellite communication links, which use lasers, reach only about 100 Gb/s.
That led to an interesting architecture for Suncatcher: flying 81 satellites in formation so they’re close enough to use the kind of transceivers that ground-based data centers rely on. Of course, this presents its own challenges, chiefly the autonomy required to keep each spacecraft on station, even when maneuvers are necessary to avoid orbital debris or another spacecraft.
Still, the Google study offers a caveat: Inference can tolerate orbital radiation environments, but more research is needed to understand the potential impact of bit-flips and other errors on training workloads.
Inference tasks don’t require GPUs to work in lockstep the same way. That work can be done with dozens of GPUs, perhaps on a single satellite, an architecture that amounts to a kind of minimum viable product and a likely starting point for an orbital data center business.
“Training is not an ideal thing in space,” Johnston said. But “I think almost all workloads are going to be done in space,” he added, imagining everything from voice customer service agents to ChatGPT queries being computed in orbit. He says his company’s first AI satellite is already making money from on-orbit inference.
While details are scarce even in the company’s FCC filing, SpaceX’s constellation of orbital data centers appears to assume about 100 kW of computing power per ton, roughly double the power density of current Starlink satellites. The spacecraft will work in conjunction with one another and use the Starlink network to move data; the filing claims Starlink’s laser links can achieve petabit-level throughput.
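Those figures imply a tidy mass budget when scaled to the 100 GW ambition (a back-of-envelope check, not a number from the filing itself):

```python
# Sanity check: ~100 kW of compute per ton, scaled to a 100 GW constellation
# of about one million satellites (figures from the article).

KW_PER_TON = 100
TOTAL_KW = 100e6          # 100 GW expressed in kW
SATELLITES = 1_000_000

total_tons = TOTAL_KW / KW_PER_TON
tons_per_satellite = total_tons / SATELLITES
print(f"{total_tons:,.0f} tons total, ~{tons_per_satellite:.0f} ton per satellite")
```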
SpaceX’s recent acquisition of xAI (which builds its own ground-based data centers) gives the company a position in both terrestrial and orbital data centers, letting it see which supply chain scales faster.
That’s the advantage of interchangeable floating point operations per second, if you can make it happen. “A FLOP is a FLOP, it doesn’t matter where it lives,” McCalip said. “[SpaceX] can just scale until [it] hits power or capex bottlenecks on the ground and then fall back on [its] space deployment.”
Got a sensitive tip or confidential documents about SpaceX? Contact Tim Fernholz at tim.fernholz@techcrunch.com. For secure communication, you can reach him via Signal at tim_fernholz.21.