Powering AI from Orbit: SpaceX, Space Solar, and the Musk Stack (Pt.2)
Summary
- The bet is that AI compute has outgrown Earth — power, cooling, and land are all hitting walls — and 1.2 million satellites can host it in orbit instead.
- Starship makes launch tractable, but cell chemistry is the real economic lever — HJT and perovskite tandems are what actually collapse the cost stack.
- The constellation is a three-tier fabric — VLEO for inference, LEO for training, MEO for storage — wired together by lasers into one distributed supercomputer.
- Space doesn't make cooling free; it makes it structural: the same panel that catches sunlight on one side dumps waste heat on the other.
- Vacuum has no chromatic dispersion, which rewrites the photonics supply chain — most links can use $10 lasers instead of $1,000 coherent modules.
- The endgame is a closed loop: xAI plans, Optimus builds, Tesla fabs, SpaceX launches — with the Moon eventually taking over from Earth as the deployment base.
Space AI Data Center Architecture
If Part 1 made the case for why space solar is the path forward, Part 2 turns to the harder question: what does it actually look like in orbit? The architecture Musk is assembling is not a gradual extension of today's satellite industry — it is a step-change in scale, and the numbers only make sense once you see how the pieces fit together.
Start with the filing itself. SpaceX has applied to the FCC to launch over 1.2 million satellites — more than every satellite humanity has ever put into orbit, combined. These satellites will form a three-tier constellation designed to function as a single distributed compute fabric.
The first wave will be variants of the Starlink V3 platform already in production: roughly 2 tons each, 30 kW of solar capacity, and around 500 m² of panel area per satellite. Familiar hardware, unfamiliar purpose.
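A quick sanity pass on that spec. Taking the article's own figures (30 kW and 500 m² per satellite, just over 1.2 million satellites in the filing), simple arithmetic gives the fleet-level solar capacity and the areal power density; everything here is derived, not quoted:

```python
# Fleet-level implications of the per-satellite spec. The 30 kW, 500 m^2,
# and ~1.2 million satellite figures come from the article; the rest is
# arithmetic.

sat_count = 1_200_000       # "over 1.2 million" per the FCC filing
kw_per_sat = 30             # solar capacity per satellite
panel_m2_per_sat = 500      # deployed panel area per satellite

fleet_gw = sat_count * kw_per_sat / 1e6
areal_w_per_m2 = kw_per_sat * 1_000 / panel_m2_per_sat

print(f"Fleet solar capacity: {fleet_gw:.0f} GW")
print(f"Areal power density: {areal_w_per_m2:.0f} W/m^2 of deployed panel")
```

The ~60 W/m² areal figure is well below what modern cells produce under full sun, which suggests the 500 m² number covers the entire deployed structure rather than active cell area alone.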
Three Orbital Tiers
Each tier is optimized for a different workload, with altitude trading off against latency, coverage, and thermal headroom.
- VLEO (500 km) — 800,000 satellites. The edge layer. Each satellite delivers roughly 1,000 TOPS (tera-operations per second, assuming FP8 precision, though that target could change) with ~20 ms latency to ground users, handling real-time inference for everything from autonomous vehicles to consumer AI assistants.
- LEO (1,000 km) — 300,000 satellites. The regional layer. At ~5,000 TOPS per satellite, this tier carries the bulk of model training and mid-tier compute.
- MEO (2,000 km) — 118,000 satellites. The cloud layer. At ~20,000 TOPS per satellite, MEO handles long-running workloads and data storage, where latency matters less than density and stability.
Underneath all three tiers, the platform is the same. Each Starlink V3 satellite weighs roughly 2 tons, and nearly half that mass — about 850 kg — is dedicated to the compute payload itself: radiation-hardened AI silicon designed to survive cosmic ray strikes that would corrupt ordinary chips, liquid cooling loops to shed heat in an environment with no air for convection, and high-efficiency power conversion electronics that translate the solar array's raw output into the precise voltages the processors need. These aren't communications satellites with some compute bolted on. They're flying data centers, with the bus built around the silicon rather than the other way around.
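The "radiate from the shaded side" scheme can be sized with the Stefan-Boltzmann law. The waste-heat figure below assumes nearly all of the 30 kW array output ends up as heat in the payload; the radiator temperature and emissivity are illustrative assumptions, not figures from the article:

```python
# Rough radiator sizing for one-sided thermal rejection from the panel's
# shaded back face. Emissivity and radiator temperature are assumed values;
# Earth-shine and the ~3 K sky background are ignored for simplicity.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

waste_heat_w = 30_000    # assume ~all of the 30 kW array output becomes heat
emissivity = 0.90        # typical for radiator coatings (assumption)
radiator_temp_k = 300.0  # assumed steady-state radiator temperature

# Radiated flux per square meter of one-sided radiator (Stefan-Boltzmann law)
flux_w_per_m2 = emissivity * SIGMA * radiator_temp_k ** 4
area_m2 = waste_heat_w / flux_w_per_m2

print(f"{flux_w_per_m2:.0f} W/m^2 rejected -> {area_m2:.0f} m^2 of radiator")
```

At these assumptions the required radiator area (~70 m²) fits comfortably within the 500 m² of deployed panel, which is why the back-of-panel scheme is plausible at all.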
The compute itself is a space-rated variant of Dojo 2 — Tesla's second-generation AI training chip, redesigned for the orbital environment — with 12 to 24 chips per satellite depending on the tier. Stitching them together are inter-satellite laser links, the connective tissue of the constellation, now running at 10 Gbps per channel; terrestrial fiber typically multiplexes 8 to 16 wavelength channels per lane, but in vacuum, free of chromatic dispersion, the channel count has far more headroom. Across thousands of simultaneous links, aggregate throughput climbs into the terabits per second, enough to let geographically scattered satellites behave as a single coherent compute fabric rather than a million isolated nodes.
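How per-channel rates compound into fabric-level throughput can be sketched directly. Only the 10 Gbps per channel figure and the "terabits aggregate" claim come from the article; the channel and link counts below are illustrative assumptions:

```python
# Aggregate fabric throughput from per-channel laser-link rates.
# channels_per_link mirrors a terrestrial-style WDM count; the number of
# simultaneous links is a placeholder, not a figure from the article.

gbps_per_channel = 10    # per-channel rate quoted in the article
channels_per_link = 16   # wavelength channels per link (assumption)
links = 5_000            # simultaneous inter-satellite links (assumption)

aggregate_tbps = gbps_per_channel * channels_per_link * links / 1_000
print(f"{aggregate_tbps:.0f} Tbps aggregate across the fabric")
```

Even with these modest placeholder counts the fabric lands in the hundreds of terabits per second, consistent with the article's "terabits" framing; scaling channel counts beyond terrestrial norms, as the dispersion-free vacuum permits, only pushes it higher.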