Themes - SysMoore & Energy (Pt.3)

Summary

• AI is turning data center demand into a continuous power load, colliding with a US grid that has seen little capacity growth for two decades.

• Chip production alone illustrates the scale: the aggregate peak power draw of a single year's AI accelerator output could approach hundreds of GW, comparable to grid-level demand.

• China holds a structural advantage with ~3× US capacity and 11× faster build rates, while the US faces supply and permitting bottlenecks.

• US strengths in natural gas and nuclear are offset by turbine shortages, long build cycles, and unit sizes that lag multi-GW AI campus needs.

• Earth’s energy resources are ultimately limited; long-term AI-scale abundance likely depends on massive solar expansion, potentially space-based.

The Scale of the Problem

Before the AI era, data centers represented a low single-digit percentage of total electricity consumption worldwide — an afterthought in the context of global energy infrastructure. That has changed dramatically. AI infrastructure is scaling by orders of magnitude across every dimension — compute, memory, networking, and critically, power consumption — to a level that the legacy utility industry, largely stagnant for decades, simply cannot accommodate quickly enough.

The United States has a total electricity generation capacity of approximately 1,270 GW (utility-scale summer capacity as of late 2025, or roughly 1,330 GW including small-scale and distributed solar PV). This figure has remained essentially flat for the past two decades. Meanwhile, the power envelope of a single frontier AI data center has ballooned from hundreds of megawatts to over 1 GW, and roadmaps point toward 5–10 GW campuses within the next few years. The mismatch is staggering: a single 5–10 GW campus would draw on the order of half a percent to nearly a full percent of the nation's entire generation capacity.
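The arithmetic behind that mismatch is simple enough to sketch. Using the ~1,270 GW capacity figure from the text, a few lines of Python show what share of US generation a single campus would claim at different sizes (the campus sizes are the ones quoted above; everything else is straightforward division):

```python
# Back-of-envelope: one AI campus's draw as a share of US generation capacity.
# Capacity figure (~1,270 GW utility-scale) is taken from the text above.
US_CAPACITY_GW = 1270.0

def campus_share_pct(campus_gw: float) -> float:
    """Return a campus's power draw as a percentage of total US capacity."""
    return 100.0 * campus_gw / US_CAPACITY_GW

for gw in (0.3, 1.0, 5.0, 10.0):
    print(f"{gw:4.1f} GW campus -> {campus_share_pct(gw):.2f}% of US capacity")
# A 10 GW campus works out to roughly 0.79% of total capacity.
```

Even the 1 GW campuses already operating sit just under a tenth of a percent of national capacity; the 5–10 GW roadmaps push a single site toward a full percent.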

As a result, every vendor across the electricity value chain is under immense pressure. Because progress in the physical world — generation, transmission, switchgear, transformers — has remained muted for decades in the US, these suppliers cannot simply scale production to meet surging demand. The consequence is 2–3 year lead times for nearly everything related to electricity supply: gas turbines, high-voltage transformers, switchgear, and grid interconnection.

What makes the future outlook even more challenging is the composition of demand. Training runs will continue to scale in size and energy consumption, but inference demand is only now beginning to ramp in earnest. Training is episodic — large models are developed periodically and updated infrequently. Inference, by contrast, is continuous and scales directly with usage. Every query, every agent action, every automated workflow, and every simulation step requires the model to run. As AI becomes embedded across consumer applications, enterprise software, and industrial systems, billions of inference calls will execute daily, turning what is individually a smaller workload into a far larger aggregate energy burden.
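The episodic-versus-continuous distinction can be made concrete with a hedged back-of-envelope comparison. All of the numbers below are hypothetical placeholders (the source gives no per-query or per-run figures); the point is only that billions of tiny inference calls, sustained around the clock, can add up to a larger average load than an occasional large training run:

```python
# Hypothetical illustration: continuous inference vs. episodic training.
# All energy figures below are assumed for illustration, not sourced.
HOURS_PER_DAY = 24

def avg_power_gw(energy_gwh_per_day: float) -> float:
    """Convert daily energy use (GWh/day) to an average continuous load (GW)."""
    return energy_gwh_per_day / HOURS_PER_DAY

# Training: assume a 100 GWh run amortized over a 90-day schedule (hypothetical).
training_gwh_per_day = 100 / 90

# Inference: assume 5 billion calls/day at 1 Wh per call (hypothetical).
inference_gwh_per_day = 5e9 * 1.0 / 1e9  # Wh -> GWh

print(f"Training avg load:  {avg_power_gw(training_gwh_per_day):.3f} GW")
print(f"Inference avg load: {avg_power_gw(inference_gwh_per_day):.3f} GW")
```

Under these assumed numbers the steady inference load is several times the amortized training load, and unlike training it scales linearly with usage rather than with the release cadence of new models.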