OpenAI, Broadcom, NVIDIA, and AMD say they'll deploy 10GW of AI compute by the end of 2026. That includes custom chips and a slew of 1GW data centers. What they didn't say: where, when, or how. No sites named. No shovels in the dirt.
OpenAI alone aims for 250GW by 2033—a moonshot that would require $400B in spending over the next 12 months alone. That pace outstrips anything the cloud giants have ever managed, and it runs headfirst into supply constraints on every front: chips, energy, talent.