I think compute scaling will slow substantially by around 2030
If scaling slows down earlier, there will be signs: it's possible that in 2027-2028 we will already be observing that there is no resolve to start building 5 GW Rubin Ultra training systems (let alone the less efficient but available-a-year-earlier 5 GW non-Ultra Rubin systems), in which case we can update then, without waiting for 2030.
This could result from some combination of underwhelming algorithmic progress, RLVR scaling not working out, and the roughly 10x compute scale-up from 100K H100 chips to 400K GB200 chips not particularly helping, so that the AIs of 2027 fail to be substantially more capable than the AIs of 2025.
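For a sense of where that roughly-10x figure could come from, here is a back-of-envelope sketch. The per-chip throughput numbers are my assumptions (on the order of 1 PFLOP/s dense BF16 for an H100 and ~2.25 PFLOP/s for a GB200-class Blackwell chip), not official specs:

```python
# Back-of-envelope: how a ~10x jump in training compute could arise
# from the 100K H100 -> 400K GB200 transition. The per-chip FLOP/s
# figures are rough assumptions, not quoted specifications.

h100_flops = 1.0e15    # assumed dense BF16 FLOP/s per H100 (~1 PFLOP/s)
gb200_flops = 2.25e15  # assumed dense BF16 FLOP/s per GB200-class chip

old_system = 100_000 * h100_flops   # 100K H100 chips
new_system = 400_000 * gb200_flops  # 400K GB200 chips

print(f"chip-count scaling: {400_000 / 100_000:.1f}x")   # 4.0x
print(f"per-chip scaling:   {gb200_flops / h100_flops:.2f}x")  # 2.25x
print(f"total scaling:      {new_system / old_system:.1f}x")   # ~9x, i.e. roughly 10x
```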
To be sure, this scenario doesn't seem particularly likely. And if the revenues of companies like OpenAI and Anthropic keep growing sufficiently in 2025-2026, that will be an even earlier sign that the scaling slowdown isn't coming before 2027-2028, though much of that revenue might itself be indirectly investment-fueled and could evaporate if AI stops improving substantially.