The key decision point in my model at which things might become a bit different is if we hit the end of the compute overhang: at that point you can't scale up AI further simply through more financial investment, but instead need to substantially ramp up global compute production and make algorithmic progress, which might markedly slow down progress.
I think compute scaling will slow substantially by around 2030 (edit: if we haven't seen transformative AI by then). (There is some lag, so I expect the annual growth rate of capex to have already slowed by mid-2028 or so, but it will take a while before this hits scaling.)
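To make the "investment can't keep scaling past ~2030" intuition concrete, here is a toy extrapolation. All inputs are illustrative assumptions of mine, not numbers from the comment: a frontier-training capex of roughly $3B in 2025 and a ~3x annual growth multiplier are ballpark figures consistent with public estimates, but the exact values don't matter much for the qualitative point.

```python
# Toy extrapolation: why purely investment-driven compute scaling
# plausibly stalls around 2030. All numbers are assumptions.
base_year = 2025
base_capex = 3e9   # assumed frontier-training capex in 2025, USD
growth = 3.0       # assumed annual capex multiplier

for year in range(base_year, 2031):
    capex = base_capex * growth ** (year - base_year)
    print(year, f"${capex / 1e9:.0f}B")
```

Under these assumptions the 2030 figure comes out around $729B per year, on the order of total global data-center capex today, so at some point growth has to come from building more fabs and power rather than from just spending a larger share of existing capacity.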
Also, it's worth noting that most of the algorithmic progress AI companies are making is driven by scaling up compute (because scaling up labor effectively is so hard: talented labor is limited, humans parallelize poorly, and you can't pay more to make them run faster). So I expect algorithmic progress will also slow around this point.
All these factors make me think that something like 2032 or maybe 2034 could be a reasonable Schelling time (I agree that 2028 is a bad Schelling time), but IDK if I see that much value in having a Schelling time (I think you probably agree with this).
In practice, we should be making large updates (in expectation) over the next 5 years regardless.
I think compute scaling will slow substantially by around 2030
There will be signs if it slows down earlier: it's possible that in 2027-2028 we will already be observing that there is no resolve to start building 5 GW Rubin Ultra training systems (let alone the less efficient but available-a-year-earlier 5 GW non-Ultra Rubin systems), so we could update then, without waiting for 2030.
This could result from some combination of underwhelming algorithmic progress, RLVR scaling not working out, and the 10x compute scaling from 100K H100 chips to 400K GB200 chips not particularly helping, so that AIs of 2027 fail to be substantially more capable than AIs of 2025.
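The "10x compute scaling" figure in the step from 100K H100s to 400K GB200s can be sanity-checked with back-of-the-envelope arithmetic. The per-chip throughput numbers below are my own ballpark assumptions (roughly 1e15 dense BF16 FLOP/s for an H100 and ~2.5x that for a GB200), not figures from the comment:

```python
# Sanity check of the ~10x cluster-compute scaling claim.
# Per-chip throughputs are assumed ballpark figures, not official specs.
h100_flops = 1.0e15    # assumed H100 training throughput, FLOP/s
gb200_flops = 2.5e15   # assumed GB200 training throughput, FLOP/s

cluster_2024 = 100_000 * h100_flops    # ~100K H100 cluster
cluster_2026 = 400_000 * gb200_flops   # ~400K GB200 cluster

scale_factor = cluster_2026 / cluster_2024
print(scale_factor)  # 10.0 under these assumptions
```

So the claimed 10x follows directly from 4x more chips times ~2.5x more compute per chip, under these assumed per-chip numbers.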
But sure, this doesn't seem particularly likely. And there will be even earlier signs, before 2027-2028, that the scaling slowdown isn't happening, if the revenues of companies like OpenAI and Anthropic keep growing sufficiently in 2025-2026; though most of these revenues might also be indirectly investment-fueled, threatening to evaporate if AI stops improving substantially.