That scenario is not impossible. If we aren’t in a bubble, I’d expect something like that to happen.
It’s still premised on the idea that more training/inference/resources will result in qualitative improvements.
We’ve seen model after model get better and better, without any of them overcoming the fundamental limitations of the genre. Fundamentally, they still break when out of distribution (this is hidden in part by their extensive training, which puts more material in distribution without solving the underlying issue).
So your scenario is possible; I had similar expectations a few years ago. But I’m seeing more and more evidence against it, so I’m giving it a lower probability (maybe 20%).
I’m responding to the claim that training scaling laws “have ended”, even as the question of “the bubble” might be relevant context. The claim isn’t very specific, and the useful ways of making it specific seem to render it false, either in itself or in its implication that the observations so far offer any support for it.
The scaling laws don’t depend on how much compute we’ll be throwing at training or when; they predict how perplexity depends on the amount of compute. For scaling laws in this sense to become false, we’d need to show that perplexity starts depending on compute in some different way (at higher compute than has been tried). Not having enough compute doesn’t make the scaling laws false, and neither does not having enough data.
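As a sketch of what that claim amounts to, here is the usual Chinchilla-style parametric form; the coefficients below are illustrative placeholders of roughly plausible magnitude, not values I’m vouching for:

```python
# Illustrative Chinchilla-style scaling law: loss as a function of parameters N
# and training tokens D, with compute C ~ 6*N*D FLOPs for a dense transformer.
# The constants are placeholders, not a real fit.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    E, A, alpha, B, beta = 1.7, 400.0, 0.34, 410.0, 0.28  # assumed constants
    return E + A / n_params**alpha + B / n_tokens**beta

def training_compute(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

# The scaling law is a claim about this relationship. A shortage of compute or
# of tokens just means we stop evaluating it at larger arguments; it doesn't
# by itself show the relationship breaks down there.
```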
For practical purposes, scaling laws could be said to fail once they can no longer be exploited for making models better. As I outlined, there’s going to be significantly more compute soon (this is still the case with “a bubble”, which might at most cut compute by about 3x relative to the more optimistic projection of 200x-400x over currently deployed models for models by 2031). The text data is plausibly in some trouble even for training with 2026 compute, and likely in a lot of trouble for training with 2028-2030 compute. But this hasn’t happened yet, so the claim of scaling laws “having ended”, past tense, would still be false in this sense. Instead, there would be a prediction that the scaling laws will in some practical sense end in a few years, before compute stops scaling even at pre-AGI funding levels.

But also, the data efficiency I’m using to predict that text data will be insufficient (even with repetition) is a product of public pre-LLM-secrecy research that almost always took unlimited data for granted, so it’s possible that spending a few years explicitly searching for ways to overcome data scarcity will let AI companies sidestep this issue, at least until 2030. Thus I wouldn’t even predict with a high degree of certainty that text data will run out by 2030; it’s merely my baseline expectation.
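For concreteness, this is the kind of back-of-envelope behind that baseline expectation; the compute figures and the token supply below are assumptions for illustration, not established numbers:

```python
# Back-of-envelope on compute-optimal token demand vs. text supply.
# Assumes the rough Chinchilla heuristics D ~ 20*N and C ~ 6*N*D, so that
# D_opt(C) ~ 20 * sqrt(C / 120). All specific numbers are assumptions.

def optimal_tokens(compute_flops: float) -> float:
    n_params = (compute_flops / 120.0) ** 0.5
    return 20.0 * n_params

runs = {
    "assumed ~2026-scale training run": 5e27,   # FLOPs, assumed
    "assumed ~2029-scale training run": 1e29,   # FLOPs, assumed
}
for label, c in runs.items():
    print(f"{label}: wants ~{optimal_tokens(c):.1e} tokens")

# If usable unique text is on the order of 1e14 tokens and repetition only buys
# a small multiple of that, the larger run above starts to outgrow the supply.
```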
It’s still premised on the idea that more training/inference/resources will result in qualitative improvements.
I said nothing about qualitative improvements. Sufficiently good inference hardware makes it cheap to make models a lot bigger, so if there is some visible benefit at all, this will happen at the pace of the buildout of better inference hardware. Conversely, if there’s not enough inference hardware, you physically can’t serve something as a frontier model (for a large user base) even if it offers qualitative improvements, unless you restrict demand (with very high prices or rate limits).
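A crude way to see the serving constraint (every number here is an assumption, just to show how the dependence on model size works):

```python
# Rough sketch: aggregate decode throughput of an inference fleet, and how it
# shrinks as the model gets bigger. All parameter values are assumptions.

def fleet_tokens_per_second(inference_gw: float,
                            flops_per_watt: float = 4e11,  # assumed, all-in
                            utilization: float = 0.05,     # assumed decode MFU
                            active_params: float = 1e12) -> float:
    effective_flops = inference_gw * 1e9 * flops_per_watt * utilization
    return effective_flops / (2.0 * active_params)  # ~2*N FLOPs per token

print(f"{fleet_tokens_per_second(1.0):.1e} tokens/s")  # 1 GW, 1T active params

# Doubling active parameters halves the fleet's throughput, so a bigger model
# either needs more/better inference hardware or a smaller (or pricier) user base.
```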
So your scenario is possible; I had similar expectations a few years ago. But I’m seeing more and more evidence against it, so I’m giving it a lower probability (maybe 20%).
This is not very specific, similarly to the claim about training scaling laws “having ended”. Even with “a bubble” (one that bursts before 2031), some AI companies (like Google) might survive in OK shape. These companies will also have their pick of the wreckage of the other AI companies, including both researchers and almost-ready datacenter sites, which they can use to make their own efforts stronger. The range of scenarios I outlined only needs 2-4 GW of training compute by 2030 for at least one AI company (in addition to 2-4 GW of inference compute), which revenues of $40-80bn per year should be sufficient to cover (especially as the quality of inference hardware stops being a bottleneck, so that even older hardware becomes useful again for serving current frontier models). Google has been spending this kind of money on datacenter capex as a matter of course for many years now.
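For a sense of scale, a rough annualized-capex check; the cost per GW and the hardware lifetime below are my assumptions, not figures anyone has committed to:

```python
# Annualized capex for the 2030 scenario. Unit cost and lifetime are assumptions.

CAPEX_PER_GW = 35e9       # assumed all-in datacenter + hardware cost per GW, USD
LIFETIME_YEARS = 4        # assumed useful life before refresh

for total_gw in (2 + 2, 4 + 4):   # training GW + inference GW, low and high end
    annualized = total_gw * CAPEX_PER_GW / LIFETIME_YEARS
    print(f"{total_gw} GW -> ~${annualized / 1e9:.0f}bn/year amortized")

# Under these assumptions, 4-8 GW comes to roughly $35-70bn/year, the kind of
# spend that $40-80bn/year of revenue (or a Google-scale capex budget) could carry.
```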
OpenAI is projecting about $20bn of revenue in their current state, with the 800M+ free users not being monetized (which is likely to change). These numbers can plausibly grow to give at least $50bn per year to the leading model company by 2030 (even if it’s not OpenAI); this seems like a very conservative estimate. It doesn’t depend on qualitative improvement in LLMs or on promises of more than a trillion dollars in datacenter capex. Also, the capex numbers might even scale down gracefully if $50bn per year from one company by 2030 turns out to be all that’s actually available.
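As a quick sanity check on how aggressive that figure is (the 2026 base year and the per-user framing are my own assumptions):

```python
# How much growth the $50bn-by-2030 figure actually requires. The $20bn starting
# point is taken from the projection above; the 2026 base year is an assumption.

start_revenue, target_revenue = 20e9, 50e9
years = 2030 - 2026
growth = (target_revenue / start_revenue) ** (1 / years) - 1
print(f"required growth: ~{growth:.0%}/year")        # about 26%/year

free_users = 800e6
print(f"or ~${target_revenue / free_users:.0f}/user/year "
      f"if it all came from today's free-tier users")  # roughly $60/user/year
```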