It seems more accurate to say that AI progress is linear rather than exponential, as a result of being logarithmic in resources that are in turn exponentially increasing with time. (This is not quantitative, any more than the “exponential progress” I’m disagreeing with[1].)
Logarithmic returns on resources mean strongly diminishing returns, but that’s not actual plateauing, and the linear progress in time slows down only insofar as the exponential growth of resources slows down. Moore’s law in its price-performance form held for a really long time; even though it’s much slower than the present funding ramp, it still promises exponentially more compute over time.
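To make the shape of this claim explicit (a purely illustrative composition with made-up constants, not a fitted model):

```latex
% Capability C as a function of resources R, resources as a function of time t.
C(R) = a + b \log R, \qquad R(t) = R_0 \, e^{g t}
\;\Rightarrow\; C(t) = a + b \log R_0 + b g \, t
% Linear in t with slope bg: progress per year slows exactly when the
% resource growth rate g slows, but it never plateaus on its own.
```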
And so progress won’t obviously get an opportunity to actually plateau; it would merely proceed at a slower linear pace, until some capability threshold is reached or a non-incremental algorithmic improvement arrives. Observing the continued absence of exponential progress (which was never real to begin with) doesn’t count against this expectation. Incremental releases already seem to be making it difficult for people to notice the extent of improvement over the last 2.5 years. With 3x slower progress (after 2029-2032), a similar amount of improvement would need about 8 years.
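The 8-year figure is just the arithmetic of the previous two sentences (the 2.5 years and the 3x factor come from the text above, not from any forecast):

```python
# Toy arithmetic for the slowdown claim above, not a model of anything.
recent_span_years = 2.5  # the "last 2.5 years" of improvement mentioned above
slowdown_factor = 3.0    # the hypothesized 3x slower regime after 2029-2032

# If progress per year is 3x slower, the same total improvement takes 3x as long.
years_needed = recent_span_years * slowdown_factor
print(years_needed)  # 7.5, i.e. roughly the 8 years quoted above
```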
The METR time horizon metric wants to be at least exponential in time, but most other benchmarks and intuitive impressions seem to quantify progress in a way that better aligns with linear progress over time (at the vibe level where “exponential progress” usually has its intended meaning). Many plots put log-resources of various kinds on the horizontal axis, with the benchmark value increasing linearly in log-resources until the benchmark saturates.
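A sketch of the two shapes being contrasted; the constants are placeholders (METR has reported a months-scale doubling cadence for time horizons, but the exact numbers here are invented):

```python
import math

def metr_time_horizon(years_elapsed, doubling_time_years=0.6):
    """Exponential in calendar time: the horizon doubles every fixed interval."""
    return 1.0 * 2 ** (years_elapsed / doubling_time_years)

def benchmark_score(compute, a=10.0, b=5.0):
    """Linear in log-compute: the shape most scaling plots show before saturation."""
    return a + b * math.log10(compute)

# If compute itself grows exponentially in time, benchmark_score(compute(t))
# becomes linear in t: the "linear progress over time" reading of the same data.
```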
Perhaps another meaning of “exponential progress” that’s real is funding over time, or even the growth of individual AI companies, but that holds at the start of any technology adoption cycle, or for any startup, and doesn’t depend on the unusual feature of AI making logarithmic progress with more resources.
There is a natural sense in which AI progress is exponential: capabilities are increasing at a rate which involves exponentially increasing impact (as measured by e.g. economic value).
Exponential increase in total economic value is not specific to AI; any new tech starts out growing exponentially (possibly following the startups championing it) before it gets further along the adoption S-curve. The unusual things about AI are that it gets better with more resources (while most other things just don’t get better at all in a straightforward scaling-law manner), that the logarithmic-in-resources scaling leaves a persistent impression of plateauing despite no actual plateau, and that even if it runs off the end of the adoption S-curve it still has Moore’s law of price-performance to keep fueling its improvement. These unusual things frame the sense in which it’s linear/logarithmic.
If the improvement keeps raising the ceiling on adoption (capabilities) fast enough, funding keeps scaling into slightly more absurd territory, but even then it won’t go a long way without the kind of takeoff that makes anything like the modern industry obsolete. After the exponential phase of adoption comes to an end, it falls back on Moore’s law, which still keeps supplying exponentially more compute to slowly fuel further progress, and in that sense there is some unusual exponential-ness to this. Though probably there are other things with scaling laws of their own that global economic growth (instead of Moore’s law) would similarly fuel, albeit even more slowly.
In many industries cost decreases by some factor with every doubling of cumulative production. This is how solar eventually became economically viable.
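The standard form of this is an experience curve (Wright’s law). A minimal sketch, with an illustrative learning rate (solar PV’s is often quoted in the ~20%-per-doubling ballpark, but treat the constant as a placeholder):

```python
import math

def unit_cost(cumulative_units, initial_cost=1.0, learning_rate=0.20):
    """Wright's law: cost falls by a fixed fraction per doubling of
    cumulative production. Constants here are illustrative."""
    doublings = math.log2(cumulative_units)
    return initial_cost * (1.0 - learning_rate) ** doublings

print(unit_cost(1024))  # 0.8**10 ~= 0.107: ~9x cheaper after 1024x production
```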
I guess the cost-quality tradeoff makes AI progress even better described as that of a normal technology. As economies of scale reduce cost, they should also be increasing quality (the two are somewhat interchangeable). Quality is just harder to quantify, and so most of the discussion will be in terms of cost. But for the purposes of raising the ceiling on adoption (total addressable market), higher quality works as well as lower cost, so the lowering of costs is directly relevant.
In this framing, logarithmic improvement of quality with more resources isn’t an unusual AI-specific thing either. What remains is the inflated expectation of how cheaply quality should keep improving (which is not a real thing, and so leads to impressions of plateauing with AI, where for other technologies very slow quality improvement would be the default expectation). And Moore’s law of price-performance, which is much faster than economic growth. Economies of scale for some post-adoption technology mostly won’t notice the growth of its specific market, which is merely downstream of the growth of the overall economy. But with AI, available compute would be growing fast enough to make a difference even post-adoption (in the 2030s).
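Rough numbers behind the “much faster” comparison (both rates are ballpark conventional figures, not from the text above):

```python
# Ballpark comparison of the two growth rates; both are rough assumptions.
moore_doubling_years = 2.0                            # price-performance doubling time
compute_growth = 2 ** (1 / moore_doubling_years) - 1  # ~41% per year
economy_growth = 0.03                                 # ~3% per year global growth

print(f"compute {compute_growth:.0%}/yr vs economy {economy_growth:.0%}/yr")
```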