Exponential increase in total economic value is not specific to AI; any new technology starts out growing exponentially (possibly following the startups championing it) before it moves further along the adoption S-curve. The unusual things about AI are that it gets better with more resources (while most other technologies don't improve at all in a straightforward scaling-law manner), that improvement with the logarithm of resources leaves a persistent impression of plateauing despite no actual plateau, and that even once it exhausts the adoption S-curve, it still has Moore's law of price-performance to keep fueling its improvement. These unusual things frame the sense in which it's linear/logarithmic.
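A toy sketch of why logarithmic scaling reads as plateauing (the functional form and the constants here are illustrative assumptions, not measured values): if quality tracks the log of compute, each constant step of quality costs 10x more compute, so on a timeline of steadily growing spending the curve looks ever flatter even though the law never bends.

```python
import math

# Assumed toy scaling law: quality grows with log10 of compute.
# Every equal quality increment then requires 10x the resources,
# which produces the subjective impression of a plateau.
def quality(compute_flop, a=1.0):
    return a * math.log10(compute_flop)

for compute in [1e21, 1e22, 1e23, 1e24]:
    print(f"{compute:.0e} FLOP -> quality {quality(compute):.1f}")
```

Each 10x jump in compute buys the same quality increment, which is the sense in which progress is "linear in the log".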
If the improvement keeps raising the ceiling on adoption (capabilities) fast enough, funding keeps scaling into slightly more absurd territory, but even then it won't go far without the kind of takeoff that makes anything like the modern industry obsolete. After the exponential phase of adoption ends, growth falls back to Moore's law, which still delivers exponentially more compute to slowly fuel further progress, and in that sense there is some unusual exponential-ness to this. Though there are probably other technologies with scaling laws of their own that global economic growth (instead of Moore's law) would similarly fuel, only more slowly.
In many industries, cost decreases by some fixed factor with every doubling of cumulative production (the experience curve, sometimes called Wright's law). This is how solar eventually became economically viable.
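The experience curve can be sketched in a few lines (the 20% learning rate here is an assumed illustrative figure, not a claim about any specific industry): unit cost falls by a fixed fraction with each doubling of cumulative production.

```python
import math

# Experience-curve sketch with an assumed 20% learning rate:
# each doubling of cumulative production cuts unit cost by 20%.
def unit_cost(cumulative_units, base_cost=1.0, learning_rate=0.20):
    doublings = math.log2(cumulative_units)  # doublings since the first unit
    return base_cost * (1 - learning_rate) ** doublings

# After 10 doublings (1024x cumulative production), cost is 0.8**10,
# roughly 11% of the starting cost.
print(unit_cost(1024))
```

The key property is that cost depends on *cumulative* production, so a growing market compounds cost declines on top of its own growth.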
I guess the cost-quality tradeoff makes AI progress even better described as that of a normal technology. As economies of scale reduce cost, they should also be increasing quality (the two are somewhat interchangeable). Quality is just harder to quantify, so most of the discussion will be in terms of cost. But for the purposes of raising the ceiling on adoption (total addressable market), higher quality works as well as lower cost, so the lowering of costs is directly relevant.
In this framing, logarithmic improvement of quality with more resources isn't an unusual AI-specific thing either. What remains is the inflated expectation that quality should be improving cheaply (which is not a real thing, and so leads to impressions of plateauing with AI, whereas for other technologies very slow quality improvement would be the default expectation). And Moore's law of price-performance, which is much faster than economic growth. Economies of scale mostly won't be able to draw on the growth of the specific market for a post-adoption technology, since that market merely tracks the growth of the overall economy. But with AI, available compute would be growing fast enough to make a difference even post-adoption (in the 2030s).
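The gap between the two growth drivers can be made concrete with a compounding comparison (both rates below are assumed round numbers for illustration, not forecasts): compute per dollar doubling every ~2.5 years versus a post-adoption market growing with the overall economy at ~3% per year.

```python
# Compounding comparison over a decade (assumed illustrative rates):
# price-performance doubling every 2.5 years vs. ~3%/year economic growth.
years = 10
compute_per_dollar = 2 ** (years / 2.5)  # 16x more compute per dollar
market_size = 1.03 ** years              # ~1.34x larger market

print(f"compute/$: {compute_per_dollar:.1f}x, market: {market_size:.2f}x")
```

Under these assumptions, the Moore's-law-style driver delivers an order of magnitude more resource growth than economy-paced market growth over the same period, which is why it can still matter post-adoption.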