Your list of “actual arguments” against explosive growth seems to be missing the one that is by far the most important/convincing IMO, namely Baumol effects.
This argument has been repeatedly brought up by growth economists in earlier rounds from the AI-explosive-growth debate. So rather than writing my own version of this argument, I’ll just paste some quotes below.
As far as I can tell, the phenomenon discussed in these quotes is excluded by construction from the GATE model: while it draws a distinction between different “tasks” on the production side, its model of consumption effectively has only one “consumable good” which all these tasks produce (or equivalently, multiple goods which are all perfect substitutes for one another).
In other words, it stipulates what Vollrath (in the first quote below) calls “[the] truly unbelievable assumption that [AI] can innovate *precisely* equally across every product in existence.” Of course, if you do assume this “truly unbelievable” thing, then you don’t get Baumol effects – but this would be a striking difference from what has happened in every historical automation wave, and also just sort of prima facie bizarre.
Sure, maybe AI will be different in a way that turns off Baumol effects, for some reason or other. But if that is the claim, then an argument needs to be made for that specific claim, and why it will hold for AI when it hasn’t for anything else before. It can’t be justified as a mere “modeling simplification,” because the same “simplification” would have led you to wrongly expect similar explosive growth from past agricultural automation, from Moore’s Law, etc.
From Dietrich Vollrath’s review of Davidson 2021:

History suggests that people tend to view many goods and services as complements. Yes, within specific sub-groups (e.g. shoes) different versions are close substitutes, but across those groups (e.g. shoes and live concerts) people treat them as complements and would like to consume some of both.
What does that do to the predictions of explosive growth? It suggests that it may “eat itself”. AI or whatever will deliver productivity growth to some products faster than others, barring a truly unbelievable assumption that it can innovate *precisely* equally across every product in existence. When productivity grows more rapidly in product A than in product B (50% versus 10%, say), the relative price of product A falls relative to product B. Taking A and B as complements, what happens to the total expenditure on A (price times quantity)? It falls. We can get all the A we want for very cheap, and because we like both A and B, we have a limit on how much A we want. So total spending on A falls.
But growth in aggregate productivity (and in GWP, leaving aside my comments on inputs above) is a weighted average of productivity growth in all products. The weights are the expenditure shares. So in the A/B example, as A gets more and more productive relative to B, the productivity growth rate *falls* towards the 10% of product B. In general, the growth rate of productivity is going to get driven towards the *lowest* productivity growth rate across the range of products we consume.
And the faster that productivity grows in product A, the sooner the aggregate growth rate will fall to the productivity growth rate of B. So a massive question for this report is how widespread explosive growth is expected to be. Productivity growth in *all* products of 10% forever would deliver 10% growth in productivity forever (and perhaps in GWP). Great. But productivity growth of 100% in A and 0% in B will devolve into productivity growth of 0% over time.
This has nothing to do with the nature of R&D or the knife-edge conditions on growth models. This is simply about the nature of demand for products.
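To make the A/B arithmetic concrete, here is a minimal toy simulation (my own sketch of the standard Baumol setup, not anything from Vollrath's review): two goods consumed as perfect complements, a fixed unit of labor reallocated each year to keep consumption of the two goods balanced, and productivity growing at 50%/yr in A and 10%/yr in B.

```python
# Toy Baumol illustration (my own sketch, not Vollrath's): two goods consumed
# as perfect complements (Leontief), total labor fixed at 1, productivity
# growing 50%/yr in sector A and 10%/yr in sector B. Labor is reallocated each
# year so the two quantities stay equal; "real growth" is the growth of that
# common consumption level.

A_a, A_b = 1.0, 1.0      # sector productivities
g_a, g_b = 0.50, 0.10    # productivity growth rates
q_prev = None

for year in range(31):
    L_a = A_b / (A_a + A_b)   # labor split that equalizes Q_a and Q_b
    q = A_a * L_a             # = A_b * (1 - L_a), the common consumption level
    if q_prev is not None and year % 5 == 0:
        growth = q / q_prev - 1.0
        # with competitive pricing, expenditure share of A = wage*L_a / wage = L_a
        print(f"year {year:2d}: expenditure share of A = {L_a:.3f}, real growth = {growth:.1%}")
    q_prev = q
    A_a *= 1.0 + g_a
    A_b *= 1.0 + g_b
```

Running this, A's expenditure share shrinks toward zero and the aggregate growth rate falls toward B's 10%: the "eats itself" dynamic Vollrath describes.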
From Ben Jones’ review of the same Davidson 2021 report:
[W]e have successfully automated an amazing amount of agricultural production (in advanced economies) since the 19th century. One fact I like: In 2018, a farmer using a single combine harvester in Illinois set a record by harvesting 3.5 million pounds of corn in just 12 hours. That is really amazing. But the result is that corn is far cheaper than it used to be, and the GDP implications are modest. As productivity advances and prices fall, these amazing technologies tend to become rounding errors in GDP and labor productivity overall. Indeed, agricultural output used to be about half of all GDP but now it is down to just a couple percent of GDP. The things you get good at tend to disappear as their prices plummet. Another example is Moore’s Law. The progress here is even more mind-boggling – with growth rates in calculations per unit of resource cost going up by over 30% per year. But the price of calculations has plummeted in response. Meanwhile, very many things that we want but don’t make rapid progress in – generating electricity; traveling across town; extracting resources from mines; fixing a broken window; fixing a broken limb; vacation services – see sustained high prices and come to take over the economy. In fact, despite the amazing results of Moore’s Law and all the quite general-purpose advances it enables – from the Internet, to smartphones, to machine learning – the productivity growth in the U.S. economy if anything appears to be slowing down.
And here’s Vollrath again, from his commentary on Clancy and Besiroglu 2023:

There are two ways to “spend” an increase in productivity driven by new ideas. You can use it to produce more goods and services given the same amount of inputs as before, or you can use it to reduce the inputs used while producing the same goods and services as before. If we presume that AI can generate explosive growth in ideas, a very real choice people might make is to “spend” it on an explosive decline in input use rather than an explosive increase in GDP.
Let’s say AI becomes capable of micro-managing agricultural land. There is already a “laser-weeder” capable of rolling over a field and using AI to identify weeds and then kill them off with a quick laser strike. Let’s say AI raises agricultural productivity by a factor of 10 (even given all the negative feedback loops mentioned above). What’s the response to this? Do we continue to use the same amount of agricultural land as before (and all the other associated resources) and increase food production by a factor of 10? Or do we take advantage of this to shrink the amount of land used for agriculture by a factor of 10? If you choose the latter—which is entirely reasonable given that worldwide we produce enough food to feed everyone—then there is no explosive growth in agricultural output. There isn’t any growth in agricultural output. We’ve taken the AI-generated idea and generated exactly zero economic growth, but reduced our land use by around 90%.
Which is amazing! This kind of productivity improvement would be a massive environmental success. But ideas don’t have to translate into economic growth to be amazing. More important, amazing-ness does not necessarily lead to economic growth.
In general I find the AI explosive growth debate pretty confusing and frustrating, for reasons related to what Vollrath says about “amazing-ness” in that last quote.
Often (and for instance, in this post), the debate gets treated as indirect “shadowboxing” about the plausibility of various future AI capabilities, or about the degree of “transformation” AI will bring to the future economy – if you doubt explosive growth you are probably not really “feeling the AGI,” etc.
But if we really want to talk about those things, we should just talk about them directly. “Will there be explosive growth?” is a poor proxy for “will AI dramatically transform the world economy?”, and things get very muddled when we talk about the former and then read into this talk to guess what someone really thinks about the latter.
Maybe AI will be so transformative that “the economy” and “economic growth” won’t even exist in any sense we would now recognize. Maybe it attains capabilities that could sustain explosive growth if there were consumers around to hold up the demand side of that bargain, but it turns out that humans just can’t meaningfully “consume” at 100x (or 1000x or whatever) of current levels, at some point there’s only 24h in a day, and only so much your mind can attend to at once, etc. Or maybe there is explosive growth, but it involves “synthetic demand” by AIs for AI-produced goods in a parallel economy humans don’t much care about, and we face the continual nuisance of filtering that stuff out of GDP so that GDP still tracks anything meaningful to us.
Or something else entirely, who knows! What we care about is the actual content of the economic transformation – the specific “amazing” things that will happen, in Vollrath’s terms. We should argue over those, and only derive the answer to “will there be explosive growth?” as a secondary consequence.
The list doesn’t exclude Baumol effects as these are just the implication of:
Physical bottlenecks and delays prevent growth. Intelligence only goes so far.
Regulatory and social bottlenecks prevent growth this fast, INT only goes so far.
Like Baumol effects are just some area of the economy with more limited growth bottlenecking the rest of the economy. So, we might as well just directly name the bottleneck.
Your argument seems to imply you think there might be some other bottleneck like:
There will be some cognitive labor sector of the economy whose work AIs can’t do.
But, this is just a special case of “will there be superintelligence which exceeds human cognitive performance in all domains”.
In other words, it stipulates what Vollrath (in the first quote below) calls “[the] truly unbelievable assumption that [AI] can innovate precisely equally across every product in existence.” Of course, if you do assume this “truly unbelievable” thing, then you don’t get Baumol effects – but this would be a striking difference from what has happened in every historical automation wave, and also just sort of prima facie bizarre.
Huh? It doesn’t require equal innovation across all products, it just requires that the bottlenecking sectors have sufficiently high innovation/growth that the overall economy can grow. Sufficient innovation in all potentially bottlenecking sectors != equal innovation.
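To put the same point a bit more formally (my gloss, not something from the parent comment): under strong complementarity across sectors (the Leontief limit), long-run aggregate real growth is pinned down by the slowest sector,

$$ g_{\text{agg}} \;\to\; \min_i g_i \quad \text{as } t \to \infty, $$

so explosive aggregate growth requires every potentially bottlenecking sector to have a sufficiently high $g_i$, which is a much weaker condition than all the $g_i$ being precisely equal.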
Suppose world population was 100,000x higher, but these additional people magically didn’t consume anything or need office space. I think this would result in very fast economic growth due to advancing all sectors simultaneously. Imagining population increases like this seems to me to set a lower bound on the implications of highly advanced AI (and robotics).
As far as I can tell, this Baumol effect argument is equally good at predicting that 3% or 10% growth rates are impossible from the perspective of people in agricultural societies with much lower growth rates.
So, I think you have to be quantitative and argue about the exact scale of the bottleneck and why it will prevent some rate of progress. The true physical limits (doubling time on the order of days or less, a Dyson sphere or even consuming solar mass faster than this) are extremely high, so this can’t be the bottleneck—it must be something about the rate of innovation or physical capital accumulation leading up to true limits.
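For a rough sense of the scale of those limits (my arithmetic, not a claim from the post): a doubling time of $d$ days corresponds to an annual growth factor of $2^{365/d}$, so for example

$$ d = 30 \ \text{days} \;\Rightarrow\; 2^{365/30} \approx 4{,}600\times \ \text{per year}, $$

which is far beyond any growth rate under debate in this thread. So the binding constraint has to bite well before the true physical limits.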
Perhaps your view is: “Sure, we’ll quickly have a Dyson sphere and ungodly amounts of compute, but this won’t really result in explosive GDP growth as GDP will be limited by sectors that directly interface with humans like education (presumably for fun?) or services where the limits are much lower.” But, this isn’t a crux for the vast majority of arguments which depend on the potential for explosive growth!
I second the general point that GDP growth is a funny metric … it seems possible (as far as I know) for a society to invent every possible technology, transform the world into a wild sci-fi land beyond recognition or comprehension each month, etc., without quote-unquote “GDP growth” actually being all that high — cf. What Do GDP Growth Curves Really Mean? and follow-up Some Unorthodox Ways To Achieve High GDP Growth with (conversely) a toy example of sustained quote-unquote “GDP growth” in a static economy.
This is annoying to me, because there’s a massive substantive worldview difference between people who expect, y’know, the thing where the world transforms into a wild sci-fi land beyond recognition or comprehension each month, or whatever, versus the people who are expecting something akin to past technologies like railroads or e-commerce. I really want to talk about that huge worldview difference, in a way that people won’t misunderstand. Saying “>100%/year GDP growth” is a nice way to do that … so it’s annoying that this might be technically incorrect (as far as I know). I don’t have an equally catchy and clear alternative.
(Hmm, I once saw someone (maybe Paul Christiano?) saying “1% of Earth’s land area will be covered with solar cells in X number of years”, or something like that. But that failed to communicate in an interesting way: the person he was talking to treated the claim as so absurd that he must have messed up by misplacing a decimal point :-P ) (Will MacAskill has been trying “century in a decade”, which I think works in some ways but gives the wrong impression in other ways.)
What I would really like to see is the cost of living plummet to 0, then the cost of thriving plummet to 0, which would also cause GDP to plummet. However, this is only a problem in practical terms if the forces of automation require money to keep running, rather than, say, a benevolent ASI taking care of humanity as a personal hobby.
One way or another, though, AGI is going to have an impact on this world of a magnitude equivalent to something like a 30% growth in GWP per year at least. This includes all life getting wiped out, of course.
Maybe we need a standard metric for the rate of unrecognizability/incomprehensibility of the world and talk about how AGI will accelerate this. Like how much a person accustomed to life in 1500 would have to adjust to fit into the world of 2000. A standard shock level (SSL), if you will.
The shock level of 2000 relative to 1500 may end up describing the shock level of 2040 relative to 2020, assuming AGI has saturated the global economy by then. The time it takes for the world to become unrecognizable (again and again) will shrink over time as intelligence grows, whether manifested as GDP growth, GDP collapse, or paperclipping. If ordinary people understood that at least, you might get more push for investment into alignment research or for stricter regulations.
Sorry if my comment was triggering @nostalgebraist. : (