Your shift in demand has to come from somewhere and not just materialize out of thin air… If one sees value in Say’s Law, then the increased demand for some product/service comes from the increased production of other goods and services… just where are the resources for the shift in supply you suggest?
If a human population gradually grows (say, by birth or immigration), then demand for pretty much every product increases, and production of pretty much every product increases, and pretty much every product becomes less expensive via experience curves / economies of scale / R&D.
Agree?
QUESTION: How is that fact compatible with Say’s Law?
If you write down an answer, then I will take the text of your answer but replace the word “humans” with “AGIs” everywhere, and bam, that’s basically my answer to your question! :) (after some minor additional tweaks.)
See what I mean?
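(To put the “experience curves / economies of scale” point in rough quantitative terms: a standard textbook formulation, sometimes called Wright’s law, has unit cost falling as a power law in cumulative production. The symbols and the ~20% figure below are generic illustrative values, not numbers from this discussion.)

$$C(N) = C_1 \, N^{-\alpha}$$

Here $C_1$ is the cost of the first unit, $N$ is cumulative units produced, and $\alpha > 0$, so each doubling of cumulative output multiplies unit cost by $2^{-\alpha}$ (e.g. $\alpha \approx 0.32$ corresponds to roughly a 20% cost reduction per doubling). A growing population, by raising cumulative production of pretty much everything, mechanically pushes unit costs down over time.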
The first seems like a really poor understanding, and hardly a steelmanning of the economic arguments/views.
Correct, this is not “steelmanning”, this is “addressing common mistakes”. My claim is that a great many trained economists—but not literally 100% of trained economists—have a bundle of intuitions for thinking about labor, and a different bundle of intuitions for thinking about capital, and these intuitions lead them to have incorrect and incoherent beliefs about AGI. This is something beyond formal economics models; it’s a set of mental models and snap reflexes developed over years spent in the field studying the current and historical economy. The snap reaction says: “That’s not what labor automation is supposed to look like, that can’t be right, there must be an error somewhere.” Indeed, AGI is not what labor automation looks like today, and it’s not how labor automation has ever looked, because AGI is not labor automation; it’s something entirely new.
I say this based on both talking to economists and reading their writing about future AI, and no I’m not talking about people who took Econ 101, but rather prominent tenured economics professors, Econ PhDs who specialize in the economics of R&D and automation, etc.
(…People who ONLY took Econ 101 are irrelevant, they probably forgot everything about economics the day after the course ended :-P )
If a human population gradually grows (say, by birth or immigration), then demand for pretty much every product increases, and production of pretty much every product increases, and pretty much every product becomes less expensive via experience curves / economies of scale / R&D.
Agree?
QUESTION: How is that fact compatible with Say’s Law?
If you write down an answer, then I will take the text of your answer but replace the word “humans” with “AGIs” everywhere, and bam, that’s basically my answer to your question! :) (after some minor additional tweaks.)
Okay. Humans are capable of final consumption (i.e. with a reward function that does not involve making more money later).
I’m interested to see how an AI would do that because it is the crux of a lot of downstream processes.
OK sure, here’s THOUGHT EXPERIMENT 1: suppose that these future AGIs desire movies, cars, smartphones, etc. just like humans do. Would you buy my claims in that case?
If so—well, not all humans want to enjoy movies and fine dining. Some have strong ambitious aspirations—to go to Mars, to cure cancer, whatever. If they have money, they spend it on trying to make their dream happen. If they need money or skills, they get them.
For example, Jeff Bezos had a childhood dream of working on rocket ships. He founded Amazon to get money to do Blue Origin, which he is sinking $2B/year into.
Would the economy collapse if all humans put their spending cash towards ambitious projects like rocket ships, instead of movies and fast cars? No, of course not! Right?
So the fact that humans “demand” videogames rather than scramjet prototypes is incidental, not a pillar of the economy.
OK, back to AIs. I acknowledge that AIs are unlikely to want movies and fast cars. But AIs can certainly “want” to accomplish ambitious projects. If we’re putting aside misalignment and AI takeover, these ambitious projects would be ones that their human programmer installed, like making cures-for-cancer and quantum computers. Or if we’re not putting aside misalignment, then these ambitious projects might include building galaxy-scale paperclip factories or whatever.
So THOUGHT EXPERIMENT 2: these future AGIs don’t desire movies etc. like in Thought Experiment 1, but rather desire to accomplish certain ambitious projects like curing cancer, quantum computation, or galaxy-scale paperclip factories.
My claims are:
1. If you agree that humans historically bootstrapped themselves to a large population, then you should accept that Thought Experiment 1 enables exponentially-growing AGIs, since that’s basically isomorphic (except that AGI population growth can be much faster, because it’s not bottlenecked by human pregnancy and maturation times; see the toy calculation after this comment).
2. If you buy that Thought Experiment 1 enables exponentially-growing AGIs, then you should buy that Thought Experiment 2 enables exponentially-growing AGIs too, since that’s basically isomorphic. (Actually, if anything, the case for growth is stronger for Thought Experiment 2 than 1!)
Do you agree? Or where do you get off the train? (Or sorry if I’m misunderstanding your comment.)
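(To put a toy number on the “much faster” point in claim 1 above, here is a minimal sketch; the doubling times and starting population are illustrative assumptions, not anything claimed in this thread.)

```python
# Toy comparison of exponential population growth under two doubling times.
# Both doubling times are illustrative assumptions: ~25 years roughly matches
# a human generation; 0.25 years is a stand-in for AGIs that can be copied or
# built as fast as hardware can be manufactured.

def population(initial: float, doubling_time_years: float, elapsed_years: float) -> float:
    """Population after `elapsed_years` of steady exponential growth."""
    return initial * 2 ** (elapsed_years / doubling_time_years)

initial = 1_000_000   # starting population (arbitrary)
horizon = 25          # years of growth

humans = population(initial, doubling_time_years=25.0, elapsed_years=horizon)
agis = population(initial, doubling_time_years=0.25, elapsed_years=horizon)

print(f"Humans after {horizon} years: {humans:,.0f}")  # ~2 million (one doubling)
print(f"AGIs after {horizon} years: {agis:.2e}")        # ~1e36 (100 doublings, ignoring resource limits)
```

Of course, nothing grows at a fixed exponential forever; energy, chip fabrication, and raw materials would bind long before 100 doublings. The point is only that the generational bottleneck that caps human population growth is absent.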
I’d say this is a partial misunderstanding, because the difference between final and intermediate consumption is about intention, rather than the type of goods.
Or to be more concrete, this is where I get off the train.
Would the economy collapse if all humans put their spending cash towards ambitious projects like rocket ships, instead of movies and fast cars? No, of course not! Right?
It depends entirely on whether these endeavors were originally thought to be profitable. If you were spending your own money, with no thought of financial returns, then it would be fine. If, on the other hand, all the major companies on the stock market announced today that they were devoting all of their funds to rocket ships, the result could easily be called an economic collapse, as people (banks, bondholders, etc.) recalibrated their balance sheets to the updated profitability expectations.
If AI is directing that spending rather than people, on the other hand, the relevant distinction would not be between alignment and misalignment, but rather something more akin to ‘analignment,’ where AIs could have spending preferences completely disconnected from those of their human owners. Otherwise, their financial results would simply translate into the financial conditions of their owners.
The reason why intention is relevant to models that might at first appear entirely mechanistic has to do with emergent properties. While on one hand this is just an accounting question, you would also hope that in your model, GDP at time t bears some sort of relationship to GDP at time t+1 (or whatever alternative measure of economic activity you prefer). Ultimately, any model of reality has to start at some level of analysis. That choice can be subjective, and I would potentially be open to a case that the AI might be a more suitable level of analysis than the individual human, but if you are making that case then I would like to see the case for the independence of AI spending decisions. If that turns out to be a difficult argument to make, then it’s a sign that it may be worth keeping conventional economics as the most efficient/convenient/productive modelling approach.
For obvious reasons, we should care a great deal whether the exponentially-growing mass of AGIs-building-AGIs is ultimately trying to make cancer cures and other awesome consumer products (things that humans view as intrinsically valuable / ends in themselves), versus ultimately trying to make galaxy-scale paperclip factories (things that misaligned AIs view as intrinsically valuable / ends in themselves).
From my perspective, I care about this because the former world is obviously a better world for me to live in.
But it seems like you have some extra reason to care about this, beyond that, and I’m confused about what that is. I get the impression that you are focused on things that are “just accounting questions”?
Analogy: In those times and places where slavery was legal, “food given to slaves” was presumably counted as an intermediate good, just like gasoline to power a tractor, right? Because they’re kinda the same thing (legally / economically), i.e. they’re an energy source that helps get the wheat ready for sale, and then that wheat is the final product that the slaveowner is planning to sell. If slavery is replaced by a legally-different but functionally-equivalent system (indentured servitude or whatever), does GDP skyrocket overnight because the food-given-to-farm-workers magically transforms from an intermediate to a final good? It does, right? But that change is just on paper. It doesn’t reflect anything real.
I think what you’re talking about for AGI is likewise just “accounting”, not anything real. So who cares? We don’t need a “subjective” “level of analysis”, if we don’t ask subjective questions in the first place. We can instead talk concretely about the future world and its “objective” properties. Like, do we agree about whether or not there is an unprecedented exponential explosion of AGIs? If so, we can talk about what those AGIs will be doing at any given time, and what the humans are doing, and so on. Right?
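(A toy version of the accounting point in the slave-food analogy above, with made-up numbers, just to show that the legal reclassification moves measured GDP while the physical activity stays identical.)

```python
# Toy expenditure-approach GDP for the farm analogy above.
# All numbers are made up; the physical activity (wheat grown, food eaten,
# fuel burned) is identical in both scenarios.

wheat_sales = 100  # wheat sold to consumers: a final good in both scenarios
worker_food = 20   # food eaten by the farm workers
fuel = 10          # gasoline for the tractor: an intermediate input in both

# Scenario A: the workers are legally property, so their food is booked as an
# intermediate input of the farm, just like the fuel.
gdp_a = wheat_sales

# Scenario B: a legally-different but functionally-equivalent arrangement in
# which the workers are paid and buy the same food themselves, so the food now
# counts as household final consumption.
gdp_b = wheat_sales + worker_food

print(gdp_a, gdp_b)  # 100 vs 120: measured GDP jumps ~20% overnight, on paper only
```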
First, I’ll admit that, on rereading, my first read of your position was poor/uncharitable. Sorry for that.
But I would still suggest your complaint is about some economists, rather than about economics as a study or analysis tool. For example, “If anything, the value of labor goes UP, not down, with population! E.g. dense cities are engines of growth!” fits very well into the economics of network effects.
To the extent that economists can only consider AI as capital, I would agree that’s a flawed view. I would suggest that, for economic applications, AI is probably best equated with “human capital,” which is also something different from both labor and capital in the classic capital-labor dichotomy.
So, in the end, I still see your main complaint as being not really about economics; perhaps your experience is that the economists you talk with are more prone to this blind spot/bias than other people (though I’m not sure who that comparison population would be). I don’t see that you’ve really made the case that it was the study of economics that produced this situation. Which then suggests that we don’t really have a good pointer to how to get less wrong on this front.
Not an economist; have a confusion. This is strictly about pre-singularity AI; everything after that is a different paradigm.
The strongest economic trend I’m aware of is growing inequality.
AI would seem to be an accelerant of this trend, i.e. I think most AI returns are captured by capital, not labour. (AIs are best modelled as slaves in this context).
And inequality would seem to be demand-destroying: there are fewer consumers, and most consumers are poorer.
Thus my near-term (pre-singularity) expectations are something like: massive runaway financialization; divergence between the paper economy (roughly, claims on resources) and the real economy (roughly, resources). And yes, we get something like a gilded age where a very small section of the planet lives very well until the singularity, and then we find out whether humanity graduates to the next round.
But like, fundamentally this isn’t a picture of runaway economic growth—which is what everyone else talking about this seems to be describing.
Would appreciate clarity/correction/insight here.
Increasing inequality has been a thing here in the US for a few decades now, but it’s not universal, and it’s not an inevitable consequence of economic growth. Moreover, it does not (in the US) consist of poor people getting poorer and rich people getting richer. It consists of poor people staying poor, or only getting a bit richer, while rich people get a whole lot richer. Thus, it is not demand destroying.
One could imagine this continuing with the advent of AI, or of everyone ending up equally dead, or many other outcomes.
In the rate-limiting resource, housing, the poor have indeed gotten poorer. Treating USD as a wealth primitive [ not to mention treating “demand” as a game-theoretic primitive ] is an economist-brained error.