OK sure, here’s THOUGHT EXPERIMENT 1: suppose that these future AGIs desire movies, cars, smartphones, etc. just like humans do. Would you buy my claims in that case?
If so—well, not all humans want to enjoy movies and fine dining. Some have strong ambitious aspirations—to go to Mars, to cure cancer, whatever. If they have money, they spend it on trying to make their dream happen. If they need money or skills, they get them.
For example, Jeff Bezos had a childhood dream of working on rocket ships. He founded Amazon to get money to fund Blue Origin, into which he is now sinking $2B/year.
Would the economy collapse if all humans put their spending cash towards ambitious projects like rocket ships, instead of movies and fast cars? No, of course not! Right?
So the fact that humans “demand” videogames rather than scramjet prototypes is incidental, not a pillar of the economy.
OK, back to AIs. I acknowledge that AIs are unlikely to want movies and fast cars. But AIs can certainly “want” to accomplish ambitious projects. If we’re putting aside misalignment and AI takeover, these ambitious projects would be ones that their human programmers installed, like developing cures for cancer and quantum computers. Or if we’re not putting aside misalignment, then these ambitious projects might include building galaxy-scale paperclip factories or whatever.
So THOUGHT EXPERIMENT 2: these future AGIs don’t desire movies etc. like in Thought Experiment 1, but rather desire to accomplish certain ambitious projects like curing cancer, quantum computation, or galaxy-scale paperclip factories.
My claims are:
If you agree that humans historically bootstrapped themselves to a large population, then you should accept that Thought Experiment 1 enables exponentially-growing AGIs, since that’s basically isomorphic (except that AGI population growth can be much faster because it’s not bottlenecked by human pregnancy and maturation times; see the toy numbers sketched right after these claims).
If you buy that Thought Experiment 1 enables exponentially-growing AGIs, then you should buy that Thought Experiment 2 enables exponentially-growing AGIs too, since that’s basically isomorphic. (Actually, if anything, the case for growth is stronger for Thought Experiment 2 than 1!)
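(To put toy numbers on the “much faster” point, here is a minimal sketch comparing the two growth curves. The 25-year human generation time, the 1-year AGI replication time, and the starting population are illustrative assumptions, not forecasts.)

```python
# Toy comparison of exponential population growth under two doubling times.
# The numbers (25-year human generation time, 1-year AGI replication time,
# starting population of 1,000) are illustrative assumptions, not forecasts.

def population(initial, doubling_time_years, years):
    """Population after `years`, doubling every `doubling_time_years`."""
    return initial * 2 ** (years / doubling_time_years)

horizon = 30  # years

humans = population(initial=1_000, doubling_time_years=25, years=horizon)
agis = population(initial=1_000, doubling_time_years=1, years=horizon)

print(f"After {horizon} years: ~{humans:,.0f} humans vs ~{agis:,.0f} AGIs")
# Same bootstrapping dynamic either way; the shorter doubling time just
# dominates: roughly 2,300 vs roughly a trillion from the same start.
```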
Do you agree? Or where do you get off the train? (Or sorry if I’m misunderstanding your comment.)
I’d say this is a partial misunderstanding, because the difference between final and intermediate consumption is about intention, rather than the type of goods.
Or to be more concrete, this is where I get off the train.
Would the economy collapse if all humans put their spending cash towards ambitious projects like rocket ships, instead of movies and fast cars? No, of course not! Right?
It depends entirely on whether these endeavors were originally thought to be profitable. If you were spending your own money, with no thought of financial returns, then it would be fine. If all the major companies on the stock market announced today that they were devoting all of their funds to rocket ships, on the other hand, the result could easily be called an economic collapse, as people (banks, bondholders, etc.) recalibrated their balance sheets to the updated profitability expectations.
If AI, rather than people, is directing that spending, on the other hand, the relevant distinction would not be between alignment and misalignment, but something more akin to ‘analignment,’ where AIs could have spending preferences completely disconnected from those of their human owners. Otherwise, their financial results would simply translate into the financial condition of their owners.
The reason why intention is relevant to models which might appear at first to be entirely mechanistic has to do with emergent properties. While on the one hand this is just an accounting question, you would also hope that in your model, GDP at time t bears some sort of relationship to GDP at time t+1 (or whatever alternative measure of economic activity you prefer). Ultimately, any model of reality has to start at some level of analysis. That choice can be subjective, and I would potentially be open to a case that AI might be a more suitable level of analysis than the individual human, but if you are making that case then I would like to see the case for the independence of AI spending decisions. If that turns out to be a difficult argument to make, then it’s a sign that it may be worth keeping conventional economics as the most efficient/convenient/productive modelling approach.
For obvious reasons, we should care a great deal whether the exponentially-growing mass of AGIs-building-AGIs is ultimately trying to make cancer cures and other awesome consumer products (things that humans view as intrinsically valuable / ends in themselves), versus ultimately trying to make galaxy-scale paperclip factories (things that misaligned AIs view as intrinsically valuable / ends in themselves).
From my perspective, I care about this because the former world is obviously a better world for me to live in.
But it seems like you have some extra reason to care about this, beyond that, and I’m confused about what that is. I get the impression that you are focused on things that are “just accounting questions”?
Analogy: In those times and places where slavery was legal, “food given to slaves” was presumably counted as an intermediate good, just like gasoline to power a tractor, right? Because they’re kinda the same thing (legally / economically), i.e. they’re an energy source that helps get the wheat ready for sale, and then that wheat is the final product that the slaveowner is planning to sell. If slavery is replaced by a legally-different but functionally-equivalent system (indentured servitude or whatever), does GDP skyrocket overnight because the food-given-to-farm-workers magically transforms from an intermediate to a final good? It does, right? But that change is just on paper. It doesn’t reflect anything real.
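(To put made-up numbers on the “just on paper” point, here is a minimal sketch of the accounting: the same physical flows, counted once with the workers’ food treated as an intermediate input and once with it treated as final consumption bought out of wages. The figures are arbitrary.)

```python
# Toy GDP accounting: identical physical activity, two legal framings.
# All figures are made up for illustration.

wheat_sold = 100  # value of wheat sold to final buyers
food_eaten = 20   # value of food consumed by the farm workers

# Framing 1: the workers are slaves, so their food is an intermediate
# input (like tractor fuel); only the final wheat counts toward GDP.
gdp_slavery = wheat_sold                # = 100

# Framing 2: the same people are paid wages and buy the same food
# themselves, so the food now counts as final (household) consumption.
gdp_wages = wheat_sold + food_eaten     # = 120

print(gdp_slavery, gdp_wages)  # 100 120: a 20% jump, with nothing physical changing
```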
I think what you’re talking about for AGI is likewise just “accounting”, not anything real. So who cares? We don’t need a “subjective” “level of analysis”, if we don’t ask subjective questions in the first place. We can instead talk concretely about the future world and its “objective” properties. Like, do we agree about whether or not there is an unprecedented exponential explosion of AGIs? If so, we can talk about what those AGIs will be doing at any given time, and what the humans are doing, and so on. Right?