I think you’re arguing that Principle (A) has nothing to teach us about AGI, and shouldn’t even be brought up in an AGI context except to be immediately refuted. And I think you’re wrong.
Principle (A) applied to AGIs says: The universe won’t run out of productive things for AGIs to do. In this respect, AGIs are different from, say, hammers. If a trillion hammers magically appeared in my town, then we would just have to dispose of them somehow. That’s way more hammers than anyone wants. There’s nothing to be done with them. Their market value would asymptote to zero.
AGIs will not be like that. It’s a big world. No matter how many AGIs there are, they can keep finding and inventing new opportunities. If they outgrow the planet, they can start in on Dyson spheres. The idea that AGIs will simply run out of things to do after a short time and then stop self-reproducing—the way I would turn off a hammer machine after the first trillion hammers even if its operating costs were zero—is wrong.
So yes, I think this is a valid lesson we can take from Principle (A) and apply to AGIs, and it yields an important insight. It’s an insight that not everyone gets, not even (actually, especially not) most professional economists, because they are trained to lump AGIs in with hammers, in the category of “capital”, which implicitly entails “things that the world needs only a certain amount of, with diminishing returns”.
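To make the “diminishing returns” contrast concrete, here’s a toy illustration (my own functional forms, not anything Principle (A) itself commits to): if hammers are ordinary capital, total output from $K$ hammers might look like $f(K) = A K^{\alpha}$ with $0 < \alpha < 1$, so the marginal value of one more hammer is $f'(K) = \alpha A K^{\alpha - 1} \to 0$ as $K \to \infty$, which is exactly the “market value asymptotes to zero” story. If instead each additional AGI can go find or invent a new opportunity, rather than piling onto a fixed stock of tasks, aggregate output behaves more like $f(N) \approx cN$ (or better), and the marginal value $f'(N) \approx c$ stays bounded away from zero no matter how large $N$ gets.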
So, kudos to Principle (A). Do you agree?
This trilemma might be a good way to force people-stuck-in-a-frame-of-traditional-economics to actually think about strong AI. I wouldn’t know; I honestly haven’t spent a ton of time talking to such people.
Principle [A] doesn’t just say AIs won’t run out of productive things to do; it makes a prediction about how this will affect prices in a market. It’s true that superintelligent AI won’t run out of productive things to do, but it will also change the situation such that prices in the existing economy won’t respond to this the way they normally respond to the fact that human participants in the market won’t run out of productive things to do. Maybe there will be some kind of legible market internal to the AI’s thinking, or [less likely, but conceivable] a multi-ASI equilibrium with mutually legible market prices. But what reason would a strongly superintelligent AI have to keep trading with humans for very long, in a market that puts human-legible prices on things? Even in Hanson’s Age of Em, humans who choose to remain in their meatsuits are entirely frozen out of the real [emulation] economy very quickly in subjective time, and to me this is an obvious consequence of every agent in the market simply thinking and working way faster than you.