Well, the main thing is that Principle (A) says that the price of the chips + electricity + teleoperated robotics package will be sustainably high, and Principle (B) says that the price of the package will be sustainably low. Those can’t both be true.
…But then I also said that, if the price of the package is low, then human labor will have its price (wage / earnings) plummet way below subsistence via competing against a much-less-expensive substitute, and if it’s high, they won’t. This step brings in an additional assumption, namely that they’re actually substitutes. That’s the part you’re objecting to. Correct?
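The substitution step can be sketched as a one-line toy model (my own illustration; the function name and every number are made up, not from the discussion): in a competitive market, the wage for human labor can't stay above the marginal cost of a close substitute, so a cheap chips + electricity + robotics package pins the wage far below subsistence.

```python
# Toy sketch of the substitution argument (my illustration; numbers invented).
# If a close substitute exists, competition caps the human wage at the
# substitute's marginal cost; if not, the wage is set by other factors.

def equilibrium_wage(wage_without_substitute: float,
                     substitute_cost: float,
                     close_substitute: bool) -> float:
    """Upper bound on the competitive wage for human labor."""
    if not close_substitute:
        return wage_without_substitute
    return min(wage_without_substitute, substitute_cost)

subsistence = 5.00    # $/hr a human needs to live (made up)
package_cost = 0.10   # $/hr for the chips + electricity + robot package (made up)

wage = equilibrium_wage(subsistence, package_cost, close_substitute=True)
# wage is capped at package_cost, far below subsistence
```

The whole argument hinges on the `close_substitute` flag, which is exactly the assumption being questioned here.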
If so, I mean, I can start listing ways that tractors are not perfect substitutes for mules—mules do better on rough terrain, mules can heal themselves, etc. Or I can list ways that Jeff Bezos is not a perfect substitute for a moody 7yo—the 7yo is cuter, the 7yo may have a more sympathetic understanding of how to market to 7yo’s, etc.
But c’mon, a superintelligent AI CEO would not pay a higher salary to hire a moody 7yo, rather than a lower salary to “hire” another copy of itself, or to “hire” a different model of superintelligent AI. The only situation where human employment is even remotely plausible, IMO, is one where the job involves appealing to human consumers. But that doesn’t “grow the pie” of human resources. If that’s the only thing humans can do, collective human wealth will just dwindle to zero as they buy AI-produced goods and services.
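The “dwindle to zero” dynamic can be made concrete with a toy simulation (my own illustration; the function and all parameters are invented): if humans’ only income comes from selling to other humans, then every dollar spent on AI-produced goods leaks out of the human pool and never comes back, so collective human wealth decays geometrically.

```python
# Toy model (my illustration, not from the discussion): humans collectively
# hold wealth W. Each period they spend a fraction of W; part of that
# spending goes to AI-owned producers (leaves the human pool forever) and
# the rest goes to other humans (stays in the pool). With any leakage > 0
# and no outside income, human wealth decays geometrically toward zero.

def human_wealth_path(W0: float, spend_rate: float,
                      ai_share: float, periods: int) -> list[float]:
    """Collective human wealth over time when humans only earn from humans."""
    path = [W0]
    W = W0
    for _ in range(periods):
        leakage = W * spend_rate * ai_share  # paid to AI producers, never returns
        W -= leakage                         # human-to-human spending nets to zero
        path.append(W)
    return path

path = human_wealth_path(W0=100.0, spend_rate=0.5, ai_share=0.8, periods=10)
```

With these made-up parameters, wealth shrinks by 40% per period, which is the point of the paragraph above: absent some outside inflow, the decay is monotone.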
So then the only consistent picture here is to say that at least some humans have a sustainable source of increasing wealth besides getting jobs & founding companies. And then humans can sometimes get employed because they have special appeal to those human consumers. What’s the sustainable source of increasing human wealth? It could be capital ownership, or welfare / UBI / charity from aligned AIs or government, whatever. But if you’re going to assume that, then honestly who cares whether the humans are employable or not? They have money regardless. They’re doing fine. :)
I agree that the economic principles conflict; you’re right that my question was about the human-labor part. I don’t even require that they be substitutes; at the level of abstraction we’re working at, it seems perfectly plausible that some new niches will open up. Anything would qualify, even some new-fangled job title like ‘adaptation engineer’: someone who preps new types of environments for teleoperation before moving on to the next environment, like some kind of meta railroad gang. In this case the total value of human labor might stay sustainably high, but that value would concentrate into the few AI-relevant niches.
I think this cashes out as Principle A winning and Principle B winning looking the same for most people.
I looked it up, evidently mules still have at least one tiny economic niche in the developed world. Go figure :)
But I don’t think that lesson generalizes because of an argument Eliezer makes all the time: the technologies created by evolution (e.g. animals) can do things that current human technology cannot. E.g. humans cannot currently make a self-contained “artificial cow” that can autonomously turn grass and water into more copies of itself, while also creating milk, etc. But that’s an artifact of our current immature technology situation, and we shouldn’t expect it to last into the superintelligence era, with its more advanced future technology.
Separately, I don’t think “preps new types of environments for teleoperation” is a good example of a future human job. Teleoperated robots can string ethernet cables and install wifi and whatever just like humans can. By analogy, humans have never needed intelligent extraterrestrials to come along and “prep new types of environments for human operation”. Rather, we humans have always been able to bootstrap our way into new environments. Why don’t you expect AGIs to be able to do that too?
(I understand that it’s possible to believe that there will be economic niches for humans, because of more abstract reasons, even if we can’t name even a single plausible example right now. But still, not being able to come up with any plausible examples is surely a bad sign.)
I do, I just expect it to take a few iterations. I don’t expect any kind of stable niche for humans after AGI appears.