I think AIs will be able to do all cognitive labor for fewer resources than a human needs to survive. In particular, “Scott Alexander quality writing” and “long term planning and coordination of programmers”—tasks that you assume will stay with humans—seem to me like tasks where AIs will surpass the best humans before the decade is out. Any “replacement industry” tasks can be taken up by AIs as well, because AI learning will keep getting better and more general. And it doesn’t seem to matter whether demand satiates or grows: even fast-growing demand would be met more cheaply by building more AIs than by using the same resources to feed humans.
(This is also why Ricardian comparative advantage won’t apply. If the AI side has a choice of trading with humans for something, vs. spending the same resources on building AIs to produce the same thing cheaper, then the latter option is more profitable. So after a certain point in capability development, the only thing AIs and AI companies will want from us is our resources, like land; not our labor. The best analogy is enclosures in England.)
This is also why Ricardian comparative advantage won’t apply. If the AI side has a choice of trading with humans for something, vs. spending the same resources on building AIs to produce the same thing cheaper, then the latter option is more profitable.
Maybe it’s equivalent, but I have been thinking of this as “The price at which humans have comparative advantage will become lower than subsistence, so humans refuse the job and/or die out anyway.” AKA this is what happened to most horses after cars got cheap.
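The subsistence-floor version of the argument can be made concrete with a toy calculation (every number below is made up purely for illustration): if competition drives the price of output toward the AI's marginal cost, the wage a human can earn at that price can fall below what survival costs, even though a Ricardian comparative advantage still formally exists.

```python
# Toy model of the "comparative advantage below subsistence" point.
# All numbers are illustrative assumptions, not estimates.

ai_units_per_day = 1000      # output of one AI instance per day
ai_cost_per_day = 10.0       # cost to run that instance ($/day)
human_units_per_day = 8      # output of one human per day
subsistence_cost = 30.0      # $/day a human needs to survive

# In a competitive market the price per unit falls toward the AI's
# marginal cost of producing one unit.
price_per_unit = ai_cost_per_day / ai_units_per_day

# The most a human can earn per day selling output at that price:
human_wage = human_units_per_day * price_per_unit

# The trade is still "mutually beneficial" in the Ricardian sense,
# but the wage it supports is below subsistence, so the human
# refuses the job and/or dies out anyway.
print(human_wage, human_wage < subsistence_cost)
```

In this sketch the human's best available wage is $0.08/day against a $30/day subsistence cost; comparative advantage never stops applying, it just stops being survivable.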
Maybe it’s equivalent, but I have been thinking of this as “The price at which humans have comparative advantage will become lower than subsistence, so humans refuse the job and/or die out anyway.” AKA this is what happened to most horses after cars got cheap.
Yeah, I think your formulation is more correct than mine.