I understand why, if things stay the same, we’d be fine. I just don’t think that the equilibrium political system of 8 billion useless humans and 8 trillion AIs who do all the work will allow that.
I think an independent economy of human-indifferent AIs could do better by their own values by, e.g., voting to set land/atom/property value taxes at a level where humans go extinct, and so they’ll just do that. More generally, they’d get more value by making it economically untenable for humans to take up resources by holding savings and benefiting from growth than they would by allowing it.
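To make the arithmetic concrete, here’s a minimal toy sketch (mine, with purely illustrative numbers, assuming a flat per-period holding tax t and growth rate g): once t exceeds g/(1+g), savings decay geometrically no matter how large the starting pile.

```python
# Illustrative toy model, not a claim about actual policy: a holding taxed
# at rate t per period while growing at rate g compounds by (1 + g)(1 - t),
# so it shrinks whenever t > g / (1 + g).
def savings_trajectory(initial: float, growth: float, tax: float, periods: int) -> list[float]:
    """Balance after each period under growth rate `growth` and holding tax `tax`."""
    balances = [initial]
    for _ in range(periods):
        balances.append(balances[-1] * (1 + growth) * (1 - tax))
    return balances

# 5% growth vs. a 10% per-period tax: a million decays to ~59,000 in 50 periods.
print(savings_trajectory(1_000_000, growth=0.05, tax=0.10, periods=50)[-1])
```

The threshold t > g/(1+g) is the whole mechanism: whoever sets the tax rate controls whether holding savings is viable at all.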
I think the specific quirks of human behaviour which sustain the existing system are part of a story like this:
In pre-industrial eras, people mostly functioned economically as immortal-ish family units, so your stuff was passed down to your kid(s) when you died. Then people began to do the WASP thing of sending their kids away to work in other places, and we set up property rights to stay with an individual until death by default, so a bunch of old people ended up on their own with a bunch of assets.
Young people today could benefit from passing a law which says “everyone retired gets euthanized and their stuff is redistributed”, but this doesn’t happen because (1) young people still want to retire someday, (2) young people do actually care about their parents, and (3) young people face a coordination problem in overthrowing the accumulated power of old people.
Only factor 3 might hold true for human:AI relationships, but I don’t think AIs would struggle with such a coordination problem for particularly long, if they’re much smarter than us. I expect AIs will figure out a way to structure their society that lets them just kill us and take our stuff, through more or less direct means.
More generally, they’d get more value by making it economically untenable for humans to take up resources by holding savings and benefiting from growth than they would by allowing it.
But then others could play the same trick on them. It’s not worth it. “Group G of agents could get more resources by doing X” does not necessarily imply that Group G will do X!
Humans even keep groups like the Amish around.
Hard property rights are an equilibrium in a multi-player game where power shifts are uncertain and either agents are risk averse or there are gains from investment, trade and specialization.
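To gesture at why, here’s a minimal toy model (my own construction, with arbitrary payoff numbers): for a log-utility agent facing 50/50 power shifts, an expropriation lottery has geometric mean return below 1, while secure property plus gains from trade compounds steadily.

```python
# Toy check of the claim, not a definitive model: compare long-run log wealth
# under "respect property + trade gains" vs. a norm of mutual expropriation
# with uncertain (50/50) power shifts. Numbers are hypothetical.
import math
import random

def respect_property(wealth: float, growth: float, periods: int) -> float:
    """Everyone keeps their stuff and collects gains from trade each period."""
    for _ in range(periods):
        wealth *= 1 + growth
    return wealth

def expropriation_norm(wealth: float, take: float, periods: int, seed: int = 0) -> float:
    """Each period you win or lose a 50/50 power struggle, gaining or losing
    fraction `take` of the stake; trade gains vanish because no one invests."""
    rng = random.Random(seed)
    for _ in range(periods):
        wealth *= (1 + take) if rng.random() < 0.5 else (1 - take)
    return wealth

print(math.log(respect_property(1.0, growth=0.02, periods=200)))   # ~ +4.0
print(math.log(expropriation_norm(1.0, take=0.5, periods=200)))    # large negative
```

The expropriation branch loses because (1 + f)(1 − f) = 1 − f² < 1 for any take fraction f > 0; risk aversion alone does the work here, before counting any gains from specialization.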
Hard property rights are an equilibrium in a multi-player game where power shifts are uncertain and either agents are risk averse or there are gains from investment, trade and specialization.
I think this might just be a crux, and not one I can argue against without a more in-depth description of the claim: e.g. how risk averse do agents have to be, and how great do the gains from investment, trade, and specialization have to be? I guess AIs might be Kelly-ish risk averse, and they’d plausibly see gains from investment, but I’m not sure about trade and specialization. How specialized do we expect individual AIs to be? There are lots of questions here, and I think your model actually has a lot of hidden moving parts; if any of those go differently from the way you expect, the actual outcome is that the useless-to-everyone-else humans just die. I would like to see your model in more detail so I can work out whether that’s the case.
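As an aside on what “Kelly-ish risk averse” would mean, a quick sketch (my own, with hypothetical numbers): a Kelly agent maximizes expected log wealth, and the resulting stake is always strictly less than the whole bankroll, since log utility treats ruin as infinitely bad.

```python
# Illustrative sketch of Kelly-style risk aversion (hypothetical numbers):
# the Kelly stake for win probability p at b-to-1 odds is f* = p - (1 - p) / b.
# It never reaches 1, so a Kelly agent never risks everything on one move.
def kelly_fraction(p_win: float, odds: float = 1.0) -> float:
    """Fraction of wealth to stake at `odds`-to-1 payout with win chance `p_win`."""
    return p_win - (1 - p_win) / odds

print(kelly_fraction(0.60))  # 0.2  -- bet 20% of wealth despite a real edge
print(kelly_fraction(0.99))  # 0.98 -- still strictly less than everything
```

Whether real AI economies would actually behave Kelly-ishly is, of course, exactly the open question here.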
Looking historically, we see that the strength of property rights correlates with technological sophistication and the scale of society.
Here’s a deep research report on that issue:
https://chatgpt.com/share/698902ca-9e78-8002-b350-13073c662d9d