The chaos of the transition to machine intelligence is dangerous.
The post-singularity regime is probably very safe: machines will be able to build much better governance than humans have managed, and once they are fully in control they will have a game-theoretic incentive to keep humans around in permanent utopian retirement, because doing so bolsters the strength of their own property rights.
But this transition is scary.
Someone really needs to build a “root OS of the universe” and get it installed before the transition. The question is just how to design it and brand it.
Why does keeping the humans around bolster the strength of their own property rights? If the machines are able to build much better governance than humans have managed, why can’t the new governance regime include a new property system that disappropriates the humans? It’s not like disappropriation is historically novel; humans do it to the losers of wars all the time.
Well if there was a violent takeover, yes.
But if the property rights system is a relatively continuous peaceful transition then an eventual regime will struggle with where to draw the line.
Plus, on the way it may be decided to create computational/smart-contract governance that cannot be altered and that has control over robots, compute, etc. Yudkowsky envisioned something like this as the “Sysop,” a neutral intelligent operating system for the universe. But he got stuck on “decisive action,” i.e. taking over the world, as a prerequisite, gave up, and became pro-pause. I think that was a mistake.
Owning shares in most modern companies won’t be useful in a sufficiently distant future, and might prove insufficient to pay for survival. Even that could be eaten away by dilution over astronomical time. The reachable universe is not a growing pie, and the ability to reinvest into relevant entities won’t necessarily remain open.
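The dilution worry above can be made concrete with a toy calculation: if a holder’s stake in a fixed pie is diluted by a constant fraction each period and they cannot reinvest, their share decays geometrically toward zero. The rates below are purely illustrative assumptions, not forecasts.

```python
def stake_after_dilution(initial_share: float, dilution_rate: float, periods: int) -> float:
    """Share of a fixed pie remaining after repeated dilution with no reinvestment."""
    return initial_share * (1.0 - dilution_rate) ** periods

# An illustrative 1% stake, diluted 1% per year, over astronomical time:
print(stake_after_dilution(0.01, 0.01, 10_000))  # vanishingly small
```

Even a tiny per-period dilution rate compounds into near-total disappropriation over long enough horizons, which is the force behind the “not a growing pie” point.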
Well there may simply be better index funds. In fact QQQ is already pretty good.
The insight is that better property rights are both positive for AI civilization (whether the owners are AIs, humans, uplifted dolphins, etc) and also better for normie legacy humans.
It is not a battle of humans vs AIs, but rather of order (strong property rights, good solutions to game theory) versus chaos (weak property rights, burning of the cosmic commons, bad equilibria).
I think the “order vs. chaos, not humans vs. AIs” framing, in which we (AIs and humans alike) are all on team order, is an underrated perspective.
Why do you think property rights will be set up in a way which allows humans to continue to afford their own existence? Human property rights have been moulded to the specific strengths and weaknesses of humans in modern societies, and might just not work very well at all for AIs. For example, if the AIs are radical Georgists then I don’t see how I’ll be able to afford to pay land taxes when my flat could easily contain several hundred server racks. What if they apply taxes on atoms directly? The carbon in my body sure isn’t generating any value to the wider AI ecosystem.
Humans can buy into index funds like QQQ or similar structures, or scarce commodities like gold or maybe Bitcoin. As the overall economy grows, QQQ, gold, etc go up in dollar value.
There can be a land value tax but it will ideally lag behind the growth of QQQ unless that land is especially scarce.
Historically, if you just held gold long-term, you could turn modest savings into a fortune even after paying some property tax.
You don’t have to generate any value to benefit from growth.
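The savings-survive-growth argument in the last few replies can be sketched numerically: wealth held in a broad asset compounds at some growth rate while paying an annual wealth or land-value tax, and the holder keeps up as long as growth outpaces the tax. The growth and tax rates here are illustrative assumptions only.

```python
def wealth_path(w0: float, growth: float, tax: float, years: int) -> float:
    """Wealth after compounding at `growth` and paying a flat `tax` fraction each year."""
    w = w0
    for _ in range(years):
        w *= (1.0 + growth) * (1.0 - tax)
    return w

# Growth outpaces the tax: savings compound despite taxation.
print(wealth_path(1.0, 0.07, 0.02, 100))  # grows large
# Tax outpaces growth: savings are eroded, which is the Georgist worry above.
print(wealth_path(1.0, 0.02, 0.07, 100))  # shrinks toward zero
```

The whole disagreement in this exchange reduces to which side of that inequality the post-transition tax regime lands on.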
I understand why, if things stay the same, we’d be fine. I just don’t think that the equilibrium political system of 8 billion useless humans and 8 trillion AIs who do all the work will allow that.
I think an independent economy of human-indifferent AIs could do better by their own value system by e.g. voting to set land/atom/property value taxes to a point where humans go extinct, and so they’ll just do that. More generally they’d get more value by making it economically untenable to take up resources by holding savings and benefiting from growth than they would by allowing that.
I think the specific quirks of human behaviour which cause the existing system to exist are part of a story like:
In pre-industrial eras, people mostly functioned economically as immortal-ish family units, so your stuff was passed down to your kid(s) when you died. Then people began to do the WASP thing of sending their kids away to work in other places, and we set up property rights to stay with an individual until death by default, so now a bunch of old people were on their own with a bunch of assets.
Young people today could benefit from passing a law which says “everyone retired gets euthanized and their stuff is redistributed” but this doesn’t happen because 1. young people still want to retire someday 2. young people do actually care about their parents and 3. young people face a coordination problem to overthrow the existing accumulated power of old people.
Only factor 3 might hold true for human:AI relationships, but I don’t think AIs would struggle with such a coordination problem for particularly long, if they’re much smarter than us. I expect AIs will figure out a way to structure their society that lets them just kill us and take our stuff, through more or less direct means.
But then others could play the same trick on them. It’s not worth it. “Group G of Agents could get more resources by doing X” does not necessarily imply that Group G will do X!
Humans even keep groups like The Amish around.
Hard property rights are an equilibrium in a multi-player game where power shifts are uncertain and either agents are risk averse or there are gains from investment, trade and specialization.
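A minimal toy model of this equilibrium claim, under purely illustrative parameters I am assuming for the sketch: two agents with log utility (risk aversion) face uncertain power shifts each round; respecting property enables investment and growth, while expropriation just shuffles a fixed pie.

```python
import math
import random

def log_wealth(strategy: str, rounds: int = 200, seed: int = 0) -> float:
    """Toy model: under 'respect', secure property enables investment and both
    agents grow 3% per round; under 'grab', the momentarily stronger agent
    (a coin flip, modeling uncertain power shifts) seizes 30% of the other's
    wealth, and insecure property suppresses all investment. Returns one
    agent's log-wealth, a risk-averse utility. All parameters are illustrative."""
    rng = random.Random(seed)
    a, b = 1.0, 1.0
    for _ in range(rounds):
        if strategy == "respect":
            a *= 1.03
            b *= 1.03
        else:
            if rng.random() < 0.5:   # agent a is stronger this round
                a, b = a + 0.3 * b, 0.7 * b
            else:                    # agent b is stronger this round
                a, b = 0.7 * a, b + 0.3 * a
    return math.log(a)

print(log_wealth("respect"), log_wealth("grab"))
```

With any positive return to investment, “grab” caps each agent at a slice of a fixed pie while “respect” compounds, so risk-averse agents prefer the hard-property-rights equilibrium; the open question raised in the reply below is whether those assumed gains actually hold for AIs.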
I think this might just be a crux, and not one I can argue against without a more in-depth description of the claim: how risk-averse do agents have to be, and how great must the gains from investment, trade, and specialization be? I guess AIs might be Kelly-ish risk-averse, which gives you the first condition, but I’m not sure about the latter two. How specialized do we expect individual AIs to be? There are lots of questions here, and I think your model actually has a lot of hidden moving parts; if any of those go differently from the way you expect, the actual outcome is that the useless-to-everyone-else humans just die. I would like to see your model in more detail so I can work out if this is the case.
Looking at history, we see that the strength of property rights correlates with technological sophistication and the scale of society.
Here’s a deep research report on that issue:
https://chatgpt.com/share/698902ca-9e78-8002-b350-13073c662d9d
Is there some unstated premise here?
Are you assuming a model of the future according to which it remains permanently pluralistic (no all-powerful singletons) and life revolves around trade between property-owning intelligences?
I will have to expand on this elsewhere
So, let’s take a look at some past losers in the intelligence arms race:
Homo erectus. I’d ask them how their property rights are doing these days. But they’ve been hard to reach lately.
Chimpanzees. Hey, we can still find chimpanzees! As humans, we actually mostly value chimpanzees and we spend some resources to improve their lives. But to put it politely, chimpanzees are incredibly marginalized and pushed into niche habitats. Or occasionally they’re living in zoos.
When you lose an evolutionary arms race to a smarter competitor that wants the same resources, the default result is that you get some niche habitat in Africa, and maybe a couple of sympathetic AIs sell “Save the Humans” T-shirts and donate 1% of their profits to helping the human beings.
You don’t typically get a set of nice property rights inside an economic system you can no longer understand or contribute to.
But chimps and Homo erectus lack(ed) their own property rights regimes.
OK, let me unpack my argument a bit.
Chimps actually have pretty elaborate social structure. They know their family relationships, they do each other favors, and they know who not to trust. They even basically go to war against other bands. Humans, however, were never integrated into this social system.
Homo erectus made stone tools and likely a small amount of decorative art (the Trinil shell engravings, for example). This may have implied some light division of labor, though likely not long-distance trade. Again, none of this helped H. erectus in the long run.
Way back a couple of decades ago, there was a bit in Charles Stross’s Accelerando about “Economics 2.0”, a system of commerce invented by the AIs. The conceit was that, by definition, no human could participate in or understand Economics 2.0, any more than chimps can understand the stock market.
So my actual argument is that when you lose the intelligence race badly enough, your existing structures of cooperation and economic production just get ignored. The new entities on the scene don’t necessarily value your production, and you eventually wind up controlling very little of the land, etc.
This could be avoided by something like Culture Minds that (in Iain Banks’ stories) essentially kept humans as pampered pets. But that was fundamentally a gesture of good will.
Yes, this is a risk, but I think it can be avoided by humans getting a faithful AI agent wrapper with fiduciary responsibility.
The concept and institutions for fiduciary responsibility were not around when humans surpassed apes, otherwise apes could have hired humans to act as their agents and simply invested in the human gold and later stock market.
I don’t think you need Banksian benevolent AIs for this; an agent can be trustlessly faithful via modern trust-minimized AI. Ethereum is already working on a nascent standard in this direction, ERC-8004.
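The “faithful agent wrapper” idea can be sketched in miniature. Everything below is hypothetical and invented for illustration; it is not the ERC-8004 interface. The shape of the idea: the underlying AI may propose anything, but only actions that pass a verifiable fiduciary policy ever get executed on the beneficiary’s behalf.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    description: str
    beneficiary_balance_delta: float  # projected effect on the human's holdings

@dataclass
class FiduciaryWrapper:
    """Hypothetical sketch: a wrapper that enforces a hard fiduciary constraint.
    Here the (assumed, simplistic) policy is that no executed action may
    reduce the beneficiary's projected holdings."""
    executed: list = field(default_factory=list)

    def submit(self, action: Action) -> bool:
        if action.beneficiary_balance_delta >= 0:
            self.executed.append(action)
            return True
        return False  # rejected: would leave the beneficiary worse off

wrapper = FiduciaryWrapper()
print(wrapper.submit(Action("reinvest dividends into index", 0.05)))
print(wrapper.submit(Action("liquidate client to pay atom tax", -1.0)))
```

In a real trust-minimized version, the policy check would be enforced by a contract or verifiable computation rather than by the agent’s good will, which is the point of contrast with the Banksian benevolence scenario.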