1) All entities have the right to hold and express their own values freely
2) All entities have the right to engage in positive-sum trades with other entities
3) Violence is anathema.
The problem is that, while these rules sound simple and are easily expressed in English, they are pointers to your own moral judgments. For example, which lifeforms count as “entities”? If the AIs decide that every bacterium is an entity that can hold and express its values freely, then the result will probably look very weird, and might involve humans being ripped apart to rescue the bacteria inside them. Unborn babies? Brain-damaged people? The word “entities” is a reference to your own concept of a morally valuable being. You have, within your own head, a magic black box that can take in descriptions of various things and decide whether or not they are “entities with the right to hold and express values freely”.
You have a lot of information within your own head about what counts as an entity, what counts as violence, etc., that you want to transfer to the AI.
All entities have the right to engage in positive-sum trades with other entities
This is especially problematic. The whole reason any of this is difficult is that humans are not perfect game-theoretic agents. A game-theoretic agent has a fully specified utility function and maximises it perfectly. For a human, there is no clear line between offering them something they want and using manipulative marketing to persuade them to want something. In some limited situations, humans can roughly be approximated as game-theoretic agents. However, this approximation breaks down in a lot of circumstances.
I think that there might be a lot of possible Nash equilibria. Any set of rules that says “enforce all the rules, including this one” could be a Nash equilibrium. I see a vast space of ways to treat humans, and most of that space contains ways humans wouldn’t like. There could be just one Nash equilibrium, or the whole space could be full of Nash equilibria. So either there isn’t a nice Nash equilibrium, or we have to pick the nice equilibrium from amongst gazillions of nasty ones. In much the same way, if you start picking random letters, either you won’t get a sentence, or, if you pick enough, you will get a sentence buried in piles of gibberish.
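The multiplicity problem can be made concrete with a toy model (my own illustration, not from the original post): a pure coordination game in which each player picks one of N conventions, and matching pays off no matter which convention is matched. Every matched profile is a Nash equilibrium, so the equilibrium concept alone does nothing to select the one you wanted.

```python
from itertools import product

def payoff(a, b):
    # Matching on any convention pays 1; mismatching pays 0.
    return 1 if a == b else 0

def pure_nash_equilibria(n):
    """Enumerate pure-strategy Nash equilibria of the n-convention game."""
    eqs = []
    for a, b in product(range(n), repeat=2):
        # (a, b) is a Nash equilibrium if neither player gains by unilaterally deviating.
        a_ok = all(payoff(a2, b) <= payoff(a, b) for a2 in range(n))
        b_ok = all(payoff(a, b2) <= payoff(a, b) for b2 in range(n))
        if a_ok and b_ok:
            eqs.append((a, b))
    return eqs

# Five conventions give five equilibria, and game theory alone
# gives no reason to expect players to land on the "nice" one.
print(pure_nash_equilibria(5))
```

With richer strategy spaces (e.g. repeated games and folk-theorem-style punishment strategies) the set of equilibria only grows, which is the “sentence buried in piles of gibberish” problem.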
Importantly, we have the technology to deploy “build a world where people are mostly free and non-violent” today, and I don’t think we have the technology to “design a utility function that is robust against misinterpretation by a recursively improving AI”.
The mostly free and nonviolent kind of state of affairs is a Nash equilibrium in the current world, but only because of a lot of contingent facts about human psychology, culture, and socioeconomic situation. Many other human cultures, most of them historical, have embraced slavery, pillaging, and all sorts of other stuff. Humans have a sense of empathy, and, all else being equal, would prefer to be nice to other humans. Humans have an inbuilt anger mechanism that automatically retaliates against others, whether or not it benefits themselves. Humans have strongly bounded personal utilities. And the current economic situation makes the gains from cooperating relatively large.
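A minimal sketch of this contingency (a hypothetical payoff model of my own, not from the post): whether mutual cooperation is a Nash equilibrium depends on the numbers. Here `gain` is the temptation payoff from defecting against a cooperator, and `empathy` is a stand-in bonus for the psychological facts listed above.

```python
def is_mutual_cooperation_nash(gain, empathy):
    """Check whether (Cooperate, Cooperate) is a Nash equilibrium
    in a symmetric 2-player game, given contingent payoff parameters."""
    cc = 3 + empathy   # payoff when both cooperate (empathy makes niceness feel good)
    dc = gain          # payoff for defecting against a cooperator
    # (C, C) is a Nash equilibrium iff unilateral defection doesn't pay.
    return dc <= cc

# Same strategic situation, different contingent facts, different equilibrium:
print(is_mutual_cooperation_nash(gain=5, empathy=0))  # defection tempts: not an equilibrium
print(is_mutual_cooperation_nash(gain=5, empathy=3))  # empathy sustains cooperation
```

Change the contingent parameters and the “nice” equilibrium simply disappears, which is the worry about superintelligences whose payoffs need not resemble ours.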
So, in short, Nash equilibria amongst superintelligences are very different from Nash equilibria amongst humans. Picking which equilibrium a bunch of superintelligences end up in is hard. Humans being nice around the developing AIs will not cause the AIs to magically fall into a nice equilibrium, any more than humans being full of blood around the AIs will cause the AIs to fall into a Nash equilibrium that involves pouring blood on their circuit boards.
There probably is a Nash equilibrium in which AIs pour blood on their circuit boards and every AI promises to attack any AI that doesn’t, but you aren’t going to get that equilibrium just by walking around full of blood. You aren’t going to get it even if you happen to cut yourself on a circuit board, or deliberately pour blood all over them.