> Sorry... did you mean FAI is about societies, or FAI is about singletons?
>
> But if ethics does emerge as an organisational principle in societies, that’s all you need for FAI. You don’t even need to worry about one sociopathic AI turning unfriendly, because the majority will be able to restrain it.
UFAI is about singletons. If you have an AI society whose members compare notes and share information—which is instrumentally useful for them anyway—you reduce the probability of a singleton fooming.
Any agent that fooms becomes a singleton. Thus, it doesn’t matter whether it acted nice while in a society; all that matters is whether it acts nice as a singleton.
An agent in a society is unable to force its values on the society; it needs to cooperate with the rest of society. A singleton is able to force its values on the rest of society.
> Sorry... did you mean FAI is about societies, or FAI is about singletons?
>
> But if ethics does emerge as an organisational principle in societies, that’s all you need for FAI. You don’t even need to worry about one sociopathic AI turning unfriendly, because the majority will be able to restrain it.
FAI is about singletons; the idea is that the first one to foom wins.
ETA: also, rational agents may be ethical in societies, but there’s no advantage to being an ethical singleton.
> UFAI is about singletons. If you have an AI society whose members compare notes and share information—which is instrumentally useful for them anyway—you reduce the probability of a singleton fooming.
>
> Any agent that fooms becomes a singleton. Thus, it doesn’t matter whether it acted nice while in a society; all that matters is whether it acts nice as a singleton.
I don’t get it: any agent that fooms becomes superintelligent. Its values don’t necessarily change at all, nor does its connection to its society.
An agent in a society is unable to force its values on the society; it needs to cooperate with the rest of society. A singleton is able to force its values on the rest of society.