Sorry, when I said “False Universalism”, I meant things like: “one group wants to have kings, and another wants parliamentary democracy.” Or “one group wants chocolate, and the other wants vanilla.” Common moral algorithms seem to simply assume that the majority wins, so if the majority wants chocolate, everyone gets chocolate. Moral constructionism gets around this by saying: values may not be universal, but we can come to game-theoretically sound agreements (even if they’re only Timelessly sound, like Rawls’ Theory of Justice) on how to handle the disagreements productively, thus wasting fewer resources fighting each other when we could be spending them on Fun.
Basically, I think the correct moral algorithm is: use a constructionist algorithm to cluster people into groups who can then use realist universalisms internally.