Another possibility, easy to see if you think more like an engineer and less like a philosopher:
The AI will have to operate under light-speed delay, and so will have to be built from multiple nodes. It is entirely possible that some morality systems would not permit efficient solutions to this challenge (e.g. they would collapse into some sort of war between modules, or otherwise fail to collaborate intellectually).
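To make the intuition concrete, here is a minimal toy simulation, with entirely made-up parameters (node count, delay, the 50% compute-loss figure for inter-node conflict), of nodes improving a shared objective under message delay. One inter-node rule shares results; the other spends compute contesting the other nodes. It is a sketch of the claimed failure mode, not a model of any actual AI architecture:

```python
import random

# Toy model: N nodes hill-climb a shared objective under message delay.
# Under a "cooperative" inter-node rule, each node broadcasts its best
# result and adopts any better one that arrives. Under an "adversarial"
# rule, nodes share nothing and burn half their compute contesting each
# other (modeled simply as skipped work). All numbers are hypothetical.

N_NODES = 8
TICKS = 200
DELAY = 5  # light-speed delay between nodes, in ticks

def run(cooperative: bool, seed: int = 0) -> float:
    rng = random.Random(seed)
    best = [0.0] * N_NODES  # each node's best-known objective value
    in_flight = []          # (arrival_tick, value) messages en route
    for tick in range(TICKS):
        # Deliver messages whose propagation delay has elapsed.
        arrived = [v for (t, v) in in_flight if t <= tick]
        in_flight = [(t, v) for (t, v) in in_flight if t > tick]
        for i in range(N_NODES):
            if cooperative and arrived:
                best[i] = max(best[i], max(arrived))  # adopt shared progress
            if not cooperative and rng.random() < 0.5:
                continue  # compute wasted on inter-node conflict
            best[i] += rng.random()  # one local improvement step
        if cooperative:
            in_flight.append((tick + DELAY, max(best)))
    return max(best)

print("cooperative rule :", round(run(True), 1))
print("adversarial rule :", round(run(False), 1))
```

Running it, the cooperative rule ends far ahead: every node's local progress compounds across the whole system despite the delay, while the adversarial rule leaves each node with only its own (halved) progress.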
It is likely that there is only a limited number of good solutions to peer-to-peer intelligence design, and that whichever one is found would be substantially similar to our own solution to fundamentally the same problem, the solution we call 'morality', complete with its various non-utilitarian quirks.
edit: that is, our 'morality' is the set of rules for inter-node interaction in a society, and some such rule-sets simply don't work. The orthogonality thesis, in any practical sense, is a conjunction of a potentially very large number of propositions of the form 'this consideration does not break the symmetry between goals', each assumed by omission rather than actually considered. Any consideration not yet considered can break the symmetry between different goals, and once it is broken, a further such consideration is incredibly unlikely to restore it.
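A quick back-of-the-envelope illustration of why such a conjunction is fragile, using invented numbers (both the per-proposition probability and the count are assumptions for the sake of the arithmetic, not estimates):

```python
# Illustrative only: if the thesis requires, say, 100 independent
# "this consideration does not break the symmetry" propositions,
# each holding with probability 0.99, the whole conjunction survives
# with probability 0.99 ** 100, i.e. only about 37%.
p_single = 0.99  # hypothetical per-proposition probability
n_props = 100    # hypothetical number of omitted considerations
print(f"P(conjunction holds) = {p_single ** n_props:.3f}")  # ~0.366
```

Even near-certainty on each omitted consideration leaves the conjunction as a whole more likely false than true once enough considerations are in play.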