Presumably, morals can be derived from game-theoretic arguments about human society, just as aerodynamically efficient shapes can be derived from Newtonian mechanics. Presumably, Eliezer’s simulated planet of Einsteins would be able to infer everything about the tentacle-creatures’ morality from the creatures’ biology and evolutionary history alone. So I think this hypothetical super-AI could in fact figure out what morality humans subscribe to. But of course that morality wouldn’t apply to the super-AI, since the super-AI is not human.