Why wouldn’t AGI build a superhuman understanding of ethics, which it would then use to guide its decision-making?
An AGI may well develop a superhuman understanding of human ethics, perhaps in order to manipulate humans more effectively. But understanding ethics is not the same as caring about them: building an AI that would actually adopt those ethics and optimize for human values is the hard part.