The point of the question was: why not define the core values that are important to humans for an AGI, so that for the complex cases we could tell the AGI to ‘leave it to the humans to decide’?
I think the main reason is that we don’t believe AGI works in a way where you could meaningfully tell it to let humans decide the complex cases.
Apart from that, there are already plenty of philosophers engaged in trying to determine which issues are important to humans and in what way they are important.