Yes, there will always be controversy across countries and cultures, but this doesn't mean we shouldn't make a start on working out a system. In fact it highlights that we should be doing this, if for no other reason than to get an idea of which of these issues are important to the majority of humans.
The point of the question was: why are we not defining which core values are important to humans for an AGI, so that for the complex cases we could tell the AGI to 'leave it to the humans to decide'?
I think the main reason is that we don't believe AGI works in a way where you could meaningfully tell it to let humans decide the complex cases.
Apart from that, there are already plenty of philosophers engaged in working out which issues are important to humans and in what ways they are important.