I’m thinking along similar lines and appreciate your articulation.
“How do we make… [self-interested] AGI that cares enough to act compassionately for the benefit of all beings?” Or: under what conditions would compassion in self-interested AGI be selected for?
Not a concrete answer, but the end of this post gestures at one: https://www.lesswrong.com/posts/9f2nFkuv4PrrCyveJ/make-superintelligence-loving