Don’t you need AI to go through the many millions of experiences that it might take to develop a good morality strategy?
I’m entranced by Jordan Peterson’s descriptions, which seem to light up the evolutionary path of morality for humans. Shouldn’t AI be set up to try to grind through the same progress?
I think the main thing you're missing here is that an AI will not generally share common learning faculties with humans. Raising an AI as a human would still leave it wildly different from a normal human, because it isn't built to learn from those experiences the way a human does.