If, the smarter you get, the more things you think are social convention and the fewer you think are absolute morality, then what is our self-improving AI eventually going to think about the CEV we coded in back when it was but an egg?
It isn’t going to think the CEV is an absolute morality; it’ll just keep doing what it is programmed to do, because that is what it does. If the programming is correct, it’ll keep implementing CEV. If it was incorrect, then we’ll probably all die.
The relevance to ‘absolute morality’ here is that if the programmers happened to believe there was an absolute morality and tried to program the AI to follow that, then they would fail, potentially catastrophically.