I gave an example where choice of representation is important: Eliezer's CEV. If the choice of representation shouldn't be important, then that seems to be an argument against CEV.
Bullet acknowledged and bitten. A Friendly AI attempting to identify humanity's supposed CEV will also have to be a politician and have enough support that humans don't shut it down. As a politician, it will have to appeal to people with the standard biases. So it's not enough for it to say, "okay, here's something all of you should agree on as a value, and benefit from me moving humanity to that state".
And in figuring out what would appeal to humans, it will have to model the same biases that blur the distinction.
I was referring to you referring to my post on playing with utility/prior representations.