I agree with Andy Wood and Nick Tarleton. To put what they have said another way, you have taken the 2-place function
Rightness(person,act)
and replaced it with an unspecified unary rightness function which I will call “Eliezer’s_big_computation( - )”. You have told us informally that we can approximate
Eliezer’s_big_computation( X ) = happiness( X ) + survival( X ) + justice( X ) + individuality( X ) + …
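The move from the 2-place function to a unary one is essentially currying: fix the `person` argument of Rightness(person, act) and you get a 1-place function of acts alone. Here is a minimal sketch of that point; the component functions and weights are invented toy stand-ins for illustration, not anyone's actual values.

```python
# Hypothetical toy component functions of an act (names echo the informal
# approximation above; the scoring is invented for illustration).
happiness = lambda act: act.get("happiness", 0)
justice = lambda act: act.get("justice", 0)

def rightness(person, act):
    # The 2-place function: each person weighs the components differently.
    return sum(weight * component(act)
               for component, weight in person["weights"].items())

def big_computation(person):
    # Currying: fixing `person` yields a unary "big computation" over acts.
    return lambda act: rightness(person, act)

# A toy agent with made-up weights, and a toy act.
agent = {"weights": {happiness: 1.0, justice: 1.0}}
agents_big_computation = big_computation(agent)

act = {"happiness": 2, "justice": 3}
print(agents_big_computation(act))  # 5.0 under these toy weights
```

Nothing in the currying step itself tells you which `person` (or which weighted sum) to fix, which is exactly the objection: the choice of "big computation" carries all the moral content.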
But others may define other “big computations”. For example
God’s_big_computation( X ) = submission( X ) + oppression_of_women( X ) + conquest_of_heathens( X ) + worship_of_god( X ) + …
How are we to decide which “big computation” encompasses that which we should pursue?
You have simply replaced the problem of deciding which actions are right with the equivalent problem of deciding which action-guiding computation we should use.
Your CEV algorithm is likely to return something more like God’s_big_computation( - ) than Eliezer’s_big_computation( - ), because God’s_big_computation more closely resembles the beliefs of the 6 billion people on this planet. And even if it did return Eliezer’s_big_computation( - ), I’m not sure I would agree with that outcome. In any case, I don’t think you have said anything new or particularly useful here; I think we all need to think about this issue more.
As a matter of fact, Richard Hollerith and I have independently thought of a canonical notion of goodness which is objective. He calls it “goal system zero”; I call it “universal instrumental values”.