First I was like, “wow, good question!” Then I was like, “ooh, that one’s easy”. In world A people are just like us except they don’t die. In world B people are just like us except they don’t feel grief about dying. Right now, do you prefer our world to evolve into world A or world B? As far as I can tell, this is the general procedure for distinguishing your actual values from wireheading.
That’s not really a fair comparison, is it? There is no reason to choose world B, because in world A nobody feels grief about dying either (since nobody dies).
To make it fair, I think we need to change world A so that nobody dies, but everyone feels grief at random intervals, so that the average amount of grief is the same as in our world. Then it’s not clear to me which world I should prefer...
You’re right, I didn’t think about that. However, if avoiding grief were a terminal value but avoiding death weren’t, you’d be indifferent between world A and world B (in my original formulation). Are you?
I do prefer world A to world B in your original formulation. Unfortunately, from that fact we can only deduce that I’m not certain that avoiding death isn’t a terminal value. But I already knew that...
If there is a fact of the matter on whether avoiding death is a terminal value, where does that fact reside? Do you believe your mind contains some additional information for identifying terminal values, but that information is somehow hidden and didn’t stop you from claiming that “you’re not certain”?
I’m not Wei_Dai but in the general case that is how facts of the matter work.
World A is clearly better, because not only can people in it not feel grief, but they can do so indefinitely, without death stopping them from not feeling grief.