...an alien ray gun that changed your ice cream preference from chocolate to vanilla, but you wouldn’t be okay being hit with an alien ray gun that changed your preferences from not-wanting-to-rape-people to wanting-to-rape-people.
I’m completely lost about that. I don’t see how vanilla preferences differ from rape preferences. We just happen to weigh them differently. But that is solely a fact about our evolutionary history.
Vanilla preferences are instrumental. I prefer chocolate because of the pleasure I get from eating it. If the alien ray made me want to eat vanilla ice cream rather than chocolate ice cream while I still enjoyed chocolate ice cream more, I would prefer not to be hit by it.
All I’m talking about is how I compute my utility function. I’m not postulating that my way of assigning utility lines up with any absolute facts, so I don’t see how the fact that our brains were evolved is relevant.
Is there a specific part of my post that you don’t understand or that you disagree with?
I think that there may be a failure-to-communicate going on because I play Rationalist’s Taboo with words like ‘should’ and ‘right’ when I’m not talking about something technical. In my mind, these words assert the existence of an objective morality, so I wouldn’t feel comfortable using them unless everyone’s utility functions converged to the same morality—this seems really really unlikely so far.
So, instead I talk about world-states that my utility function assigns utility to. What I think Eliezer is trying to get at in No License To Be Human is that you shouldn’t be a moral relativist (for the sake of not rendering your stated utility function inconsistent with your emotions), and that you should pursue your utility function instead of wireheading your brain to make it feel like you’re creating utility.
I think that I’ve interpreted this correctly, but I’d appreciate Eliezer telling me whether I have or not.
I agree with you; I disagree with Yudkowsky (or don’t understand him). From what you wrote, you seem to disagree with him as well.
Could you link me to the post of Eliezer’s that you disagree with on this? I’d like to see it.
This comment, as I wrote here. I don’t understand this post.