Is this equivalent to the modified Newcomb’s problem?
Omega looks at my code and produces a perfect copy of me which it puts in a separate room. One of us (decided by the toss of a coin if you like) is told, “if you put $1000 in the box, I will give $1000000 to your clone.”
Once Omega tells us this, we know that putting $1000 in the box won’t get us anything, but if we are the sort of person who puts $1000 in the box then we would have gotten $1000000 if we were the other clone.
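The ex-ante argument can be checked with a quick expected-value sketch. This is only an illustration under the assumptions stated above: both clones necessarily run the same decision procedure (so one policy fixes both actions), and a fair coin decides which clone is offered the deal.

```python
def expected_payoff(policy_pays: bool) -> float:
    """Expected dollars for one clone, averaged over the coin toss.

    Both clones share one policy, so 'policy_pays' describes both of them.
    """
    if not policy_pays:
        # Neither clone pays, neither clone is paid.
        return 0.0
    # Half the time I am the one offered the deal and I pay $1000;
    # half the time my clone is offered the deal, pays, and I receive $1,000,000.
    return 0.5 * (-1_000) + 0.5 * 1_000_000

# Paying is the better policy before you learn which clone you are:
# expected_payoff(True) → 499500.0, expected_payoff(False) → 0.0
```

Ex ante, being the sort of agent who pays dominates, even though the clone actually asked gains nothing by paying.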
What happens now if Omega is able to change my utility function? Maybe I am a paperclipper, but my copy has been modified so that every instance of “paperclip” in its decision calculus has been replaced by “paperweight” (or more precisely, the copy is what would have happened if my entire history had been modified by replacing paperclips by paperweights). Omega then offers one of the copies of me, chosen randomly, the choice between producing 1000 paperclips (resp. paperweights) and 1000000 paperweights (resp. paperclips). This seems like just as reasonable a question, if changing an agent’s utility function makes sense. But now suppose I remove the randomness, and just always give the paperweighter the choice between making 1000 paperweights and 1000000 paperclips. Now I can’t find a reasonable argument for making the paperclips.
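The same ex-ante arithmetic, written in the agent's own units (paperclips for the clipper, paperweights for the weighter), makes the role of the coin explicit. This is a hypothetical sketch of the payoffs described above, nothing more:

```python
def own_utility(choose_cross: bool, randomized: bool) -> float:
    """Expected units of *my* good, given that both copies share one policy.

    choose_cross: the copy given the offer makes 1,000,000 of the OTHER
                  copy's good instead of 1,000 of its own.
    randomized:   a fair coin picks which copy gets the offer; if False,
                  the paperweighter always chooses, and we evaluate from
                  the paperweighter's perspective.
    """
    if randomized:
        if choose_cross:
            # 50%: I'm chosen and make the other's good (0 for me);
            # 50%: my counterpart is chosen and makes 1,000,000 of MY good.
            return 0.5 * 0 + 0.5 * 1_000_000
        # 50%: I'm chosen and make 1,000 of my own good; 50%: I get nothing.
        return 0.5 * 1_000 + 0.5 * 0
    # Deterministic version: the paperweighter always gets the choice.
    if choose_cross:
        return 0.0      # it makes paperclips, worthless to it
    return 1_000.0      # it makes its own paperweights

# With the coin, cross-producing wins ex ante (500000 vs 500 expected units);
# without it, the paperweighter has no reason to cross-produce (0 vs 1000).
```

The asymmetry is the whole point: once the coin is removed, the shared-policy argument no longer pays the paperweighter anything in its own units.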
There is at least a slight difference, in that in the stated version it is at least questionable whether any version of you actually gets anything useful out of giving Omega money.