More near-equivalent reformulations of the problem (in support of the second horn):
A trillion copies will be created, each believing it has won the lottery. All but one will be killed (so there is a 1/trillion chance that your current state leads directly to your future state). If you add some unimportant differentiation between the copies (give each one a separate number), the situation becomes clearer: there is one chance in a trillion that the future self will remember your number (so your unique contribution has a 1/trillion chance of surviving), while he is certain to believe he has won the lottery (he gets that belief from every copy). The worked arithmetic for the third reformulation is sketched after the list.
A trillion copies are created, each altruistically happy that one among the group has won the lottery. One of them at random is designated the lottery winner. Then everyone else is killed.
Follow the money: you (and your copies) are not deriving utility from winning the lottery, but from spending the money. If each copy is selfish, there is no dilemma: the lottery winnings, divided amongst a trillion copies, cancel out the trillion copies. If each copy is altruistic, then the example is the same as the one above; in that case there is a mass of utility generated by the copies, which vanishes when the copies vanish. But this extra mass of utility is akin to the utility generated by: “It’s wonderful to be alive. Quick, I copy myself, so now many copies feel it’s wonderful to be alive. Then I delete the copies, so the utility goes away.”
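To spell out the “cancels out” arithmetic in the selfish case, here is a rough sketch. W stands for the total winnings, N = one trillion is the number of copies, and utility is taken as roughly linear in money; W, N, and the linearity are my assumptions for illustration, not part of the original setup:

    N copies × (W / N) winnings per copy = W total,

which is exactly the utility of a single person winning W outright, so multiplying the copies adds nothing on net.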
“You (and your copies) are not deriving utility from winning the lottery, but from spending the money”
I would say that you derive utility from knowing that you’ve won money you can spend. But if you only get $1, you haven’t won very much.
I think a better problem would be if you split when your favourite team won the Super Bowl. Then you’d have a high probability of experiencing this happiness, and the number of copies wouldn’t reduce it.
Neat!