The Validity of Self-Locating Probabilities (Pt. 2)

In a previous post, I argued that self-locating probabilities are not valid concepts. Many commenters asked me to use examples with concrete numbers and bets. So here is an example that is essentially the Sleeping Beauty Problem, with a small twist to highlight the problems of self-locating probability.

Cloning with Memory (with a coin toss)

The experiment is almost the same as before. Tonight during your sleep, a mad scientist will scan your body at the molecular level to create a highly accurate clone. The process is so advanced that the created person will retain the original's memories to a degree not discernible by human cognition. So after waking up, there is no way to tell whether you are the Original or the Clone. However, the mad scientist will perform the cloning only if a fair coin toss lands on Tails (he will scan you regardless). Now, after waking up, ask yourself: "What is the probability that I am the Original?" And: "What is the probability that the coin landed on Heads?"

Possible Answers

Let me present my answer first so it is out of the way: the probability of Heads is 1/2, since it is a fair coin toss. And the "probability that I am the Original" is not a valid concept. "I" is an identification based on nothing but my first-person perspective. Whether "I" am the Original or the Clone is something primitive, not analyzable. Any attempt to justify this probability requires additional postulates, such as equating "I" to a random sample of some sort.

Perhaps the more popular answer is that the probability that I am the Original is 2/3, whereas the probability of Heads is 1/3. This corresponds to the Thirder camp in the Sleeping Beauty Problem. The rationale may not be the same for all Thirders, but it typically follows the Self-Indication Assumption: finding that I exist eliminates the possibility that I am the Clone while the coin landed Heads.
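The Thirder numbers can be recovered with simple bookkeeping: put equal credence on each centered possibility consistent with my existence (the equal three-way split is the Thirder/SIA assumption, not something argued for here). A minimal sketch:

```python
# Thirder / SIA bookkeeping: equal credence over every awakened copy that
# could be "me". (Heads, Clone) is excluded: no clone is made on Heads.
centered_worlds = [("Heads", "Original"), ("Tails", "Original"), ("Tails", "Clone")]
credence = {w: 1 / len(centered_worlds) for w in centered_worlds}

p_original = sum(p for (coin, who), p in credence.items() if who == "Original")
p_heads = sum(p for (coin, who), p in credence.items() if coin == "Heads")
print(p_original)  # 2/3
print(p_heads)     # 1/3
```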

Another camp says the probability of Heads is 1/2 and the probability that I am the Original is 3/4. This corresponds to the Halfer camp in the Sleeping Beauty Problem. This camp endorses the "no new information" argument, but its members have different reasons regarding how to update given self-locating information.
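The Halfer bookkeeping keeps P(Heads) at the coin's objective 1/2 and, conditional on Tails, splits that half evenly between the two copies. (The even split is one common Halfer choice and is assumed here; as noted above, Halfers differ on this step.) A minimal sketch:

```python
# Halfer bookkeeping: start from the fair coin, then divide the Tails
# probability evenly between the two copies who might be "me".
credence = {
    ("Heads", "Original"): 1 / 2,            # Heads: only the Original exists
    ("Tails", "Original"): 1 / 2 * 1 / 2,    # Tails: two copies, split evenly
    ("Tails", "Clone"):    1 / 2 * 1 / 2,
}
p_heads = sum(p for (coin, who), p in credence.items() if coin == "Heads")
p_original = sum(p for (coin, who), p in credence.items() if who == "Original")
print(p_heads)     # 0.5
print(p_original)  # 0.75
```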

If You Say P(Heads)=1/3

Say the mad scientist wants to encourage people to participate in his experiment, so he decides to give 2 gold bars to each copy after waking them up. He will always offer each copy a bet: you can give up these 2 bars for a chance to win 5 if the coin landed Heads. All of this is disclosed to you, so no new information arrives when you are offered the bars and the bet. Say your objective is simply "I just want more gold" (and you are risk-neutral). Given that you think P(Heads)=1/3, would you take the bet? If you would, what makes your decision not reflect that probability?
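One way to pressure-test the answers is to simulate the experiment many times and compare "always take the bet" against "always keep the bars," counting total gold across all copies. A minimal sketch, assuming risk-neutral total-gold accounting (per-copy accounting is exactly where the camps disagree):

```python
import random

def run(trials=100_000, seed=0):
    rng = random.Random(seed)
    gold_keep = gold_take = 0
    heads_awakenings = total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        copies = 1 if heads else 2          # the Clone is created only on Tails
        total_awakenings += copies
        heads_awakenings += copies if heads else 0
        # Each copy holds 2 bars; the bet trades them for 5 bars iff Heads.
        gold_keep += 2 * copies
        gold_take += (5 if heads else 0) * copies
    return gold_keep / trials, gold_take / trials, heads_awakenings / total_awakenings

keep, take, freq_heads = run()
print(keep)        # ~3.0 bars per toss when declining
print(take)        # ~2.5 bars per toss when taking
print(freq_heads)  # ~1/3 of awakenings follow Heads
```

Note the two readings of the same numbers: per coin toss, declining yields more total gold, yet the long-run fraction of awakenings that follow Heads is the Thirder's 1/3.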

If You Say P(Heads)=1/2

This position faces the same trouble Elga pointed out in the Sleeping Beauty Problem. What happens when I learn that I am the Original, i.e., what is P(Heads | I am the Original)? A standard Bayesian update would give a probability of Heads of 2/3. But that can't be right. In this experiment, the Original and the Clone do not have to be woken up at the same time. The mad scientist could wake the Original first; in fact, the coin could be tossed after that. For dramatic effect, after telling you that you are the Original, the mad scientist could hand you the coin and let you toss it yourself. It seems absurd to say the probability of Heads is anything but 1/2. Why does the probability of Heads remain unchanged after learning you are the Original? How come the Bayesian update is not applicable to self-locating information?
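The frequency version of this tension can be checked directly: among awakenings in which the copy is in fact the Original, how often did the coin land Heads? A minimal sketch (contrast the result with the Halfer's update P(Heads | Original) = P(Heads and Original) / P(Original) = (1/2) / (3/4) = 2/3):

```python
import random

def p_heads_given_original(trials=100_000, seed=0):
    rng = random.Random(seed)
    original_awakenings = heads_and_original = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        # The Original wakes up in every trial; the Clone only on Tails.
        awakenings = [("Original", heads)]
        if not heads:
            awakenings.append(("Clone", heads))
        for who, h in awakenings:
            if who == "Original":
                original_awakenings += 1
                heads_and_original += 1 if h else 0
    return heads_and_original / original_awakenings

p = p_heads_given_original()
print(p)  # ~0.5: the long-run frequency stays at the coin's 1/2
```

The long-run frequency among Original-awakenings tracks the fair coin at 1/2, not the 2/3 the Bayesian update delivers, which is exactly the puzzle posed above.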