Reply to Stuart on anthropics

You wake up in a hospital bed, remembering nothing of your past life. A stranger sits beside the bed, smiling. He says:

“I happen to know an amusing story about you. Many years ago, before you were born, your parents were arguing about how many kids to have. They settled on flipping a coin. If the coin came up heads, they would have one child. If it came up tails, they would have ten.”

“I will tell you which way the coin came up in a minute. But first let’s play a little game. Would you like a small piece of chocolate, or a big tasty cake? There’s a catch though: if you choose the cake, you will only receive it if you’re the only child of your parents.”

Stuart Armstrong has proposed a solution to this problem (see the fourth model in his post). Namely, you switch to caring about the average utility received by all kids in your branch. This doesn't change the utility any kid gets in any possible world, but it makes the problem amenable to UDT, which says all agents would have precommitted to choosing cake as long as it's worth more than two pieces of chocolate (the first model in Stuart's post).
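
To spell out where the "two pieces of chocolate" threshold comes from, here's a back-of-envelope sketch of the precommitment arithmetic as I understand the setup; the utility values `u_choc` and `u_cake` are placeholders of my own, not numbers from Stuart's post:

```python
# Precommitment arithmetic under branch-averaged utility.
# u_choc and u_cake are illustrative placeholder values.
u_choc = 1.0  # utility of a small piece of chocolate
u_cake = 3.0  # utility of a big tasty cake (try values above and below 2)

# Heads (probability 1/2): one kid, who gets the cake if they choose it.
# Tails (probability 1/2): ten kids, none of whom can get cake, so a
# cake-chooser walks away with nothing in that branch.
ev_cake = 0.5 * u_cake + 0.5 * 0.0     # expected branch-average for "cake"
ev_choc = 0.5 * u_choc + 0.5 * u_choc  # expected branch-average for "chocolate"

print(f"precommit to cake: {ev_cake}, to chocolate: {ev_choc}")
# Cake wins exactly when u_cake > 2 * u_choc -- hence "better than
# two pieces of chocolate".
```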

But.

Creating one of two physically separate worlds with probability 50% each should be decision-theoretically equivalent to creating both of them with probability 100%. In other words, a correct solution should still work if the coin is quantum. The problem should then be equivalent to creating all 11 kids, offering each of them chocolate or cake, and giving cake only if you're the first kid. But would you really choose cake in this case, knowing that you could have the chocolate for certain? What if there were 1001 kids? This is a hard bullet to swallow, and it seems to suggest that Stuart's analysis of his first model may be incorrect.
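
For contrast, here's the same arithmetic for the quantum version, again as my own sketch with the same placeholder utilities: all kids now exist for certain, so the averaging runs over everyone at once.

```python
# Same placeholder utilities as above, repeated so this runs standalone.
u_choc = 1.0
u_cake = 3.0

# Quantum version: all n kids exist with certainty, and only the first
# kid can receive cake. Branch-averaging now runs over all n kids.
n = 11  # try n = 1001
ev_cake = (u_cake + (n - 1) * 0.0) / n  # one cake, n-1 empty-handed kids
ev_choc = u_choc                        # everyone gets chocolate

print(f"cake: {ev_cake:.3f}, chocolate: {ev_choc:.3f}")
# Now cake wins only when u_cake > n * u_choc, not 2 * u_choc -- the two
# supposedly equivalent problems give different answers.
```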

I await comments from Stuart or anyone else who can figure this out.