Monty Hall Sleeping Beauty

A friend referred me to another paper on the Sleeping Beauty problem. It comes down on the side of the halfers.

I didn’t have the patience to finish it, because I think SB is a pointless argument about what “belief” means. If, instead of asking Sleeping Beauty about her “subjective probability”, you asked her to place a bet or take some action, everyone could agree what the best answer was. That it perplexes people is a sign that they’re talking nonsense, using words without agreeing on their meanings.

But we can make it more obvious what the argument is about by using a trick that works with the Monty Hall problem: add more doors. By doors, I mean days.

The Monty Hall Sleeping Beauty Problem is then:

• On Sunday she’s given a drug that sends her to sleep for a thousand years, and a coin is tossed.

• If the coin lands heads, Beauty is awakened and interviewed once.

• If the coin comes up tails, she is awakened and interviewed 1,000,000 times.

• After each interview, she’s given a drug that makes her fall asleep again and forget she was woken.

• Each time she’s woken up, she’s asked, “With what probability do you believe that the coin landed tails?”

The halfer position implies that she should still say 1/2 in this scenario.
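To make the per-awakening numbers concrete, here is a minimal simulation sketch (function and parameter names are mine, not part of the problem statement). It counts what fraction of all interviews happen on the tails branch:

```python
import random

# Sketch of the scaled-up experiment: heads -> 1 interview,
# tails -> 1,000,000 interviews, as in the setup above.

def per_awakening_tails_fraction(num_experiments=10_000, tails_wakings=1_000_000):
    tails_awakenings = 0
    total_awakenings = 0
    for _ in range(num_experiments):
        tails = random.random() < 0.5  # fair coin
        wakings = tails_wakings if tails else 1
        total_awakenings += wakings
        if tails:
            tails_awakenings += wakings
    return tails_awakenings / total_awakenings

print(per_awakening_tails_fraction())
```

With a fair coin the fraction converges to 1,000,000/1,000,001 ≈ 0.999999, the thirder’s per-awakening answer; the halfer’s 1/2 is instead the per-experiment frequency of tails.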

Does stating it this way make it clearer what the argument is about?

• I like Anthropic SB better.

SB has the following rules explained to her on Sunday.

You will be drugged to sleep now.
Then we will flip a coin.
On a tails, you will be woken up tomorrow and asked “What is the probability that the coin landed tails?”

Who still thinks that SB should assign 1/2 to the probability that the coin landed heads?

• In my video here I look at a lot of the ramifications of SB decisions: https://www.youtube.com/watch?v=aiGOGkBiWEo

What’s relevant here is the frequentist position. Imagine you do the SB experiment a thousand times in a row. If you tell SB “be correct the most often you are asked”, she will behave as a thirder. If you tell SB “be correct in the most experiments”, then she will behave as a halfer. So frequentism no longer converges to a unique subjective probability in the long run.
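The two scoring rules can be sketched in a few lines (names and defaults are mine; the standard problem’s two tails-wakings are assumed). SB commits to always guessing tails or always heads, and we tally her score both ways:

```python
import random

# Two frequentist scoring rules for the same committed guess.

def score(guess_tails, num_experiments=10_000, tails_wakings=2):
    per_awakening = 0   # "be correct the most often you are asked"
    per_experiment = 0  # "be correct in the most experiments"
    for _ in range(num_experiments):
        tails = random.random() < 0.5  # fair coin
        if guess_tails == tails:
            # a correct guess on tails is repeated at every waking
            per_awakening += tails_wakings if tails else 1
            per_experiment += 1
    return per_awakening, per_experiment
```

Always guessing tails maximizes `per_awakening` (tails-awakenings count double), while `per_experiment` is on average the same for either guess, so under the second rule SB is indifferent, which is the halfer’s 1/2.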

• No; you are asking her two different questions, so it is correct for frequentism to give different answers to the different questions.

• Of course. But the two questions are the same outside of anthropic situations; they are two extensions of the underdefined “how often was I right?” Or, if you prefer, the frequentist answer in anthropic situations depends on the exact question asked, showing that “anthropic probability” is not a well-defined concept.

• This isn’t a new idea. It’s mentioned in http://www.anthropic-principle.com/preprints/beauty/synthesis.pdf, for instance.

Also, I believe if you read the (detailed) arguments for each side, you’ll find it much harder to reduce them to disagreement over word meaning. Or at least that’s what I remember from when I looked at them.

• Last two links are paywalled.

• Usually “Monty Hall”?

• Oh, yeah. Too much D&D.

• It’s not clear to me exactly what your position is, so I will assume you’re a thirder. If this is not the case and I have misinterpreted your position, feel free to correct me at will.

I disagree with you, because I think that “subjective probability” is indeed what one should be asking about: only in this way can one believe different things depending on the bet made.

For example, let me attack your Monty-Halled SB:

• in an urn there are two white balls and a red one: one ball is drawn; if it is white, SB is awakened and interviewed once; if it is red, she is awakened and interviewed one million times;

• the sleeping beauty must decide beforehand whether she wants to bet on red or white. If she’s correct, she wins a million dollars.

If the thirder answer were always correct, one could calculate beforehand that the red branch gets a probability of ≈0.999998, so SB would always bet on red and lose, on average, to the ‘halfer’ beauty.
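For what it’s worth, the per-experiment expectation behind this claim is simple arithmetic (a sketch under the stated urn setup; the interview counts don’t enter, because the prize is paid once per experiment):

```python
# 2 white balls, 1 red; $1,000,000 prize for a correct pre-committed bet.

PRIZE = 1_000_000
p_white = 2 / 3   # white drawn -> one interview
p_red = 1 / 3     # red drawn   -> one million interviews

ev_bet_white = p_white * PRIZE  # expected winnings per experiment betting white
ev_bet_red = p_red * PRIZE      # expected winnings per experiment betting red

print(ev_bet_white, ev_bet_red)
```

Betting white is worth about $666,667 per experiment against about $333,333 for red, so a beauty who bets by per-experiment (halfer-style) odds outperforms one who bets by per-awakening (thirder-style) odds in this game.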

• the sleeping beauty must decide beforehand

You’ve changed the problem to suit your answer.

• Yes, but to be clear, my ‘answer’ is that there’s no universal right answer: whenever the question asked is about the single anthropic position, the thirders are correct; when it’s about the global structure of the branch they’re in, the halfers are correct.
The argument would have carried through even if I had not destroyed the symmetry between the branches: in that case both positions would have won on average, so there would have been no ‘obviously’ correct answer. But I think this way is clearer, because in the Monty Hall version one of the branches gets almost all the probability mass.

• Only one of those questions is asked in the problem proper. The other is the product of a poor rephrasing, or of somebody seeking the question for which their incorrect answer ceases to be incorrect.

• I don’t know / care to track the problem as it was originally formulated. If it’s as you say, then I wholeheartedly agree that the correct answer is 1/3.
It’s just nice to be able to reason correctly about this kind of anthropic question and to be aware that the answer changes (which is not a given in non-Bayesian takes on probability).

• AFAICT, the argument has nothing to do with the problem at all, and everything to do with defending “your” side.

My initial response was “halfer,” the naively obvious answer. Then, knowing that these problems always have a trick, I examined the precise phrasing of the question more closely, and “thirder” is clearly correct. That’s the -point- of the problem, and what makes it interesting: it’s designed to make you come to the wrong conclusion using naive logic, for the pure purpose of showing that the naive logic is, well, naive. We wouldn’t be discussing the problem if it didn’t have that property; if the naive solution weren’t wrong, it would be a completely uninteresting problem.

Spend some time guessing the teacher’s password, people, before you marry your answer and then proceed to spend hours trying to invent a novel reason why your answer must be the correct one. The problem exists -because- it defies your expectations. Instead of trying to justify your expectations, take a look at what the problem is trying to teach you, because that is what it was designed to do.

TLDR? The Sleeping Beauty problem was designed to impart a lesson about naive logic. Quit fighting the lesson.

• Care to elaborate?

You just woke up. You don’t know if the coin was heads or tails, and you have no further information. You knew it was 50-50 before going to sleep. No new information, no new answer. I don’t see what the “twist” is. In Monty Hall, there is an additional information input: the door the host opens never has the prize behind it.

Or, another perspective: a perfect erasure of someone’s memories and restoration of their body to the pre-event state is exactly the same as if the event in question never occurred. So delete the 1 million from consideration. It’s just 1 interview post-waking. Heads or tails?

• You’ve woken up. That, itself, is information.

Your latter paragraph, like so many 50-50 justifications, replaces the actual question with a different one.