That’s why I specified that she changed her mind only due to an unlikely chance recollection of a conversation from years ago, which she knows would be unlikely to have happened on any previous awakening.
Hmm. This opens a tangent discussion about determinism and whether the amnesia is supposed to return her to exactly the same state as before, but thankfully we do not need to go there. We can just assume that a prince charming bursts into the room to rescue the beauty or something.
The setting is only “broken up” when she decides to leave the room, but she can think about probabilities before that. Are you saying that once she decides to leave the room, her probabilities for aspects of the external world should change?
I’m saying that some of her probabilities become meaningful, even though they were not before. Tails&Monday, Tails&Tuesday, Heads&Monday become three elementary outcomes for her when she is suddenly not participating in the experiment. But while she is going along with the experiment, Tails&Tuesday always follows Tails&Monday; these outcomes are causally connected, and if you treat them as if they are not, you arrive at the wrong conclusion.
It’s easy to show that an SB who on every day of the experiment has a small chance of being interrupted (or of changing her mind and walking away) can correctly guess Tails with about 2⁄3 accuracy when she was interrupted, but only with 1⁄2 accuracy otherwise.
import random

def classic(heads_chance=0.5):
    # NOTE: classic() is not shown in the quoted code; this is an assumed
    # minimal reconstruction: Heads gives one awakening, Tails gives two.
    coin = 'Heads' if random.random() < heads_chance else 'Tails'
    days = ['Monday'] if coin == 'Heads' else ['Monday', 'Tuesday']
    return days, coin

def interruption(heads_chance=0.5, interrupt_chance=0.001):
    days, coin = classic(heads_chance=heads_chance)
    for day in days:
        if interrupt_chance > random.random():
            return day, coin
    return None, coin

interrupted_coin_guess = []
not_interrupted_coin_guess = []
for n in range(100000):
    day, coin = interruption()
    if day is not None:
        interrupted_coin_guess.append(coin == 'Heads')
    else:
        not_interrupted_coin_guess.append(coin == 'Heads')
print(interrupted_coin_guess.count(True)/len(interrupted_coin_guess))
# 0.3006993006993007
print(not_interrupted_coin_guess.count(True)/len(not_interrupted_coin_guess))
# 0.5017374846029823
The python code for interruption() doesn’t quite make sense to me.
for day in days:
    if interrupt_chance > random.random():
        return day, coin
Suppose that day is Tuesday here. Then the function returns Tuesday, Tails, which represents that on a Tails Tuesday Beauty wakes up and is rescued by Prince Charming. But in this scenario she also woke up on Monday and was not rescued. This day still happened and somehow it needs to be recorded in the overall stats for the answer to be accurate.
This particular outcome is extremely rare: less than a tenth of a percent. It doesn’t contribute much to the results:
interrupted_coin_guess = []
not_interrupted_coin_guess = []
for i in range(100000):
    day, coin = interruption()
    if day is not None:
        interrupted_coin_guess.append(coin == 'Heads')
        if day == 'Tuesday':
            not_interrupted_coin_guess.append(coin == 'Heads')
    else:
        not_interrupted_coin_guess.append(coin == 'Heads')
print(interrupted_coin_guess.count(True)/len(interrupted_coin_guess))
# 0.363013698630137
print(not_interrupted_coin_guess.count(True)/len(not_interrupted_coin_guess))
# 0.501831795159256
The point is that specifically in the rare outcomes where the Beauty is interrupted (some low-probability random event happens and the beauty notices it) she can guess Tails with 2⁄3 accuracy (actually a bit worse than that; the rarer the event, the closer it gets to 2⁄3) per experiment. This she cannot do when she is not interrupted.
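The “a bit worse than 2⁄3” behaviour can be written out analytically. This is my own sketch of the per-experiment accounting, not code from the thread; `p` is the per-awakening chance of the rare event:

```python
def tails_guess_accuracy(p):
    # P(event fires at least once | Heads): one awakening
    fired_heads = p
    # P(event fires at least once | Tails): two awakenings, 2p - p**2
    fired_tails = 1 - (1 - p)**2
    # among experiments where the event fired, the fraction that were Tails
    return fired_tails / (fired_heads + fired_tails)

for p in (0.001, 0.1, 0.5, 0.99):
    print(p, tails_guess_accuracy(p))
```

The accuracy approaches 2⁄3 as the event gets rarer and falls back toward 1⁄2 as it becomes near-certain, which matches both simulation results quoted above.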
Sure, it’s rare with the given constants, but we should also be able to run the game with interrupt_chance = 0.1, 0.5, 0.99, or 1.0, and the code should output a valid answer.
Naively, if an interruption increases the probability of the coin being Tails, then not being interrupted should increase the probability of the coin being Heads. But with the current python code, I don’t see that effect, trying with interrupt_chance of 0.1, 0.5, or 0.9.
Sure, it’s rare with the given constants, but we should also be able to run the game with interrupt_chance = 0.1, 0.5, 0.99, or 1.0, and the code should output a valid answer.
Well, if you want to use a custom chance then you shouldn’t actually count the not-interrupted Monday if Tuesday was interrupted. You see, here we count the beauty being interrupted per experiment. If we count a not-interrupted Monday as a separate experiment we mess up our metrics, which isn’t a big deal with a very low chance of interruption but becomes relevant with custom chances.
Naively, if an interruption increases the probability of the coin being Tails, then not being interrupted should increase the probability of the coin being Heads.
If you do not count the non-interrupted Monday when Tuesday is interrupted, that’s indeed what happens, and it is clearly visible at high interruption chances. After all, if there is a highly probable event that can happen at every awakening, it’s much more likely not to happen if you are awakened only once. The probability of Heads on not being interrupted increases from 0.5 and approaches 1 as the interruption chance increases. This would be extremely helpful; sadly, this information is unavailable. The Beauty can’t be sure which highly likely event didn’t happen in the experiment, due to the memory loss; she only knows which event didn’t happen in this awakening.
This code-based approach is a very concrete approach to the problem, by the way, so thank you.
if you want to use a custom chance then you shouldn’t actually count the not-interrupted Monday if Tuesday was interrupted.
Sure. So let’s go back to the first way you had of calculating this:
for n in range(100000):
    day, coin = interruption()
    if day is not None:
        interrupted_coin_guess.append(coin == 'Heads')
    else:
        not_interrupted_coin_guess.append(coin == 'Heads')
print(interrupted_coin_guess.count(True)/len(interrupted_coin_guess))
# 0.3006993006993007
The probability this is calculating is a per-experiment probability that the experiment will be interrupted. But Beauty doesn’t ever get the information “this experiment will be interrupted”. Instead, she experiences, or doesn’t experience, the interruption. It’s possible for her to not experience an interruption, even though she will later be interrupted, the following day. So this doesn’t seem like a helpful calculation from Beauty’s perspective, when Prince Charming busts in through the window.
The beauty gets the information “This experiment is interrupted” when she observes the interruption on her awakening. She never gets the information “This experiment is not to be interrupted”, because the interruption could happen on the other awakening.
This means that specifically in the awakening where the experiment is interrupted (some random low-probability event happens), the Beauty can determine how the coin landed in this particular experiment better than chance, contrary to other awakenings.
This opens a tangent discussion about determinism and whether the amnesia is supposed to return her to exactly the same state as before, but thankfully we do not need to go there.
I’m saying that some of her probabilities become meaningful, even though they were not before. Tails&Monday, Tails&Tuesday, Heads&Monday become three elementary outcomes for her when she is suddenly not participating in the experiment. But while she is going along with the experiment, Tails&Tuesday always follows Tails&Monday; these outcomes are causally connected, and if you treat them as if they are not, you arrive at the wrong conclusion.
Some of her probabilities “become meaningful” in the sense that she is now more interested in them from a practical perspective. But they were meaningful beliefs before by any normal rational perspective.
I’m not sure how to continue this discussion. Your claim that Beauty’s probabilities for Tails&Monday, Tails&Tuesday, and Heads&Monday need not sum to one is completely contrary to common sense, the formal theory of probability, and the way probability is used in numerous practical applications. What can I say when you deny the basis of rational discussion?
I think you have become mentally trapped by the Halfer position, and are blind to the fact that in trying to defend it you’ve adopted a position that is completely absurd. This may perhaps be aided by an abstract view of the problem, in which you never really consider Beauty to be an actual person. No actual person thinks in the absurd way you are advocating.
Regarding your simulation program, you build in the Halfer conclusion by appending only one guess when the experiment is not interrupted, even if the coin lands Tails.
To briefly go there… returning her to the exactly same state would violate the no-cloning theorem of quantum mechanics.
The state doesn’t have to be identical down to the quantum level, just similar enough that SB has the same thoughts for the same reasons, and it’s quite possible that QM isn’t required for that. Nevertheless, let’s not pursue this line of inquiry any further.
I’m not sure how to continue this discussion. Your claim that Beauty’s probabilities for Tails&Monday, Tails&Tuesday, and Heads&Monday need not sum to one is completely contrary to common sense, the formal theory of probability, and the way probability is used in numerous practical applications. What can I say when you deny the basis of rational discussion?
I claim that they indeed need to sum to 1 for us to use formal probability theory in this setting, as the Kolmogorov axioms require that the probability of the whole sample space equal one. However, they do not, because they are not three independent elementary outcomes as long as the Beauty participates in the experiment. Thus we can’t define a probability space, and so can’t lawfully use the mathematical apparatus of formal probability theory.
I don’t think it’s fair to say that I deny the basis of rational discussion or have adopted a completely absurd position. Sometimes maps do not correspond to the territory. This includes mathematical models. There are settings in which 2+2 is not 4. There are settings where the operation of “addition” cannot be defined. Likewise, there are settings where the probabilities of some events cannot be defined. I believe Mikaël Cozic was one of the first to notice that the SB setting is problematic for certain probabilities, in his paper on the Double Halfer position, and the idea wasn’t dismissed as a priori absurd.
I think you have become mentally trapped by the Halfer position
Actually, as you may see from the post, I’m a Halfer for the incubator SB and a Double Halfer for the classic one. And I quite understand what the Thirder position in the classic problem points to, and do not have much problem with the Anthropical Motte. You, on the contrary, claimed that all the arguments in favour of 1⁄2 are bad. I think, between the two of us, you can more justifiably be claimed to be mentally “trapped in a position”.
However, I don’t think this is a fruitful direction for the discussion. I understand how ridiculous and absurd I look from your perspective, because that’s exactly how you look from mine. I believe you can do the same. So let’s focus on our empathy instead of outgroup mentality and keep the discussion in a more respectful manner.
Regarding your simulation program, you build in the Halfer conclusion by appending only one guess when the experiment is not interrupted, even if the coin lands Tails
Of course I’m appending only one guess when the experiment is not interrupted; after all, I’m likewise appending only one guess when the experiment is interrupted. Here I’m talking about the Anthropical Bailey: the ability to actually guess the coin side better than chance per experiment, not just taking advantage of a specific scoring rule.
In any case, this is beside the point. The simulation shows how, just by ceasing to participate in the experiment, SB’s probabilities for aspects of the external world should change, no matter how counterintuitive that may sound. When the experiment is interrupted, Tails&Monday, Tails&Tuesday and Heads&Monday are not causally connected anymore, their sum becomes 1, and the Thirders’ mathematical model becomes applicable: the Beauty correctly guesses Tails 2⁄3 of the time even though we are adding only one guess per experiment.
I’m not trying to be disrespectful here, just trying to honestly state what I think (which I believe is more respectful than not doing so).
If I understand your position correctly, you think Beauty should rationally think as follows:
1. Wakes up, notices the experimenters are not here to tell her it’s Wednesday and the experiment is over. Concludes it’s Monday or Tuesday.
2. Thinks to herself, “so, I wonder if the coin landed Heads?”.
3. Thinks a bit, and decides the probability of Heads is 1⁄2.
4. Looks up and sees a dead rat crash to the ground just outside the window of her room. Thinks, “my god! I guess an eagle must have caught and killed the rat, and then lost its grip on it while flying away, and it landed outside my window!”
5. Says to herself, “you know, death can come at any time...”
6. Decides that wasting time on this experiment is not the right thing to do, and instead she’ll leave and do something worthwhile.
7. Happens to ask herself again what the probability is that the coin landed Heads.
8. After thinking a bit, she decides the probability of Heads is now 1⁄3.
I think the combination of steps (3) and (8) is absurd. This is not how reason and reality work.
Here is how I think Beauty should rationally think:
1. Beauty notices the experimenters are not here to tell her it’s Wednesday and the experiment is over. Concludes it’s Monday or Tuesday.
2. Thinks to herself, “so, I wonder if the coin landed Heads?”
3. (optional) Entertains the idea that her awakening itself is evidence in favor of Tails, but realizes that this would be true only if her current experience were randomly sampled from three independent outcomes Tails&Monday, Tails&Tuesday and Heads&Monday, which isn’t true for this experiment. Tails&Tuesday necessarily follows Tails&Monday, and the causal process that determines her state isn’t a random sample from these outcomes.
4. Concludes that she doesn’t have any evidence that would distinguish an outcome where the coin landed Heads from an outcome where the coin landed Tails, and thus keeps her prior of 1⁄2.
5. (either) Suddenly a prince bursts through the door and explains that he managed to overcome the defenses of the evil scientists keeping her here in an unlikely feat of martial prowess and is rescuing her now.
   (or) The beauty has second thoughts about participating in the experiment. She has good reasons to stay, but some part of her also wants to leave. Thankfully there is a random number generator in her room (she has another coin, for example), capable of outputting numbers in some huge interval. She thinks of a number and decides that if the generator outputs exactly that number she will leave the room; otherwise she stays. The generator outputs the number the beauty guessed. And she leaves.
   (or) Suddenly the beauty has a panic attack that makes her leave the room immediately. She remembers that the scientists told her that the experimental sleeping pills she took on Sunday have a rare side effect that can manifest, in 0.1% of cases, on any day of the next week.
   (or) Any other event that the beauty is rightfully confident to be improbable, but possible on both Monday and Tuesday, happens. Actually, even leaving the room is not necessary. The symmetry, and thus the setting of the experiment, is broken as long as the beauty gets this new evidence. Mind you, this wouldn’t work with an observation that has a high probability of happening; you can check it yourself with the code I provided. Likewise, it also wouldn’t work if the beauty just observes any number on the random generator without precommitting to observing that number in particular, as it wouldn’t be a rare event from her perspective. See these two posts: https://www.lesswrong.com/posts/u7kSTyiWFHxDXrmQT/sleeping-beauty-resolved https://www.lesswrong.com/posts/8MZPQJhMzyoJMYfz5/sleeping-beauty-not-resolved
6. And now, due to this new evidence, the Beauty updates to 1⁄3. Because now there is actually a random sampling process going on, and she is twice as likely to observe the low-probability outcome she has just observed when the coin is Tails.
Do steps 4 to 8 in your example count as such evidence? They may. If it was indeed a random occurrence and not a deliberately staged performance, the rat crashing to the ground near her window is relevant evidence! If this is surprising to you, notice that you’ve updated to 1⁄3 without any evidence whatsoever.
And yes, this is exactly how reason and reality work. Which I’ve shown with an actual Python program faithfully replicating the logic of the experiment, a program that can be run on a computer in our reality. Reasoning like this, the beauty can correctly guess Tails with 2⁄3 probability per experiment without any shenanigans with the scoring rule.
Any other event that the beauty is rightfully confident to be improbable, but possible on both Monday and Tuesday, happens.
And since this happens absolutely every time she wakes, Beauty should always assess the probability of Heads as 1⁄3.
There’s always a fly crawling on the wall in a random direction, unlikely to be the same on Monday and Tuesday, or a stray thought about aardvarks, or a dimming of the light from the window as a cloud passes overhead, or any of millions of other things entering her consciousness in ways that won’t be the same Monday and Tuesday.
If you’ve read the posts you link to, you must realize that this is central to my argument for why Heads has probability 1⁄3.
So why are you a Halfer? One reason seems to be explained by your following comment:
it also wouldn’t work if the beauty just observes any number on the random generator without precommitting to observing it in particular, as it wouldn’t be a rare event from her perspective
But this is not how probability theory works. A rare event is a rare event. You’ve just decided to define another event, “random number generator produces number I’d guessed beforehand”, and noted that that event didn’t occur. This doesn’t change the fact that the random number generator produced a number that is unlikely to be the same as that produced on another day.
But the more fundamental reason seems to be that you don’t actually want to find the answer to the Sleeping Beauty problem. You want to find the answer to a more fantastical problem in which Beauty can be magically duplicated exactly and kept in a room totally isolated from the external world, and hence guaranteed to have no random experiences, or in which Beauty is a computer program that can be reset to its previous state, and have its inputs totally controlled by an experimenter.
The actual Sleeping Beauty problem is only slightly fantastical—a suitable memory erasing drug might well be discovered tomorrow, and nobody would think this was an incredible discovery that necessitates changing the foundations of probability theory or our notions of consciousness. People already forget things. People already have head injuries that make them forget everything for some period of time.
Of course, there’s nothing wrong with thinking about the fantastical problem. But it’s going to be hard to answer it when we haven’t yet answered questions such as whether a computer program can be conscious, and if so, whether consciousness requires actually running the program, or whether it’s enough to set up the (deterministic) program on the computer, so that it could be run, even though we don’t actually push the Start button. Without answers to such questions, how can one have any confidence that you’re reasoning correctly in this situation that is far, far outside ordinary experience?
But in any case, wouldn’t it be interesting and useful to first establish the answer to the actual Sleeping Beauty problem?
And since this happens absolutely every time she wakes, Beauty should always assess the probability of Heads as 1⁄3.
Nope. It doesn’t work this way. There is an important difference between the probability of a specific low-probability event happening and the probability of some low-probability event from a huge class of events happening. Unsurprisingly, the latter is much more probable than the former, and the trick works only with low-probability events. As I’ve explicitly said, and as you could’ve checked yourself, since I provided you the code for it.
It’s easy to see why something that happens absolutely every time she wakes doesn’t help at all. You see, 50% of the coin tosses are Heads. If the Beauty correctly guessed Tails in 2⁄3 of all experiments, that would be a contradiction. But it’s possible for the Beauty to correctly guess Tails in 2⁄3 of some subset of experiments. To get the 2⁄3 score she needs some kind of evidence that occurs more often when the coin is Tails than when it is Heads, not all the time, and then to guess only when she gets this evidence.
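This can be checked with a direct Bayes calculation. My own sketch, treating the evidence as “the event fired on at least one awakening of this experiment”: an event certain to occur has likelihood 1 under both coins and moves nothing, while a rare one shifts belief toward Tails.

```python
def posterior_tails(p_event):
    # likelihood of "event fired at least once" under each coin, prior 1/2
    like_heads = p_event               # one awakening
    like_tails = 1 - (1 - p_event)**2  # two awakenings
    return 0.5 * like_tails / (0.5 * like_heads + 0.5 * like_tails)

print(posterior_tails(1.0))    # 0.5: a guaranteed observation teaches nothing
print(posterior_tails(0.001))  # close to 2/3
```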
There’s always a fly crawling on the wall in a random direction, unlikely to be the same on Monday and Tuesday, or a stray thought about aardvarks, or a dimming of the light from the window as a cloud passes overhead, or any of millions of other things entering her consciousness in ways that won’t be the same Monday and Tuesday.
This is irrelevant unless the Beauty somehow knows where the fly is supposed to be on Monday and where on Tuesday. She can try guessing Tails when the fly is in a specific place she precommitted to, hoping that the causal process that determines the fly’s position is close enough to placing the fly there with the same low probability each day, but it’s not guaranteed to work.
If you’ve read the posts you link to, you must realize that this is central to my argument for why Heads has probability 1⁄3.
I’ve linked two posts. You need to read the second one as well, to understand the mistake in the reasoning of the first.
But this is not how probability theory works. A rare event is a rare event. You’ve just decided to define another event, “random number generator produces number I’d guessed beforehand”, and noted that that event didn’t occur. This doesn’t change the fact that the random number generator produced a number that is unlikely to be the same as that produced on another day.
This is exactly how probability theory works. The event “the random number generator produced some number” has very high probability. The event “the random number generator produced this specific number” has low probability. Which event we are talking about depends on whether the number was specified beforehand. It can be confusing if you forget that probabilities are in the mind: it’s about the Beauty’s decision-making process, not the metaphysical essence of randomness.
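The distinction can be simulated. My sketch, using a hypothetical 0-to-99 generator and a precommitted guess of 71: the precommitted event is genuinely rare, while the post-hoc event holds every time.

```python
import random

N = 100000
guess = 71  # committed to before looking at any output

precommitted = sum(random.randrange(100) == guess for _ in range(N))

post_hoc = 0
for _ in range(N):
    x = random.randrange(100)
    # "the generator produced the number I just observed" is defined
    # only after looking, so it is true on every single draw
    if x == x:
        post_hoc += 1

print(precommitted / N)  # close to 0.01
print(post_hoc / N)      # 1.0
```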
You want to find the answer to a more fantastical problem in which Beauty can be magically duplicated exactly and kept in a room totally isolated from the external world, and hence guaranteed to have no random experiences
The fact that Beauty is unable to tell which day it is or whether she has been awakened before is an important condition of the experiment.
But this doesn’t mean that her experiences on Monday and Tuesday, or on Heads and Tails, are necessarily exactly the same; she just has to be unable to tell which is which. Beauty can be placed in differently colored rooms on Monday&Tails, Monday&Heads and Tuesday&Tails. All the furniture can be completely different as well. There can even be a ciphered message describing the result of the coin toss. Unless she knows how to break the cipher, or knows the pattern in the colors/furniture, this doesn’t help her. The mathematical model is still the same.
In a repeated experiment the Beauty can try executing a strategy, but this requires precommitment to that strategy. Without such precommitment she will not be able to extract useful information from all the differences between the outcomes.
But it’s going to be hard to answer it when we haven’t yet answered questions such as whether a computer program can be conscious, and if so, whether consciousness requires actually running the program, or whether it’s enough to set up the (deterministic) program on the computer, so that it could be run, even though we don’t actually push the Start button.
How is the consciousness angle relevant here? Are you under the impression that probability theory works differently depending on whether we are reasoning about conscious or unconscious objects?
Nope. It doesn’t work this way. There is an important difference between the probability of a specific low-probability event happening and the probability of some low-probability event from a huge class of events happening.
In Bayesian probability theory, it certainly does work this way. To find the posterior probability of Heads, given what you have observed, you combine the prior probability with the likelihood for Heads vs. Tails based on everything that you have observed. You don’t say, “but this observation is one of a large class of observations that I’ve decided to group together, so I’ll only update based on the probability that any observation in the group would occur (which is one for both Heads and Tails in this situation)”.
You’re arguing in a frequentist fashion. A similar sort of issue for a frequentist would arise if you flipped a coin 9 times and found that 2 of the flips were Heads. If you then ask the frequentist what the p-value is for testing the hypothesis that the coin was fair, they’ll be unable to answer until you tell them whether you pre-committed to flipping the coin 9 times, or to flipping it until 2 Heads occurred (they’ll be completely lost if you tell them you just flipped until your finger got tired). Bayesians think this is ridiculous.
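The stopping-rule dependence described here can be made concrete with a quick calculation. These are my own numbers for the 2-heads-in-9-flips example, not figures from the comment; the same data yields two different one-sided p-values depending on the declared design:

```python
from math import comb

n, heads = 9, 2

# Design A: precommitted to exactly 9 flips.
# p-value = P(at most 2 heads in 9 fair flips)
p_fixed_n = sum(comb(n, k) for k in range(heads + 1)) / 2**n

# Design B: precommitted to flip until 2 heads appeared.
# p-value = P(the second head needs 9 or more flips)
#         = P(at most 1 head in the first 8 flips)
p_until_heads = sum(comb(n - 1, k) for k in range(heads)) / 2**(n - 1)

print(p_fixed_n)      # 46/512 = 0.08984375
print(p_until_heads)  # 9/256  = 0.03515625
```

A Bayesian likelihood calculation, by contrast, gives the same answer under both designs, which is the point being made.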
Of course, there are plenty of frequentists in the world, but I presume they are uninterested in the Sleeping Beauty problem, since to a frequentist, Beauty’s probability for Heads is a meaningless concept, since they don’t think probability can be used to represent degrees of belief.
How is the consciousness angle relevant here? Are you under the impression that probability theory works differently depending on whether we are reasoning about conscious or unconscious objects?
I think if Beauty isn’t a conscious being, it doesn’t make much sense to talk about how she should reason regarding philosophical arguments about probability.
I suspect we’re at a bit of an impasse with this line of discussion. I’ll just mention that probability is supposed to be useful. And if you extend the problem to allow Beauty to make bets, in various scenarios, the bets that make Beauty the most money are the ones she will make by assessing the probability of Heads to be 1⁄3 and then applying standard decision theory. Halfers are losers.
You are making a fascinating mistake, and I may make a separate post about it, even though it’s not particularly related to anthropics and is just a curious detail of probability theory, which in retrospect I realize I was myself confused about. I’d recommend you meditate on it for a while. You already have all the information required to figure it out. You just need to switch yourself from “argument mode” to “investigation mode”.
Here are a couple more hints that you may find useful.
1) Suppose you observed the number 71 on a random number generator that produces numbers from 0 to 99.
Is it:
- a 1-in-100 occurrence, because the number is exactly 71?
- a 1-in-50 occurrence, because the number consists of the two digits 7 and 1?
- a 1-in-10 occurrence, because the first digit is 7?
- a 1-in-2 occurrence, because the number is greater than or equal to 50?
- a 1-in-n occurrence, because it’s possible to come up with some other arbitrary rule?
What determines which case is actually true?
2) Suppose you observed a list of numbers of length n, produced by this random number generator. The probability that exactly this series is produced is 1/100^n.
At what n are you completely shocked and in total disbelief about your reality? After all, you’ve just observed an event that your model of reality claims to be extremely improbable.
Would you be more shocked if all the numbers in the list were the same? If so, why?
Can you now produce arbitrarily improbable events just by having a random number generator? In what sense do these events have probability 1/100^n if you can witness as many of them as you want, any time?
You do not need to tell me the answers. It’s just something I believe will be helpful for you to honestly think about.
To find the posterior probability of Heads, given what you have observed, you combine the prior probability with the likelihood for Heads vs. Tails based on everything that you have observed.
Here is the last hint; actually, I have a feeling that this just spoils the solution outright, so it’s in rot13:
Gur bofreingvbaf “Enaqbz ahzore trarengbe cebqhprq n ahzore” naq “Enaqbz ahzore trarengbe cebqhprq gur rknpg ahzore V’ir thrffrq” ner qvssrerag bofreingvbaf. Lbh pna bofreir gur ynggre bayl vs lbh’ir thrffrq n ahzore orsberunaq. Lbh znl guvax nobhg nf nal bgure novyvgl gb rkgenpg vasbezngvba sebz lbhe raivebazrag.
Fhccbfr jura gur pbva vf Gnvyf gur ebbz unf terra jnyyf naq jura vg’f Urnqf gur ebbz unf oyhr jnyyf. N crefba jub xabjf nobhg guvf naq vfa’g pbybe oyvaq pna thrff gur erfhyg bs n pbva gbff cresrpgyl. N pbybe oyvaq crefba jub xabjf nobhg guvf ehyr—pna’g. Rira vs gurl xabj gung gur ebbz unf fbzr pbybe, gurl ner hanoyr gb rkrphgr gur fgengrtl “thrff Gnvyf rirel gvzr gur ebbz vf terra”.
N crefba jub qvqa’g thrff n ahzore orsberunaq qbrfa’g cbffrff gur novyvgl gb bofreir rirag “Enaqbz ahzore trarengbe cebqhprq gur rknpg ahzore V’ir thrffrq” whfg nf n pbybe oyvaq crefba qbrfa’g unir na novyvgl gb bofreir na rirag “Gur ebbz vf terra”.
I think if Beauty isn’t a conscious being, it doesn’t make much sense to talk about how she should reason regarding philosophical arguments about probability.
The Beauty doesn’t need to experience qualia or be self-aware to have a meaningful probability estimate.
I’ll just mention that probability is supposed to be useful. And if you extend the problem to allow Beauty to make bets, in various scenarios, the bets that make Beauty the most money are the ones she will make by assessing the probability of Heads to be 1⁄3 and then applying standard decision theory.
Betting arguments are not particularly helpful. They describe the motte: a specific scoring rule, not the actual ability to guess the outcome of the coin toss in the experiment. As I’ve written in the post itself:
As long as we do not claim that this fact gives an ability to predict the result of the coin toss better than chance, then we are just using different definitions, while agreeing on everything. We can translate from Thirder language to mine and back without any problem. Whatever betting schema is proposed, all other things being equal, we will agree to the same bets.
That is, if betting happens every day, Halfers and Double Halfers need to weight the odds by the number of bets, while Thirders already include this weighting in their definition of “probability”. On the other hand, if only one bet per experiment counts, it is suddenly the Thirders who need to discount this weighting from their “probability”, while Halfers and Double Halfers are fine by default.
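A minimal sketch of this weighting claim (my own construction, with an assumed payout convention of winning `win_amount` per unit staked): betting one unit on Tails at every awakening breaks even at 1:2 odds, while one bet per experiment breaks even at even odds.

```python
import random

def average_payoff(win_amount, per_awakening, trials=200000):
    # Bet 1 unit on Tails at each allowed betting opportunity;
    # win `win_amount` per bet if Tails, lose the unit if Heads.
    total = 0
    for _ in range(trials):
        tails = random.random() < 0.5
        n_bets = (2 if tails else 1) if per_awakening else 1
        total += n_bets * (win_amount if tails else -1)
    return total / trials

print(average_payoff(0.5, per_awakening=True))   # close to 0
print(average_payoff(1.0, per_awakening=False))  # close to 0
```

Both camps agree on which bets to accept under either schema; they only disagree about which word to attach to the weighting.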
The Python code for interruption() doesn’t quite make sense to me. Suppose that day is Tuesday here. Then the function returns Tuesday, Tails, which represents that on a Tails Tuesday Beauty wakes up and is rescued by Prince Charming. But in this scenario she also woke up on Monday and was not rescued. That day still happened, and somehow it needs to be recorded in the overall stats for the answer to be accurate.
This particular outcome is extremely rare: less than a tenth of a percent. It doesn’t contribute much to the results:
The point is that specifically in the rare outcomes where the Beauty is interrupted (some improbable random event happens and Beauty notices it), she can guess Tails with about 2⁄3 accuracy per experiment (actually a bit worse than that; the rarer the event, the closer it is to 2⁄3), which she cannot do when she is not interrupted.
Sure, it’s rare with the given constants, but we should also be able to run the game with interrupt_chance = 0.1, 0.5, 0.99, or 1.0, and the code should output a valid answer.
Naively, if an interruption increases the probability of the coin being Tails, then not being interrupted should increase the probability of the coin being Heads. But with the current Python code, I don’t see that effect, trying with interrupt_chance of 0.1, 0.5, or 0.9.
Well, if you want to use a custom chance, then you shouldn’t actually count the non-interrupted Monday if the Tuesday was interrupted. You see, here we count the Beauty being interrupted per experiment. If we count a non-interrupted Monday as a separate experiment, we mess up our metrics, which isn’t a big deal with a very low chance of interruption but becomes relevant with custom chances.
If you do not count the non-interrupted Monday when the Tuesday is interrupted, that is indeed what happens, and it is clearly visible at high interruption chances. After all, if there is a highly probable event that can happen at every awakening, it’s much more likely not to happen if you are awakened only once. The probability of Heads on not being interrupted starts from 0.5 and approaches 1 as the interruption chance increases. This would be extremely helpful; sadly, this information is unavailable. The Beauty can’t be sure which highly likely event didn’t happen in the experiment, due to the memory loss—only which event didn’t happen in this awakening.
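Counting one record per experiment at a higher interruption chance shows both effects at once. A minimal sketch; the classic() helper is my reconstruction, since the original isn’t quoted here:

```python
import random

# Assumed reconstruction of classic(): one awakening on Heads, two on Tails.
def classic(heads_chance=0.5):
    coin = 'Heads' if random.random() < heads_chance else 'Tails'
    days = ['Monday'] if coin == 'Heads' else ['Monday', 'Tuesday']
    return days, coin

def interruption(heads_chance=0.5, interrupt_chance=0.5):
    days, coin = classic(heads_chance=heads_chance)
    for day in days:
        if random.random() < interrupt_chance:
            return day, coin  # the experiment was interrupted on this awakening
    return None, coin

interrupted, not_interrupted = [], []
for _ in range(100_000):
    day, coin = interruption(interrupt_chance=0.5)
    # One record per experiment: a non-interrupted Monday followed by an
    # interrupted Tuesday counts once, as an interrupted experiment.
    (interrupted if day is not None else not_interrupted).append(coin)

print(interrupted.count('Heads') / len(interrupted))          # about 0.4
print(not_interrupted.count('Heads') / len(not_interrupted))  # about 0.67
```

At interrupt_chance = 0.5, being interrupted shifts the coin toward Tails (Heads frequency 0.4) and not being interrupted shifts it toward Heads (Heads frequency 2⁄3), matching the per-experiment analysis.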
This code-based approach is a very concrete approach to the problem, by the way, so thank you.
Sure. So let’s go back to the first way you had of calculating this:
The probability this is calculating is a per-experiment probability that the experiment will be interrupted. But Beauty doesn’t ever get the information “this experiment will be interrupted”. Instead, she experiences, or doesn’t experience, the interruption. It’s possible for her to not experience an interruption, even though she will later be interrupted, the following day. So this doesn’t seem like a helpful calculation from Beauty’s perspective, when Prince Charming busts in through the window.
The Beauty gets the information “this experiment is interrupted” when she observes the interruption on her awakening. She never gets the information “this experiment is not interrupted”, because the interruption could still happen on the other awakening.
This means that specifically in the awakening where the experiment is interrupted (some improbable random event happens), the Beauty can determine how the coin landed in this particular experiment better than chance, contrary to other awakenings.
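The “better than chance, but a bit worse than 2⁄3” figure can be computed exactly. A sketch of the Bayes calculation, assuming an interruption chance of q on each awakening:

```python
from fractions import Fraction

def p_tails_given_interrupted(q):
    # Chance the experiment gets interrupted, per coin result: Heads gives
    # one awakening, Tails gives two, with interruption chance q on each.
    p_heads = q
    p_tails = 1 - (1 - q) ** 2
    return (Fraction(1, 2) * p_tails) / (
        Fraction(1, 2) * p_heads + Fraction(1, 2) * p_tails)

print(p_tails_given_interrupted(Fraction(1, 1000)))  # 1999/2999, just under 2/3
print(p_tails_given_interrupted(Fraction(1, 2)))     # 3/5
```

As q shrinks, the posterior approaches 2⁄3 from below, which is the “rarer the event, the closer to 2⁄3” behavior described above.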
This opens a tangent discussion about determinism and whether the amnesia is supposed to return her to the exactly the same state as before, but thankfully we do not need to go there.
To briefly go there… returning her to the exactly same state would violate the no-cloning theorem of quantum mechanics. See https://en.wikipedia.org/wiki/No-cloning_theorem
I’m saying that some of her probabilities become meaningful, even though they were not before. Tails&Monday, Tails&Tuesday, Heads&Monday become three elementary outcomes for her when she is suddenly not participating in the experiment. But when she is going along the experiment, Tails&Tuesday always follows Tails&Monday—these outcomes are causally connected and if you treat them as if they are not you arrive to the wrong conclusion.
Some of her probabilities “become meaningful” in the sense that she is now more interested in them from a practical perspective. But they were meaningful beliefs before by any normal rational perspective.
I’m not sure how to continue this discussion. Your claim that Beauty’s probabilities for Tails&Monday, Tails&Tuesday, and Heads&Monday need not sum to one is completely contrary to common sense, the formal theory of probability, and the way probability is used in numerous practical applications. What can I say when you deny the basis of rational discussion?
I think you have become mentally trapped by the Halfer position, and are blind to the fact that in trying to defend it you’ve adopted a position that is completely absurd. This may perhaps be aided by an abstract view of the problem, in which you never really consider Beauty to be an actual person. No actual person thinks in the absurd way you are advocating.
Regarding your simulation program, you build in the Halfer conclusion by appending only one guess when the experiment is not interrupted, even if the coin lands Tails.
The state doesn’t have to be the same down to the quantum level, just the same to the point that SB has the same thoughts for the same reasons, and it’s quite possible that QM isn’t required for that. Nevertheless, let’s not pursue this line of inquiry any further.
I claim that they would indeed need to sum to 1 for us to be able to use formal probability theory in this setting, as the Kolmogorov axioms require that the probability of the whole sample space equal one. However, they do not, as they are not three independent elementary outcomes as long as the Beauty participates in the experiment. Thus we can’t define a probability space, and thus can’t lawfully use the mathematical apparatus of formal probability theory.
I don’t think it’s fair to say that I deny the basis of rational discussion or have adopted a completely absurd position. Sometimes maps do not correspond to the territory. This includes mathematical models. There are settings in which 2+2 is not 4. There are settings where the operation of “addition” cannot be defined. Likewise, there are settings where the probabilities of some events cannot be defined. I believe Mikaël Cozic was one of the first to notice that the SB setting is problematic for certain probabilities, in his paper on the Double Halfer position, and the idea wasn’t dismissed as a priori absurd.
Actually, as you may see from the post, I’m a Halfer for the incubator SB and a Double Halfer for the classic one. And I quite understand what the Thirder position in the classic version points to, and I do not have much problem with the Anthropical Motte. You, on the contrary, claimed that all the arguments in favour of 1⁄2 are bad. I think, between the two of us, you can be more justifiably claimed to be mentally “trapped in a position”.
However, I don’t think this is a fruitful direction for the discussion. I understand how ridiculous and absurd I look from your perspective, because that’s exactly how you look from mine. I believe you can do the same. So let’s focus on our empathy instead of outgroup mentality and keep the discussion in a more respectful manner.
Of course I’m appending only one guess when the experiment is not interrupted; after all, I’m likewise appending only one guess when the experiment is interrupted. Here I’m talking about the Anthropical Bailey: the ability to actually guess the coin side better than chance per experiment, not just taking advantage of a specific scoring rule.
In any case, this is beside the point. The simulation shows how, just by ceasing to participate in the experiment, SB’s probabilities for aspects of the external world should change, no matter how counterintuitive it may sound. When the experiment is interrupted, Tails&Monday, Tails&Tuesday and Heads&Monday are not causally connected anymore, their sum becomes 1, and the Thirders’ mathematical model becomes applicable: Beauty correctly guesses Tails 2⁄3 of the time even though we are adding only one guess per experiment.
I’m not trying to be disrespectful here, just trying to honestly state what I think (which I believe is more respectful than not doing so).
If I understand your position correctly, you think Beauty should rationally think as follows:
1. Wakes up, notices the experimenters are not here to tell her it’s Wednesday and the experiment is over. Concludes it’s Monday or Tuesday.
2. Thinks to herself, “so, I wonder if the coin landed Heads?”.
3. Thinks a bit, and decides the probability of Heads is 1⁄2.
4. Looks up and sees a dead rat crash to the ground just outside the window of her room. Thinks, “my god! I guess an eagle must have caught and killed the rat, and then lost its grip on it while flying away, and it landed outside my window!”
5. Says to herself, “you know, death can come at any time...”
6. Decides that wasting time on this experiment is not the right thing to do, and instead she’ll leave and do something worthwhile.
7. Happens to ask herself again what the probability is that the coin landed Heads.
8. After thinking a bit, she decides the probability of Heads is now 1⁄3.
I think the combination of steps (3) and (8) is absurd. This is not how reason and reality work.
Here is how I think Beauty should rationally think:
Beauty notices the experimenters are not here to tell her it’s Wednesday and the experiment is over. Concludes it’s Monday or Tuesday.
Thinks to herself, “so, I wonder if the coin landed Heads?”
(optional) Entertains the idea that her awakening is itself evidence in favor of Tails, but realizes that this would be true only if her current experience were randomly sampled from three independent outcomes Tails&Monday, Tails&Tuesday and Heads&Monday, which isn’t true for this experiment. Tails&Tuesday necessarily follows Tails&Monday, and the causal process that determines her state isn’t a random sample from these outcomes.
Concludes that she doesn’t have any evidence that would distinguish an outcome where the coin landed Heads from an outcome where the coin landed Tails, and thus keeps her prior of 1⁄2.
(either) Suddenly a prince bursts through the door and explains that he managed to overcome the defenses of the evil scientists keeping her here in an unlikely feat of martial prowess, and is rescuing her now.
(or) The Beauty has second thoughts about participating in the experiment. She has good reasons to stay, but some part of her also wants to leave. Thankfully, there is a random number generator in her room (she has another coin, for example), capable of outputting numbers in some huge interval. She thinks of a number and decides that if the generator outputs exactly that number she will leave the room; otherwise she stays. The generator outputs the number the Beauty guessed. And she leaves.
(or) Suddenly the Beauty has a panic attack that makes her leave the room immediately. She remembers that the scientists told her that the experimental sleeping pills she took on Sunday have a rare side effect, which can manifest in any person, in 0.1% of cases, on any day of the next week.
(or) Any other event that the Beauty is rightfully confident to be improbable, but possible on both Monday and Tuesday, happens. Actually, even leaving the room is not necessary. The symmetry, and thus the setting of the experiment, is broken as soon as the Beauty gets this new evidence. Mind you, this wouldn’t work with an observation which has a high probability of happening—you can check it yourself with the code I provided. Likewise, it also wouldn’t work if the Beauty just observes any number on the random generator without precommitting to observing that number in particular, as it wouldn’t be a rare event from her perspective. See these two posts:
https://www.lesswrong.com/posts/u7kSTyiWFHxDXrmQT/sleeping-beauty-resolved
https://www.lesswrong.com/posts/8MZPQJhMzyoJMYfz5/sleeping-beauty-not-resolved
And now, due to this new evidence, the Beauty updates to 1⁄3. Because now there is actually a random sampling process going on, and she is twice as likely to observe the improbable outcome she has just observed when the coin is Tails.
Do the steps from 4 to 8 in your example count as such evidence? They may. If it indeed was a random occurrence and not a deliberately staged performance, the rat crashing to the ground near her window is relevant evidence! If this surprises you, notice that you’ve updated to 1⁄3 without any evidence whatsoever.
And yes, this is exactly how reason and reality work, which I’ve shown with an actual Python program faithfully replicating the logic of the experiment, one that can be run on a computer from our reality. Reasoning like this, the Beauty can correctly guess Tails with 2⁄3 probability per experiment without any shenanigans with the scoring rule.
Any other event that the Beauty is rightfully confident to be improbable, but possible on both Monday and Tuesday, happens.
And since this happens absolutely every time she wakes, Beauty should always assess the probability of Heads as 1⁄3.
There’s always a fly crawling on the wall in a random direction, unlikely to be the same on Monday and Tuesday, or a stray thought about aardvarks, or a dimming of the light from the window as a cloud passes overhead, or any of millions of other things entering her consciousness in ways that won’t be the same on Monday and Tuesday.
If you’ve read the posts you link to, you must realize that this is central to my argument for why Heads has probability 1⁄3.
So why are you a Halfer? One reason seems to be explained by your following comment:
it also wouldn’t work if the Beauty just observes any number on the random generator without precommitting to observing that number in particular, as it wouldn’t be a rare event from her perspective
But this is not how probability theory works. A rare event is a rare event. You’ve just decided to define another event, “random number generator produces number I’d guessed beforehand”, and noted that that event didn’t occur. This doesn’t change the fact that the random number generator produced a number that is unlikely to be the same as that produced on another day.
But the more fundamental reason seems to be that you don’t actually want to find the answer to the Sleeping Beauty problem. You want to find the answer to a more fantastical problem in which Beauty can be magically duplicated exactly and kept in a room totally isolated from the external world, and hence guaranteed to have no random experiences, or in which Beauty is a computer program that can be reset to its previous state, and have its inputs totally controlled by an experimenter.
The actual Sleeping Beauty problem is only slightly fantastical—a suitable memory erasing drug might well be discovered tomorrow, and nobody would think this was an incredible discovery that necessitates changing the foundations of probability theory or our notions of consciousness. People already forget things. People already have head injuries that make them forget everything for some period of time.
Of course, there’s nothing wrong with thinking about the fantastical problem. But it’s going to be hard to answer it when we haven’t yet answered questions such as whether a computer program can be conscious, and if so, whether consciousness requires actually running the program, or whether it’s enough to set up the (deterministic) program on the computer, so that it could be run, even though we don’t actually push the Start button. Without answers to such questions, how can one have any confidence that you’re reasoning correctly in this situation that is far, far outside ordinary experience?
But in any case, wouldn’t it be interesting and useful to first establish the answer to the actual Sleeping Beauty problem?
Nope, it doesn’t work this way. There is an important difference between the probability of a specific improbable event happening and the probability of any improbable event from a huge class of events happening. Unsurprisingly, the latter is much more probable than the former, and the trick works only with specific improbable events. As I’ve explicitly said, and as you could’ve checked yourself, since I provided you the code for it.
It’s easy to see why something that happens absolutely every time she wakes doesn’t help at all. You see, 50% of the coin tosses are Heads. If Beauty correctly guessed Tails in 2⁄3 of all experiments, that would be a contradiction. But it is possible for Beauty to correctly guess Tails in 2⁄3 of some subset of experiments. To get the 2⁄3 score she needs some kind of evidence that happens more often when the coin is Tails than when it is Heads, not all the time, and then to guess only when she gets this evidence.
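The same point in Bayes’ rule terms, as a sketch (the helper function and the numbers are my illustration): evidence that is certain under both coin results moves nothing, while evidence with one chance to occur under Heads and two under Tails does.

```python
from fractions import Fraction

def posterior_tails(p_e_heads, p_e_tails, prior_tails=Fraction(1, 2)):
    # Bayes' rule, per experiment: update the prior on Tails by how likely
    # the observed evidence was under each coin result.
    num = prior_tails * p_e_tails
    return num / (num + (1 - prior_tails) * p_e_heads)

# Evidence that occurs on absolutely every awakening is certain under both
# Heads and Tails, so it moves nothing:
print(posterior_tails(1, 1))  # 1/2

# A rare event with chance q per awakening has one chance to occur under
# Heads and two under Tails, so it does move the posterior:
q = Fraction(1, 10)
print(posterior_tails(q, 1 - (1 - q) ** 2))  # 19/29, about 0.655
```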
This is irrelevant unless the Beauty somehow knows where the fly is supposed to be on Monday and where on Tuesday. She can try to guess Tails when the fly is in a specific place that she precommitted to, hoping that the causal process determining the fly’s position is close enough to placing the fly in that place with the same low probability on every day, but this is not guaranteed to work.
I’ve linked two posts. You need to read the second one as well, to understand the mistake in the reasoning of the first.
This is exactly how probability theory works. The event “the random number generator produced some number” has very high probability. The event “the random number generator produced this specific number” has low probability. Which event we are talking about depends on whether the number was specified beforehand or not. It can be confusing if you forget that probabilities are in the mind: it’s about the Beauty’s decision-making process, not the metaphysical essence of randomness.
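The two events can be told apart by their frequencies. A small sketch (the number 71 and the 0–99 range follow the example in this thread):

```python
import random

trials = 100_000
guessed_beforehand = 71

# Event specified in advance: "the generator shows exactly 71".
hits = sum(random.randrange(100) == guessed_beforehand for _ in range(trials))
print(hits / trials)  # close to 0.01

# Event not specified in advance: "the generator shows some number".
produced_anything = sum(
    random.randrange(100) in range(100) for _ in range(trials))
print(produced_anything / trials)  # exactly 1.0
```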
The fact that Beauty is unable to tell which day it is or whether she has been awakened before is an important condition of the experiment.
But this doesn’t have to mean that her experiences on Monday and Tuesday, or on Heads and Tails, are necessarily exactly the same—she just has to be unable to tell which is which. The Beauty can be placed in differently colored rooms on Monday&Tails, Monday&Heads and Tuesday&Tails. All the furniture can be completely different as well. There can be a ciphered message describing the result of the coin toss. Unless she knows how to break the cipher, or knows the pattern in the colors/furniture, this doesn’t help her. The mathematical model is still the same.
In a repeated experiment, the Beauty can try executing a strategy, but this requires precommitment to that strategy. Without this precommitment she will not be able to extract useful information from all the differences between the outcomes.
How is the consciousness angle relevant here? Are you under the impression that probability theory works differently depending on whether we are reasoning about conscious or unconscious objects?
Nope, it doesn’t work this way. There is an important difference between the probability of a specific improbable event happening and the probability of any improbable event from a huge class of events happening.
In Bayesian probability theory, it certainly does work this way. To find the posterior probability of Heads, given what you have observed, you combine the prior probability with the likelihood for Heads vs. Tails based on everything that you have observed. You don’t say, “but this observation is one of a large class of observations that I’ve decided to group together, so I’ll only update based on the probability that any observation in the group would occur (which is one for both Heads and Tails in this situation)”.
You’re arguing in a frequentist fashion. A similar sort of issue for a frequentist would arise if you flipped a coin 9 times and found that 2 of the flips were Heads. If you then ask the frequentist what the p-value is for testing the hypothesis that the coin was fair, they’ll be unable to answer until you tell them whether you pre-committed to flipping the coin 9 times, or to flipping it until 2 Heads occurred (they’ll be completely lost if you tell them you just flipped until your finger got tired). Bayesians think this is ridiculous.
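The stopping-rule point can be made concrete with the numbers filled in; this follows the 2-heads-in-9-flips setup above (the arithmetic is standard):

```python
from fractions import Fraction
from math import comb

# Design "flip exactly 9 times": one-sided p-value is the chance of
# 2 or fewer heads among 9 fair flips.
p_fixed_n = sum(comb(9, k) for k in range(3)) * Fraction(1, 2 ** 9)
print(p_fixed_n)  # 23/256, about 0.090

# Design "flip until 2 heads appear": p-value is the chance this takes
# 9 or more flips, i.e. at most 1 head among the first 8 flips.
p_until_2_heads = sum(comb(8, k) for k in range(2)) * Fraction(1, 2 ** 8)
print(p_until_2_heads)  # 9/256, about 0.035

# Same data, different p-values; the Bayesian likelihood of the observed
# flip sequence is identical under both stopping rules.
```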
Of course, there are plenty of frequentists in the world, but I presume they are uninterested in the Sleeping Beauty problem, since to a frequentist, Beauty’s probability for Heads is a meaningless concept, since they don’t think probability can be used to represent degrees of belief.
How is the consciousness angle relevant here? Are you under the impression that probability theory works differently depending on whether we are reasoning about conscious or unconscious objects?
I think if Beauty isn’t a conscious being, it doesn’t make much sense to talk about how she should reason regarding philosophical arguments about probability.
I suspect we’re at a bit of an impasse with this line of discussion. I’ll just mention that probability is supposed to be useful. And if you extend the problem to allow Beauty to make bets, in various scenarios, the bets that make Beauty the most money are the ones she will make by assessing the probability of Heads to be 1⁄3 and then applying standard decision theory. Halfers are losers.
You are making a fascinating mistake, and I may write a separate post about it, even though it’s not particularly related to anthropics and is just a curious detail of probability theory which, in retrospect, I realize I was confused about myself. I’d recommend you meditate on it for a while. You already have all the information required to figure it out. You just need to switch yourself from “argument mode” to “investigation mode”.
Here are a couple more hints that you may find useful.
1) Suppose you observed the number 71 on a random number generator that produces numbers from 0 to 99.
Is it
a 1-in-100 occurrence because the number is exactly 71?
a 1-in-50 occurrence because the number consists of these two digits: 7 and 1?
a 1-in-10 occurrence because the first digit is 7?
a 1-in-2 occurrence because the number is greater than or equal to 50?
a 1-in-n occurrence because it’s possible to come up with some other arbitrary rule?
What determines which case is actually true?
2) Suppose you observed a list of numbers of length n, produced by this random number generator. The probability that exactly this series is produced is 1/100^n.
At what n are you completely shocked and in total disbelief about your reality? After all, you’ve just observed an event that your model of reality claims to be extremely improbable.
Would you be more shocked if all the numbers in this list were the same? If so, why?
Can you now produce arbitrarily improbable events just by having a random number generator? In what sense do these events have probability 1/100^n if you can witness as many of them as you want, any time?
You do not need to tell me the answers. It’s just something I believe will be helpful for you to honestly think about.
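One way to play with hint 2 (my illustration; the 0–99 generator is from hint 1):

```python
import random
from fractions import Fraction

n = 50
seq = [random.randrange(100) for _ in range(n)]

# The prior probability of this exact length-50 output is 1/100^50:
p_exact = Fraction(1, 100 ** n)
print(p_exact == Fraction(1, 10 ** 100))  # True

# Yet every run produces *some* length-50 output, so an "event of
# probability 1/100^50" is witnessed on every single run.
print(len(seq) == n)  # True
```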
Here is the last hint. Actually, I have a feeling that this just spoils the solution outright, so it’s in rot13:
Gur bofreingvbaf “Enaqbz ahzore trarengbe cebqhprq n ahzore” naq “Enaqbz ahzore trarengbe cebqhprq gur rknpg ahzore V’ir thrffrq” ner qvssrerag bofreingvbaf. Lbh pna bofreir gur ynggre bayl vs lbh’ir thrffrq n ahzore orsberunaq. Lbh znl guvax nobhg nf nal bgure novyvgl gb rkgenpg vasbezngvba sebz lbhe raivebazrag.
Fhccbfr jura gur pbva vf Gnvyf gur ebbz unf terra jnyyf naq jura vg’f Urnqf gur ebbz unf oyhr jnyyf. N crefba jub xabjf nobhg guvf naq vfa’g pbybe oyvaq pna thrff gur erfhyg bs n pbva gbff cresrpgyl. N pbybe oyvaq crefba jub xabjf nobhg guvf ehyr—pna’g. Rira vs gurl xabj gung gur ebbz unf fbzr pbybe, gurl ner hanoyr gb rkrphgr gur fgengrtl “thrff Gnvyf rirel gvzr gur ebbz vf terra”.
N crefba jub qvqa’g thrff n ahzore orsberunaq qbrfa’g cbffrff gur novyvgl gb bofreir rirag “Enaqbz ahzore trarengbe cebqhprq gur rknpg ahzore V’ir thrffrq” whfg nf n pbybe oyvaq crefba qbrfa’g unir na novyvgl gb bofreir na rirag “Gur ebbz vf terra”.
The Beauty doesn’t need to experience qualia or be self-aware to have a meaningful probability estimate.
Betting arguments are not particularly helpful. They describe the motte (the specific scoring rule), not the actual ability to guess the outcome of the coin toss in the experiment. As I’ve written in the post itself:
That is, if betting happens every day, Halfers and Double Halfers need to weight the odds by the number of bets, while Thirders already include this weighting in their definition of “probability”. On the other hand, if only one bet per experiment counts, suddenly it’s Thirders who need to discount this weighting from their “probability”, and Halfers and Double Halfers who are fine by default.
There are rules for how to do arithmetic. If you want to get the right answer, you have to follow them. So, when adding 18 and 17, you can’t just decide that you don’t like to carry 1s today, and hence compute that 18+17=25.
Similarly, there are rules for how to do Bayesian probability calculations. If you want to get the right answer, you have to follow them. One of the rules is that the posterior probability of something is found by conditioning on all the data you have. If you do a clinical trial with 1000 subjects, you can’t just decide that you’d like to compute the posterior probability that the treatment works by conditioning on the data for just the first 700.
If you’ve seen the output of a random number generator, and are using this to compute a posterior probability, you condition on the actual number observed, say 71. You do not condition on any of the other events you mention, because they are less informative than the actual number—conditioning on them would amount to ignoring part of the data. (In some circumstances, the result of conditioning on all the data may be the same as the result of conditioning on some function of the data—when that function is a “sufficient statistic”, but it’s always correct to condition on all the data.)
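A toy example of how conditioning on a coarser event can discard data (the two-generator setup is my illustration, not from the thread):

```python
from fractions import Fraction

# Two equally likely hypotheses about a generator that just output 71:
#   A: uniform on 0..99        B: uniform on the 50 even numbers 0..98
prior = Fraction(1, 2)

# Condition on the full datum "the output was exactly 71" (an odd number):
post_exact = (prior * Fraction(1, 100)) / (
    prior * Fraction(1, 100) + prior * Fraction(0, 1))
print(post_exact)  # 1: 71 is impossible under B, so A is certain

# Condition only on the coarser event "the output was >= 50":
post_coarse = (prior * Fraction(50, 100)) / (
    prior * Fraction(50, 100) + prior * Fraction(25, 50))
print(post_coarse)  # 1/2: the coarse event is equally likely under A and B
```

Here “the output was ≥ 50” is not a sufficient statistic for distinguishing the two hypotheses, so coarsening to it throws away the decisive part of the data.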
This is absolutely standard Bayesian procedure. There is nothing in the least bit controversial about it. (That is, it is definitely how Bayesian inference works—there are of course some people who don’t accept that Bayesian inference is the right thing to do.)
Similarly, there are certain rules for how to apply decision theory to choose an action to maximize your expected utility, based on probability judgements that you’ve made.
If you compute probabilities incorrectly, and then incorrectly apply decision theory to choose an action based on these incorrect probabilities, it is possible that your two errors will cancel out. That is actually rather likely if you have other ways of telling what the right answer is, and hence have the opportunity to make ad hoc (incorrect) alterations to how you apply decision theory in order to get the right decision with the wrong probabilities.
If you’d like to outline some specific betting scenario for Sleeping Beauty, I’ll show you how applying decision theory correctly produces the right action only if Beauty judges the probability of Heads to be 1⁄3.
Tangent: I ran across an apparently Frequentist analysis of Sleeping Beauty here: Sleeping Beauty: Exploring a Neglected Solution, Luna
To make the concept meaningful under Frequentism, Luna has Beauty perform an experiment of asking the higher level experimenters which awakening she is in (H1, T1, or T2). If she undergoes both sets of experiments many times, the frequency of the experimenters responding H1 will tend to 1⁄3, and so the Frequentist probability is similarly 1⁄3.
I say “apparently Frequentist” because Luna doesn’t use the term and I’m not sure of the exact terminology when Luna reasons about the frequency of hypothetical experiments that Beauty has not actually performed.