Here is how I think Beauty should rationally think:
Beauty notices the experimenters are not here to tell her it’s Wednesday and the experiment is over. Concludes it’s Monday or Tuesday.
Thinks to herself, “so, I wonder if the coin landed Heads?”
(optional) Entertains the idea that her awakening itself is evidence in favor of Tails, but realizes this would be true only if her current experience were randomly sampled from three independent outcomes, Tails&Monday, Tails&Tuesday and Heads&Monday, which isn't true for this experiment. Tails&Tuesday necessarily follows Tails&Monday, and the causal process that determines her state isn't a random sample from these outcomes.
Concludes that she doesn’t have any evidence that would distinguish an outcome where the coin landed Heads from one where it landed Tails, and thus keeps her prior of 1⁄2.
(either) Suddenly a prince bursts through the door and explains that, in an unlikely feat of martial prowess, he has overcome the defenses of the evil scientists keeping her here and is rescuing her now.
(or) Beauty has second thoughts about participating in the experiment. She has good reasons to stay, but part of her wants to leave. Thankfully, there is a random number generator in her room (she has another coin, for example) capable of outputting numbers in some huge interval. She picks a number and decides that if the generator outputs exactly that number she will leave the room; otherwise she stays. The generator outputs the number Beauty guessed, and she leaves.
(or) Suddenly Beauty has a panic attack that makes her leave the room immediately. She remembers the scientists telling her that the experimental sleeping pills she took on Sunday have a rare side effect that manifests in 0.1% of people on any given day of the following week.
(or) Any other event that Beauty is rightfully confident is improbable, yet possible on both Monday and Tuesday, happens. Actually, even leaving the room isn't necessary: the symmetry, and thus the setting of the experiment, is broken as soon as Beauty gets this new evidence. Mind you, this wouldn't work with an observation that has a high probability of happening, as you can check yourself with the code I provided. Likewise, it wouldn't work if Beauty simply observed whatever number the random generator produced without precommitting to that particular number, since that wouldn't be a rare event from her perspective. See these two posts: https://www.lesswrong.com/posts/u7kSTyiWFHxDXrmQT/sleeping-beauty-resolved https://www.lesswrong.com/posts/8MZPQJhMzyoJMYfz5/sleeping-beauty-not-resolved
And now, due to this new evidence, Beauty updates to 1⁄3, because now there actually is a random sampling process going on, and she is twice as likely to observe the improbable outcome she has just observed when the coin is Tails.
Do the steps from 4 to 8 in your example count as such evidence? They may. If it was indeed a random occurrence and not a deliberately staged performance, the rat crashing to the ground near her window is relevant evidence! If that surprises you, notice that you've updated to 1⁄3 without any evidence whatsoever.
And yes, this is exactly how reason and reality work, which I've shown with an actual Python program faithfully replicating the logic of the experiment, a program that can be run on a computer in our reality. Reasoning like this, Beauty can correctly guess Tails with 2⁄3 probability per experiment, without any shenanigans with the scoring rule.
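(For readers who don't have the code being referred to: the following is a minimal sketch of the kind of simulation described above, not the original program. It assumes the rare event occurs independently with probability 0.001 on each awakening, matching the 0.1% side effect mentioned earlier; the trial count is an arbitrary choice of mine. Beauty guesses Tails only in experiments where the rare event occurred.)

```python
import random

RARE_P = 0.001      # assumed per-awakening chance of the rare event (illustrative)
N_RUNS = 2_000_000  # illustrative number of simulated experiments

guesses = correct = 0
for _ in range(N_RUNS):
    tails = random.random() < 0.5
    awakenings = 2 if tails else 1  # Tails: Monday and Tuesday; Heads: Monday only
    rare_seen = any(random.random() < RARE_P for _ in range(awakenings))
    if rare_seen:                   # Beauty guesses Tails only after the rare event
        guesses += 1
        correct += tails

print("accuracy of 'Tails' guess per experiment:", correct / guesses)
# With RARE_P small this tends to about 2/3; with a certain event (RARE_P = 1) it is 1/2.
```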
Any other event that Beauty is rightfully confident is improbable, yet possible on both Monday and Tuesday, happens.
And since this happens absolutely every time she wakes, Beauty should always assess the probability of Heads as 1⁄3.
There’s always a fly crawling on the wall in a random direction, unlikely to be the same on Monday and Tuesday, or a stray thought about aardvarks, or a dimming of the light from the window as a cloud passes overhead, or any of millions of other things entering her consciousness in ways that won’t be the same on Monday and Tuesday.
If you’ve read the posts you link to, you must realize that this is central to my argument for why Heads has probability 1⁄3.
So why are you a Halfer? One reason seems to be explained by your following comment:
it wouldn’t work if Beauty simply observed whatever number the random generator produced without precommitting to that particular number, since that wouldn’t be a rare event from her perspective
But this is not how probability theory works. A rare event is a rare event. You’ve just decided to define another event, “random number generator produces number I’d guessed beforehand”, and noted that that event didn’t occur. This doesn’t change the fact that the random number generator produced a number that is unlikely to be the same as that produced on another day.
But the more fundamental reason seems to be that you don’t actually want to find the answer to the Sleeping Beauty problem. You want to find the answer to a more fantastical problem in which Beauty can be magically duplicated exactly and kept in a room totally isolated from the external world, and hence guaranteed to have no random experiences, or in which Beauty is a computer program that can be reset to its previous state, and have its inputs totally controlled by an experimenter.
The actual Sleeping Beauty problem is only slightly fantastical: a suitable memory-erasing drug might well be discovered tomorrow, and nobody would think it an incredible discovery that necessitated changing the foundations of probability theory or our notions of consciousness. People already forget things. People already have head injuries that make them forget everything for some period of time.
Of course, there’s nothing wrong with thinking about the fantastical problem. But it’s going to be hard to answer it when we haven’t yet answered questions such as whether a computer program can be conscious, and if so, whether consciousness requires actually running the program, or whether it’s enough to set up the (deterministic) program on the computer so that it could be run, even though we don’t actually push the Start button. Without answers to such questions, how can you have any confidence that you’re reasoning correctly in this situation that is far, far outside ordinary experience?
But in any case, wouldn’t it be interesting and useful to first establish the answer to the actual Sleeping Beauty problem?
And since this happens absolutely every time she wakes, Beauty should always assess the probability of Heads as 1⁄3.
Nope. Doesn’t work this way. There is an important difference between the probability of a specific improbable event happening and the probability of some improbable event from a huge class of events happening. Unsurprisingly, the latter is much more probable than the former, and the trick works only with improbable events. As I’ve explicitly said, and as you could’ve checked yourself with the code I provided.
It’s easy to see why something that happens absolutely every time she wakes doesn’t help at all. You see, 50% of the coin tosses are Heads. If Beauty correctly guessed Tails in 2⁄3 of all experiments, that would be a contradiction, since Tails occurs in only half of them. But it’s possible for Beauty to correctly guess Tails in 2⁄3 of some subset of experiments. To get the 2⁄3 score she needs some kind of evidence that happens more often when the coin is Tails than when it is Heads, not every time, and then guess only when she gets this evidence.
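(A back-of-the-envelope check of that claim, under the simplifying assumption that the evidence occurs independently with probability p on each awakening: among experiments in which Beauty sees the evidence at least once, and therefore guesses Tails, the fraction of Tails experiments is

$$P(\text{Tails}\mid \text{evidence seen}) = \frac{\tfrac12\bigl(1-(1-p)^2\bigr)}{\tfrac12\,p+\tfrac12\bigl(1-(1-p)^2\bigr)} = \frac{2-p}{3-p},$$

which tends to 2⁄3 as p goes to 0 and equals 1⁄2 when p = 1, consistent with the statement that the trick works only for improbable events.)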
There’s always a fly crawling on the wall in a random direction, unlikely to be the same on Monday and Tuesday, or a stray thought about aardvarks, or a dimming of the light from the window as a cloud passes overhead, or any of millions of other things entering her consciousness in ways that won’t be the same on Monday and Tuesday.
This is irrelevant unless Beauty somehow knows where the fly is supposed to be on Monday and where on Tuesday. She can try to guess Tails when the fly is in a specific place that she precommitted to, hoping that the causal process determining the fly's position is close enough to placing the fly there with the same low probability on every day, but this isn't guaranteed to work.
If you’ve read the posts you link to, you must realize that this is central to my argument for why Heads has probability 1⁄3.
I’ve linked two posts. You need to read the second one as well, to understand the mistake in the reasoning of the first.
But this is not how probability theory works. A rare event is a rare event. You’ve just decided to define another event, “random number generator produces number I’d guessed beforehand”, and noted that that event didn’t occur. This doesn’t change the fact that the random number generator produced a number that is unlikely to be the same as that produced on another day.
This is exactly how probability theory works. The event “the random number generator produced some number” has very high probability. The event “the random number generator produced this specific number” has low probability. Which event we are talking about depends on whether the number was specified beforehand or not. This can be confusing if you forget that probabilities are in the mind: it’s about Beauty’s decision-making process, not the metaphysical essence of randomness.
You want to find the answer to a more fantastical problem in which Beauty can be magically duplicated exactly and kept in a room totally isolated from the external world, and hence guaranteed to have no random experiences
The fact that Beauty is unable to tell which day it is or whether she has been awakened before is an important condition of the experiment.
But this doesn’t have to mean that her experiences on Monday and Tuesday, or on Heads and Tails, are necessarily exactly the same; she just has to be unable to tell which is which. Beauty can be placed in differently colored rooms on Monday&Tails, Monday&Heads and Tuesday&Tails. All the furniture can be completely different as well. There can even be a ciphered message describing the result of the coin toss. Unless she knows how to break the cipher, or knows the pattern in the colors and furniture, this doesn’t help her. The mathematical model is still the same.
In a repeated experiment Beauty can try executing a strategy, but this requires precommitment to that strategy. Without such precommitment she will not be able to get useful information from all the differences between the outcomes.
But it’s going to be hard to answer it when we haven’t yet answered questions such as whether a computer program can be conscious, and if so, whether consciousness requires actually running the program, or whether it’s enough to set up the (deterministic) program on the computer so that it could be run, even though we don’t actually push the Start button.
How is the consciousness angle relevant here? Are you under the impression that probability theory works differently depending on whether we are reasoning about conscious or unconscious objects?
Nope. Doesn’t work this way. There is an important difference between the probability of a specific improbable event happening and the probability of some improbable event from a huge class of events happening.
In Bayesian probability theory, it certainly does work this way. To find the posterior probability of Heads, given what you have observed, you combine the prior probability with the likelihood for Heads vs. Tails based on everything that you have observed. You don’t say, “but this observation is one of a large class of observations that I’ve decided to group together, so I’ll only update based on the probability that any observation in the group would occur (which is one for both Heads and Tails in this situation)”.
You’re arguing in a frequentist fashion. A similar sort of issue for a frequentist would arise if you flipped a coin 9 times and found that 2 of the flips were Heads. If you then ask the frequentist what the p-value is for testing the hypothesis that the coin was fair, they’ll be unable to answer until you tell them whether you pre-committed to flipping the coin 9 times, or to flipping it until 2 Heads occurred (they’ll be completely lost if you tell them you just flipped until your finger got tired). Bayesians think this is ridiculous.
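(To make the stopping-rule point concrete, here is a small illustrative calculation with numbers of my own choosing, not anything from the thread: the one-sided p-value for "2 Heads in 9 flips of a fair coin" depends on whether the experimenter fixed the number of flips or flipped until the second Head.)

```python
from math import comb

n_flips, n_heads = 9, 2

# Design 1: number of flips fixed at 9; p-value = P(at most 2 Heads in 9 fair flips)
p_binomial = sum(comb(n_flips, k) for k in range(n_heads + 1)) / 2**n_flips

# Design 2: flip until the 2nd Head; p-value = P(the 2nd Head takes 9 or more flips)
#           = P(at most 1 Head in the first 8 flips)
p_neg_binomial = sum(comb(n_flips - 1, k) for k in range(n_heads)) / 2**(n_flips - 1)

print(f"fixed-n design:       p = {p_binomial:.4f}")      # ~0.0898
print(f"flip-until-2-heads:   p = {p_neg_binomial:.4f}")  # ~0.0352
```

Same data, different p-values. For a Bayesian, the likelihood for the coin's bias is proportional to θ²(1−θ)⁷ under either design, so the posterior does not depend on the stopping rule.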
Of course, there are plenty of frequentists in the world, but I presume they are uninterested in the Sleeping Beauty problem, since to a frequentist Beauty’s probability for Heads is a meaningless concept: they don’t think probability can be used to represent degrees of belief.
How is the consciousness angle relevant here? Are you under the impression that probability theory works differently depending on whether we are reasoning about conscious or unconscious objects?
I think if Beauty isn’t a conscious being, it doesn’t make much sense to talk about how she should reason regarding philosophical arguments about probability.
I suspect we’re at a bit of an impasse with this line of discussion. I’ll just mention that probability is supposed to be useful. And if you extend the problem to allow Beauty to make bets, in various scenarios, the bets that make Beauty the most money are the ones she will make by assessing the probability of Heads to be 1⁄3 and then applying standard decision theory. Halfers are losers.
You are making a fascinating mistake, and I may write a separate post about it, even though it’s not particularly related to anthropics and is just a curious detail of probability theory which, in retrospect, I realize I was confused about myself. I’d recommend you meditate on it for a while. You already have all the information required to figure it out. You just need to switch yourself from “argument mode” to “investigation mode”.
Here are a couple more hints that you may find useful.
1) Suppose you observed the number 71 on a random number generator that produces numbers from 0 to 99.
Is it
a 1-in-100 occurrence, because the number is exactly 71?
a 1-in-50 occurrence, because the number consists of the two digits 7 and 1?
a 1-in-10 occurrence, because the first digit is 7?
a 1-in-2 occurrence, because the number is greater than or equal to 50?
a 1-in-n occurrence, because it’s possible to come up with some other arbitrary rule?
What determines which case is actually true?
2) Suppose you observed a list of n numbers produced by this random number generator. The probability that exactly this series is produced is 1/100^n.
At what n are you completely shocked and in total disbelief about your reality? After all, you’ve just observed an event that your model of reality claims to be extremely improbable.
Would you be more shocked if all the numbers in the list were the same? If so, why?
Can you now produce arbitrarily improbable events just by having a random number generator? In what sense do these events have probability 1/100^n if you can witness as many of them as you want, any time?
You do not need to tell me the answers. It’s just something I believe will be helpful for you to honestly think about.
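(For concreteness, a small script of my own, with an arbitrary trial count, tabulating how often a fresh draw satisfies each of the candidate descriptions from hint 1. The point it is meant to illustrate, not settle, is that each description picks out a different event with a different frequency, and the post-hoc description "it equals some number" is satisfied every time.)

```python
import random

TRIALS = 100_000
draws = [random.randrange(100) for _ in range(TRIALS)]

events = {
    "equals the prespecified 71": lambda n: n == 71,
    "consists of the digits 7 and 1": lambda n: sorted(f"{n:02d}") == ["1", "7"],
    "first digit is 7": lambda n: 70 <= n <= 79,
    "is at least 50": lambda n: n >= 50,
    "equals some number (post hoc)": lambda n: True,
}
for name, test in events.items():
    freq = sum(map(test, draws)) / TRIALS
    print(f"{name}: {freq:.3f}")  # roughly 0.01, 0.02, 0.10, 0.50, 1.00
```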
To find the posterior probability of Heads, given what you have observed, you combine the prior probability with the likelihood for Heads vs. Tails based on everything that you have observed.
Here is the last hint; actually, I have a feeling that this just spoils the solution outright, so it’s in rot13:
Gur bofreingvbaf “Enaqbz ahzore trarengbe cebqhprq n ahzore” naq “Enaqbz ahzore trarengbe cebqhprq gur rknpg ahzore V’ir thrffrq” ner qvssrerag bofreingvbaf. Lbh pna bofreir gur ynggre bayl vs lbh’ir thrffrq n ahzore orsberunaq. Lbh znl guvax nobhg nf nal bgure novyvgl gb rkgenpg vasbezngvba sebz lbhe raivebazrag.
Fhccbfr jura gur pbva vf Gnvyf gur ebbz unf terra jnyyf naq jura vg’f Urnqf gur ebbz unf oyhr jnyyf. N crefba jub xabjf nobhg guvf naq vfa’g pbybe oyvaq pna thrff gur erfhyg bs n pbva gbff cresrpgyl. N pbybe oyvaq crefba jub xabjf nobhg guvf ehyr—pna’g. Rira vs gurl xabj gung gur ebbz unf fbzr pbybe, gurl ner hanoyr gb rkrphgr gur fgengrtl “thrff Gnvyf rirel gvzr gur ebbz vf terra”.
N crefba jub qvqa’g thrff n ahzore orsberunaq qbrfa’g cbffrff gur novyvgl gb bofreir rirag “Enaqbz ahzore trarengbe cebqhprq gur rknpg ahzore V’ir thrffrq” whfg nf n pbybe oyvaq crefba qbrfa’g unir na novyvgl gb bofreir na rirag “Gur ebbz vf terra”.
I think if Beauty isn’t a conscious being, it doesn’t make much sense to talk about how she should reason regarding philosophical arguments about probability.
Beauty doesn’t need to experience qualia or be self-aware to have a meaningful probability estimate.
I’ll just mention that probability is supposed to be useful. And if you extend the problem to allow Beauty to make bets, in various scenarios, the bets that make Beauty the most money are the ones she will make by assessing the probability of Heads to be 1⁄3 and then applying standard decision theory.
Betting arguments are not particularly helpful. They describe the motte, a specific scoring rule, not the actual ability to guess the outcome of the coin toss in the experiment. As I’ve written in the post itself:
As long as we do not claim that this fact gives an ability to predict the result of the coin toss better than chance, we are just using different definitions while agreeing on everything. We can translate from Thirder language to mine and back without any problem. Whatever betting scheme is proposed, all other things being equal, we will agree to the same bets.
That is, if betting happens every day, Halfers and Double Halfers need to weight the odds by the number of bets, while Thirders already include this weighting in their definition of “probability”. On the other hand, if only one bet per experiment counts, suddenly it’s Thirders who need to discount this weighting from their “probability”, while Halfers and Double Halfers are fine by default.
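(A minimal sketch of the equivalence being described, using an illustrative per-awakening bet of my own devising: Beauty pays a stake x at every awakening and receives 1 at that awakening if the coin was Tails. The two accounting conventions pick out the same break-even stake.)

```python
# Per-awakening bet: pay stake x at each awakening, receive 1 at each awakening if Tails.

def halfer_ev_per_experiment(x):
    # P(Heads) = P(Tails) = 1/2 per experiment; weight payouts by the number of bets made.
    return 0.5 * (1 * (-x)) + 0.5 * (2 * (1 - x))

def thirder_ev_per_awakening(x):
    # P(Heads) = 1/3 per awakening; the weighting is already folded into the probability.
    return (1 / 3) * (-x) + (2 / 3) * (1 - x)

for x in (0.5, 2 / 3, 0.75):
    print(f"stake {x:.2f}: halfer EV {halfer_ev_per_experiment(x):+.3f}, "
          f"thirder EV {thirder_ev_per_awakening(x):+.3f}")
# Both conventions make the bet fair exactly at x = 2/3 and favourable below it,
# so they endorse the same bets.
```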
There are rules for how to do arithmetic. If you want to get the right answer, you have to follow them. So, when adding 18 and 17, you can’t just decide that you don’t like to carry 1s today, and hence compute that 18+17=25.
Similarly, there are rules for how to do Bayesian probability calculations. If you want to get the right answer, you have to follow them. One of the rules is that the posterior probability of something is found by conditioning on all the data you have. If you do a clinical trial with 1000 subjects, you can’t just decide that you’d like to compute the posterior probability that the treatment works by conditioning on the data for just the first 700.
If you’ve seen the output of a random number generator, and are using this to compute a posterior probability, you condition on the actual number observed, say 71. You do not condition on any of the other events you mention, because they are less informative than the actual number; conditioning on them would amount to ignoring part of the data. (In some circumstances, the result of conditioning on all the data is the same as the result of conditioning on some function of the data, namely when that function is a “sufficient statistic”, but it’s always correct to condition on all the data.)
This is absolutely standard Bayesian procedure. There is nothing in the least bit controversial about it. (That is, it is definitely how Bayesian inference works—there are of course some people who don’t accept that Bayesian inference is the right thing to do.)
Similarly, there are certain rules for how to apply decision theory to choose an action to maximize your expected utility, based on probability judgements that you’ve made.
If you compute probabilities incorrectly, and then incorrectly apply decision theory to choose an action based on these incorrect probabilities, it is possible that your two errors will cancel out. That is actually rather likely if you have other ways of telling what the right answer is, and hence have the opportunity to make ad hoc (incorrect) alterations to how you apply decision theory in order to get the right decision with the wrong probabilities.
If you’d like to outline some specific betting scenario for Sleeping Beauty, I’ll show you how applying decision theory correctly produces the right action only if Beauty judges the probability of Heads to be 1⁄3.
Of course, there are plenty of frequentists in the world, but I presume they are uninterested in the Sleeping Beauty problem, since to a frequentist Beauty’s probability for Heads is a meaningless concept: they don’t think probability can be used to represent degrees of belief.
Tangent: I ran across an apparently Frequentist analysis of Sleeping Beauty here: Sleeping Beauty: Exploring a Neglected Solution, Luna.
To make the concept meaningful under Frequentism, Luna has Beauty perform an experiment: asking the higher-level experimenters which awakening she is in (H1, T1, or T2). If she undergoes both sets of experiments many times, the frequency of the experimenters responding H1 will tend to 1⁄3, and so the Frequentist probability is likewise 1⁄3.
I say “apparently Frequentist” because Luna doesn’t use the term and I’m not sure of the exact terminology when Luna reasons about the frequency of hypothetical experiments that Beauty has not actually performed.
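(A quick sketch of the frequency being appealed to, as I understand the description above; the simulation is my own illustration, not code from that paper, and the trial count is arbitrary. Among all awakenings at which Beauty asks, the answer H1 comes back about a third of the time.)

```python
import random

N_RUNS = 1_000_000
answers = []
for _ in range(N_RUNS):
    if random.random() < 0.5:            # Heads: one awakening
        answers.append("H1")
    else:                                # Tails: two awakenings
        answers.extend(["T1", "T2"])

print("frequency of H1 among all awakenings:", answers.count("H1") / len(answers))  # ~1/3
```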