there seems to be consensus that the probability “in the experiment” is 1⁄2
I also don’t think this is just a matter of confusion. With respect to the motte and bailey you describe, it looks to me like many thirders hold the bailey position, both in “classic” and “incubator” versions of the problem.
Well, you see, this is precisely the confusion I’m talking about.
If there are thirders who hold the bailey position: that participating in the experiment gives them the ability to correctly guess tails in 2⁄3 of the experiments, then there can’t be a consensus that the probability “in the experiment” is 1⁄2.
The whole “paradox” is that despite the fact that any random awakening is 2⁄3 likely to happen when the coin landed Tails, the fact that you are awake doesn’t help you guess the outcome of the coin toss in this experiment better than chance. So it’s very important to be precise about what you mean by “credence” or “probability”.
So if you claim that the bailey position is wrong, then there is a real dispute in play.
Yep. This is specifically the dispute I want to address, but to do that one has to properly separate the bailey from the motte first. The next post will explore the bailey position in more detail and show how it violates conservation of expected evidence.
I’m now unclear exactly what the bailey position is from your perspective. You said in the opening post, regarding the classic Sleeping Beauty problem:
The Bailey is the claim that the coin is actually more likely to be Tails when I participate in the experiment myself. That is, my awakening on Monday or Tuesday gives me evidence that lawfully updates me to thinking that the coin landed Tails with 2⁄3 probability.
From the perspective of the Bayesian Beauty paper, the thirder position is that, given the classic (non-incubator) Sleeping Beauty experiment, with these anthropic priors:
P(Monday | Heads) = 1⁄2
P(Monday | Tails) = 1⁄2
P(Heads) = 1⁄2
Then the following is true:
P(Heads | Awake) = 1⁄3
I think this follows from the given assumptions and priors. Do you agree?
One conversion of this into words is that my awakening (Awake=True) gives me evidence that lawfully updates me from, on Sunday, thinking that the coin will equally land either way (P(Heads) = 1⁄2) to waking up and thinking that the coin right now is more likely to be showing tails (P(Heads | Awake) = 1⁄3). Do you disagree with the conversion of the math into words? Would you perhaps phrase it differently?
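One way to sanity-check this conversion is to tally awakenings directly. A minimal Monte Carlo sketch of the classic protocol (the function name `simulate` is mine, not from the paper):

```python
import random

def simulate(n_experiments=100_000):
    """Run the classic Sleeping Beauty protocol many times and count,
    over all awakenings, how often the coin is showing Heads."""
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_experiments):
        heads = random.random() < 0.5  # fair coin toss
        if heads:
            total_awakenings += 1      # Heads: one awakening (Monday)
            heads_awakenings += 1
        else:
            total_awakenings += 2      # Tails: two awakenings (Mon + Tue)
    return heads_awakenings / total_awakenings

# The per-awakening frequency of Heads converges to 1/3,
# matching P(Heads | Awake) = 1/3 under the stated priors.
```

Note that this tallies a frequency per awakening, not per experiment, which is exactly the distinction at issue.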
Whereas now you define the bailey position as:
The bailey position: that participating in the experiment gives them the ability to correctly guess tails in 2⁄3 of the experiments.
I agree with you that this is false, but it reads to me as a different position.
The Bailey is the claim that the coin is actually more likely to be Tails when I participate in the experiment myself. That is, my awakening on Monday or Tuesday gives me evidence that lawfully updates me to thinking that the coin landed Tails with 2⁄3 probability.
The bailey position: that participating in the experiment gives them the ability to correctly guess tails in 2⁄3 of the experiments.
Could you explain what difference you see between these two positions?
If you receive some evidence that lawfully updates you to believing that the coin is Tails with 2⁄3 probability in this experiment, then in 2 out of 3 experiments the coin has to be Tails when you receive this evidence.
If you receive this evidence in every experiment you participate in, then the coin has to be Tails 2 out of 3 times when you participate in the experiment, and thus you have to be able to correctly guess Tails in 2 out of 3 experiments.
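The per-experiment and per-awakening scoring rules can be compared directly with a toy simulation of an “always guess Tails” policy (a sketch of my own; the function name and the policy are illustrative):

```python
import random

def guess_tails_accuracy(n_experiments=100_000):
    """Always guess Tails; score the guess both per experiment
    and per awakening."""
    correct_experiments = 0
    correct_awakenings = 0
    total_awakenings = 0
    for _ in range(n_experiments):
        tails = random.random() < 0.5          # fair coin toss
        awakenings = 2 if tails else 1         # Tails wakes you twice
        total_awakenings += awakenings
        if tails:
            correct_experiments += 1           # right in this experiment
            correct_awakenings += awakenings   # right at both awakenings
    return (correct_experiments / n_experiments,
            correct_awakenings / total_awakenings)

# Per-experiment accuracy converges to 1/2, while per-awakening
# accuracy converges to 2/3 -- the same policy scores differently
# under the two rules.
```

This makes concrete why a 2⁄3 success rate per awakening does not imply a 2⁄3 success rate per experiment.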
P(Monday | Heads) = 1⁄2
P(Monday | Tails) = 1⁄2
P(Heads) = 1⁄2
Then the following is true:
P(Heads | Awake) = 1⁄3
I think this follows from the given assumptions and priors. Do you agree?
There is a fundamental issue with trying to apply formal probability theory to the classic Sleeping Beauty problem, because the setting doesn’t satisfy the assumptions of the Kolmogorov axioms. P(Monday) and P(Tuesday) are poorly defined and are not actually two elementary outcomes of the sample space, because Tuesday follows Monday. Likewise, P(Heads&Monday), P(Tails&Monday) and P(Tails&Tuesday) are poorly defined and are not three elementary outcomes, for a similar reason.
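One way to illustrate the point: in a single Tails run, both the Monday and the Tuesday awakening occur, so the three candidate outcomes do not partition the runs of the experiment. A hypothetical sketch (the function name is mine):

```python
import random

def outcomes_of_one_run():
    """Return the set of (coin, day) awakening events realised
    in a single run of the classic experiment."""
    if random.random() < 0.5:
        # Heads: only the Monday awakening happens
        return {("Heads", "Monday")}
    # Tails: the Monday AND Tuesday awakenings both happen
    return {("Tails", "Monday"), ("Tails", "Tuesday")}

# Every Tails run realises two of the three candidate "elementary
# outcomes" at once, so they are not mutually exclusive over runs.
```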
I’ll give the Bayesian Beauty paper a deeper read, but from what I’ve already seen it just keeps applying the same mathematical apparatus to a setting that it doesn’t properly fit.
Could you explain what difference you see between these two positions?
In the second one you specifically describe “the ability to correctly guess tails in 2⁄3 of the experiments”, whereas in the first you more loosely describe “thinking that the coin landed Tails with 2⁄3 probability”, which I previously read as being a probability per-awakening rather than per-coin-flip.
Would it be less misleading if I change the first phrase like this:
The Bailey is the claim that the coin is actually more likely to be Tails when I participate in the experiment myself. That is, my awakening on Monday or Tuesday gives me evidence that lawfully updates me to thinking that the coin landed Tails with 2⁄3 probability in this experiment, not just on an average awakening.