I’m now unclear exactly what the bailey position is from your perspective. You said in the opening post, regarding the classic Sleeping Beauty problem:
The Bailey is the claim that the coin is actually more likely to be Tails when I participate in the experiment myself. That is, my awakening on Monday or Tuesday gives me evidence that lawfully updates me to thinking that the coin landed Tails with 2⁄3 probability.
From the perspective of the Bayesian Beauty paper, the thirder position is that, given the classic (non-incubator) Sleeping Beauty experiment, with these anthropic priors:
P(Monday | Heads) = 1⁄2
P(Monday | Tails) = 1⁄2
P(Heads) = 1⁄2
Then the following is true:
P(Heads | Awake) = 1⁄3
I think this follows from the given assumptions and priors. Do you agree?
One conversion of this into words is that my awakening (Awake=True) gives me evidence that lawfully updates me from thinking, on Sunday, that the coin will land either way with equal probability (P(Heads) = 1⁄2) to waking up and thinking that the coin right now is more likely to be showing Tails (P(Heads | Awake) = 1⁄3). Do you disagree with this conversion of the math into words? Would you perhaps phrase it differently?
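If it helps to pin down the per-awakening reading numerically, here is a minimal Monte Carlo sketch (the function name and the counting scheme are mine, not from the Bayesian Beauty paper). It tallies how often the coin shows Heads across awakenings, given that Tails produces two awakenings and Heads produces one:

```python
import random

def simulate(n_experiments=100_000):
    """Fraction of awakenings at which the coin shows Heads in the
    classic Sleeping Beauty setup: Heads -> one awakening (Monday),
    Tails -> two awakenings (Monday and Tuesday)."""
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_experiments):
        coin = random.choice(["Heads", "Tails"])
        awakenings = 1 if coin == "Heads" else 2
        total_awakenings += awakenings
        if coin == "Heads":
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

print(simulate())  # ≈ 1/3
```

Under these assumptions the per-awakening frequency of Heads converges to 1⁄3, matching P(Heads | Awake) = 1⁄3 above; whether that frequency is the right referent for "probability" here is exactly what is in dispute.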
Whereas now you define the bailey position as:
The bailey position: that participating in the experiment gives them the ability to correctly guess tails in 2⁄3 of the experiments.
I agree with you that this is false, but it reads to me as a different position.
The Bailey is the claim that the coin is actually more likely to be Tails when I participate in the experiment myself. That is, my awakening on Monday or Tuesday gives me evidence that lawfully updates me to thinking that the coin landed Tails with 2⁄3 probability.
The bailey position: that participating in the experiment gives them the ability to correctly guess tails in 2⁄3 of the experiments.
Could you explain what is the difference you see between these two positions?
If you receive some evidence that lawfully updates you to believing that the coin is Tails with 2⁄3 probability in this experiment, then in 2 out of 3 experiments the coin has to be Tails when you receive this evidence.
If you receive this evidence in every experiment you participate in, then the coin has to be Tails in 2 out of 3 of the experiments you participate in, and thus you have to be able to correctly guess Tails in 2 out of 3 experiments.
P(Monday | Heads) = 1⁄2
P(Monday | Tails) = 1⁄2
P(Heads) = 1⁄2
Then the following is true:
P(Heads | Awake) = 1⁄3
I think this follows from the given assumptions and priors. Do you agree?
There is a fundamental issue with trying to apply formal probability theory to the classic Sleeping Beauty problem, because the setting doesn’t satisfy the assumptions of the Kolmogorov axioms. P(Monday) and P(Tuesday) are poorly defined and are not actually two elementary outcomes of the sample space, because Tuesday follows Monday. Likewise, P(Heads&Monday), P(Tails&Monday) and P(Tails&Tuesday) are poorly defined and are not three elementary outcomes, for a similar reason.
I’ll give a deeper read to the Bayesian Beauty paper, but from what I’ve already seen, it just keeps applying the same mathematical apparatus to a setting that it doesn’t properly fit.
Could you explain what is the difference you see between these two positions?
In the second one you specifically describe “the ability to correctly guess tails in 2⁄3 of the experiments”, whereas in the first you more loosely describe “thinking that the coin landed Tails with 2⁄3 probability”, which I previously read as being a probability per-awakening rather than per-coin-flip.
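A quick simulation (names mine, for illustration only) makes the two readings come apart concretely: the strategy "always guess Tails" is correct on about 2⁄3 of awakenings, but the coin is Tails in only about 1⁄2 of experiments.

```python
import random

def guess_tails_stats(n_experiments=100_000):
    """For the strategy 'always guess Tails', compare two frequencies:
    correct guesses per awakening vs. Tails outcomes per experiment."""
    tails_experiments = 0
    correct_awakenings = 0
    total_awakenings = 0
    for _ in range(n_experiments):
        coin = random.choice(["Heads", "Tails"])
        awakenings = 1 if coin == "Heads" else 2
        total_awakenings += awakenings
        if coin == "Tails":
            tails_experiments += 1
            correct_awakenings += awakenings  # both Tails awakenings guess right
    return (correct_awakenings / total_awakenings,
            tails_experiments / n_experiments)

per_awakening, per_experiment = guess_tails_stats()
print(per_awakening)   # ≈ 2/3
print(per_experiment)  # ≈ 1/2
```

The gap between the two numbers is exactly the per-awakening versus per-coin-flip ambiguity described above: Tails experiments contribute two awakenings each, so they are overrepresented when you count by awakening.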
Would it be less misleading if I change the first phrase like this:
The Bailey is the claim that the coin is actually more likely to be Tails when I participate in the experiment myself. That is, my awakening on Monday or Tuesday gives me evidence that lawfully updates me to thinking that the coin landed Tails with 2⁄3 probability in this experiment, not just on an average awakening.