The main complaint here seems to be the subjectivity of these probabilities. That does not bother me: in my view, a probability is any measure on a space that satisfies the axioms of probability. Whether a given model using probabilities matches observed reality depends on the interpretations that are part of that model.
So essentially, whether Sleeping Beauty “should” think the probability of it being Monday is 1⁄2 or 1⁄3 is of little interest to me. Those are each appropriate probabilities in different models. When you apply either model to make actual predictions or strategies, you use the probabilities in different ways and get the same final result either way. So who really cares whether half an angel or a third of an angel is dancing on the head of a coin in the interim?
The only real problem arises when someone uses probabilities from one model and misapplies parts of another model to make a prediction from them.
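To make the “each appropriate in different models” point concrete, here is a quick simulation of my own (a sketch, assuming the usual setup: Heads gives one awakening, Tails gives two). Counting per experiment yields one measure; counting per awakening yields another. Both are legitimate probability measures on their own sample spaces.

```python
import random

random.seed(0)  # reproducible sketch

N = 100_000  # number of simulated experiments
heads_experiments = 0   # experiments where the coin landed Heads
heads_awakenings = 0    # awakenings that occur under Heads
total_awakenings = 0

for _ in range(N):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2  # Heads: Monday only; Tails: Monday and Tuesday
    total_awakenings += awakenings
    if heads:
        heads_experiments += 1
        heads_awakenings += 1  # the single Heads awakening

# Per-experiment measure: relative frequency of Heads among experiments (close to 1/2)
p_half = heads_experiments / N
# Per-awakening measure: relative frequency of Heads among awakenings (close to 1/3)
p_third = heads_awakenings / total_awakenings

print(p_half, p_third)
```

Neither number is “the” probability of Heads; they answer different questions about the same process, which is why each works fine inside its own model.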
If you think 1⁄2 is a valid probability in its own model, I would assume you are also interested in that model's probability update rule, i.e. how Beauty can justify the probability of Heads still being 1⁄2 after learning it is Monday.
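For what it's worth, the conditioning arithmetic lands on 1⁄2 even in the per-awakening model. A sketch of the standard Bayes computation (my own illustration, assuming equal weight on the three possible awakenings, not a claim about which model Beauty should use):

```python
from fractions import Fraction

# Per-awakening sample space with equal weight on each of the three
# awakenings: (Heads, Mon), (Tails, Mon), (Tails, Tue).
space = {
    ("Heads", "Mon"): Fraction(1, 3),
    ("Tails", "Mon"): Fraction(1, 3),
    ("Tails", "Tue"): Fraction(1, 3),
}

# P(Monday) = 1/3 + 1/3 = 2/3
p_monday = sum(p for (coin, day), p in space.items() if day == "Mon")

# P(Heads | Monday) = P(Heads and Monday) / P(Monday) = (1/3) / (2/3) = 1/2
p_heads_given_monday = space[("Heads", "Mon")] / p_monday

print(p_heads_given_monday)
```

So on learning it is Monday, the thirder's 1⁄3 updates to 1⁄2; whether that counts as a justification of the halfer's unconditional 1⁄2 is exactly what is in dispute here.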
Why would I be interested in finding a justification for that particular update?
Since you said 1⁄2 is a valid answer in its own model, wouldn't you want to know whether that model is self-consistent, rather than just picking whichever answer seems least problematic?
What I mean is: it seems a bizarre thing to start with a model, then conjure a conclusion, then try to justify that the conclusion is consistent with the model. Why would you assume I would be interested in doing any such thing?