In case the bet is offered on every awakening: do you mean that if she gives conflicting answers on Monday and Tuesday, the bet is nevertheless regarded as accepted?
Yes, I do.
Of course, if the experiment is run as stated she wouldn’t be able to give conflicting answers, so the point is moot. But having a strict algorithm for resolving such theoretical cases is a good thing anyway.
My initial idea was that if, for example, only her Monday answer counts and Beauty knows that, she could reason that whenever her answer counts it is Monday, and arrive at the conclusion that it is reasonable to act as if it were Monday on every awakening, thus grounding her answer on P(H|Monday) = 1/2. The same logic holds for the rules „last awakening counts“ and „random awakening counts“.
Yes, I got it. As a matter of fact, this reasoning is unlawful. A probability estimate is about the evidence you receive, not about what “counts” for a betting scheme. If Beauty receives the same evidence whether or not her awakening counts, she can’t update her probability estimate. And if, in order to arrive at the correct answer, she needs to behave as if every day is Monday, it means that something is wrong with her model.
Thankfully for thirdism, she does not have to do that. She can simply assign zero utility to the Tuesday awakening and get the correct betting odds.
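A minimal sketch of that calculation, assuming thirder credences of 1/3 for each awakening and a 1:1 bet on Heads that is resolved once per experiment (only the Monday answer counts):

```python
# Sketch: thirder credences with zero utility assigned to the Tuesday awakening,
# for a bet on Heads that is only resolved via the Monday answer.

def expected_utility(win, loss, tuesday_weight):
    p_heads = 1 / 3            # thirder credence at an awakening
    p_tails_monday = 1 / 3
    p_tails_tuesday = 1 / 3
    return (p_heads * win
            - p_tails_monday * loss
            - p_tails_tuesday * loss * tuesday_weight)

# With the Tuesday awakening weighted at zero, a 1:1 bet is exactly fair,
# which are the correct odds for a once-per-experiment bet on a fair coin.
print(expected_utility(win=1, loss=1, tuesday_weight=0))  # 0.0

# Without that adjustment the same bet looks unfavourable at 1:1.
print(expected_utility(win=1, loss=1, tuesday_weight=1))  # -0.333...
```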
Anyway, all this is quite tangential to the question of utility instability, which is about Beauty making a bet on Sunday and then reflecting on it during the experiment, even if no bets are proposed. According to thirdism, the probability of the coin being Heads changes on awakening, so, in order for Beauty not to regret the optimal bet she made on Sunday, her utility has to change as well. Hence utility instability.
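To put numbers on that, a minimal sketch assuming a one-unit bet on Heads at 1:1 odds made on Sunday:

```python
stake = 1.0

# Sunday: P(Heads) = 1/2 for halfers and thirders alike, so the bet is fair.
ev_sunday = 0.5 * stake - 0.5 * stake
print(ev_sunday)  # 0.0

# At an awakening the thirder credence is P(Heads) = 1/3. With unchanged
# utilities the very same, already-made bet now looks like a mistake:
ev_awakening = (1 / 3) * stake - (2 / 3) * stake
print(ev_awakening)  # -0.333...

# To keep endorsing the Sunday bet the thirder has to rescale utilities,
# e.g. halving the weight of the Tails loss because it is encountered on
# two awakenings -- which is the utility change referred to above.
ev_awakening_rescaled = (1 / 3) * stake - (2 / 3) * stake * 0.5
print(ev_awakening_rescaled)  # 0.0
```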
Honestly, I do not see any unlawful reasoning going on here. First of all, it’s certainly important to distinguish between a probability model and a strategy. The job of a probability model is simply to suggest the probabilities of certain events and to describe how those probabilities are affected by the realization of other events. The job of a strategy, on the other hand, is to guide decision making towards certain predefined goals.
My point is that the probabilities a model suggests you should have, based on the currently available evidence, do NOT necessarily have to match the probabilities that are relevant to your strategy and decisions. If Beauty is awake and doesn’t know whether today is the day her bet counts, it is in fact a rational strategy to behave and decide as if her bet counts today. If she knows that her bet only counts on Monday and her probability model says that „Today is Monday“ is relevant for H, then ideal rationality requires her to base her decision on P(H|Monday), because she knows that Monday is realized whenever her decision counts. This guarantees that on her Monday awakening, when her decision counts, she is calculating the probability of Heads based on all relevant evidence realized on that day.
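A minimal sketch of that calculation, using the standard thirder credences:

```python
# Thirder credences over the three possible awakenings.
p = {("Heads", "Monday"): 1 / 3,
     ("Tails", "Monday"): 1 / 3,
     ("Tails", "Tuesday"): 1 / 3}

# Conditioning on "Today is Monday" brings the thirder back to 1/2,
# which are the fair 1:1 odds for a decision that only counts on Monday.
p_monday = p[("Heads", "Monday")] + p[("Tails", "Monday")]
print(p[("Heads", "Monday")] / p_monday)  # 0.5
```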
It is true that the thirder model does not suggest such a strategy, but suggesting strategies, and therefore suggesting which probabilities are relevant for decisions, is not the job of a probability model anyway. The case of the Technicolor Beauty is similar: the strategy „only updating if Red“ is neither suggested nor hinted at by your model. All your model suggests are probabilities conditional on the realization of certain events. It can’t tell you to treat the observation „Red room“ as a realization of the event „There is an awakening in a red room“ while treating the observation „Blue room“ merely as a realization of the event „There is an awakening in a red or a blue room“ instead of „There is an awakening in a blue room“. The observation of a blue room is always a realization of both of these events, and it is your strategy of „tracking red“, and not your probability model, that tells you to prefer one over the other as the relevant evidence for calculating your probabilities. I have been thinking for a while, ever since I recently discovered this „Updating only if Red“ strategy for myself, about how it could be derived directly from the halfer model. But I honestly see no better justification for applying it than the plain fact that it proves more successful in the long run.
First of all, it’s certainly important to distinguish between a probability model and a strategy. The job of a probability model is simply to suggest the probabilities of certain events and to describe how those probabilities are affected by the realization of other events. The job of a strategy, on the other hand, is to guide decision making towards certain predefined goals.
Of course. As soon as we are talking about goals and strategies, we are not talking about just probabilities anymore; we are also talking about utilities and expected utilities. However, the probabilities do not suddenly change because of it. The probabilistic model stays the same; there are simply additional considerations on top of it.
My point is that the probabilities a model suggests you should have, based on the currently available evidence, do NOT necessarily have to match the probabilities that are relevant to your strategy and decisions.
Whether or not your probability model leads to optimal decision making is the test that allows you to falsify it. There are no separate “theoretical probabilities” and “decision-making probabilities”. Only the ones that guide your behaviour can be correct. What’s the point of a theory that is not applicable to practice, anyway?
If your model claims that the probability based on your evidence is 1⁄3, but optimal decision making happens when you act as if it’s 1⁄2, then your model is wrong and you should switch to a model that claims the probability is 1⁄2. That’s the whole reason why betting arguments are popular.
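The kind of frequency check such betting arguments appeal to, sketched for the standard Sleeping Beauty setup (one awakening on Heads, two on Tails): the same simulated runs give roughly 1⁄2 when scored per experiment and roughly 1⁄3 when scored per awakening.

```python
import random

# Frequency check for the standard Sleeping Beauty setup:
# one awakening on Heads, two awakenings on Tails.
random.seed(0)
experiments = 100_000
heads_experiments = 0
heads_awakenings = 0
total_awakenings = 0

for _ in range(experiments):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    total_awakenings += awakenings
    if heads:
        heads_experiments += 1
        heads_awakenings += 1

print(heads_experiments / experiments)      # ~0.5  : per-experiment frequency of Heads
print(heads_awakenings / total_awakenings)  # ~0.33 : per-awakening frequency of Heads
```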
If Beauty is awake and doesn’t know whether today is the day her bet counts, it is in fact a rational strategy to behave and decide as if her bet counts today.
Questions of what “counts” or “matters” are not the realm of probability. However, Beauty is free to adjust her utilities based on the specifics of the betting scheme.
All your model suggests are probabilities conditional on the realization of certain events.
The model says that
P(Heads|Red) = 1⁄3
P(Heads|Blue) = 1⁄3
but
P(Heads|Red or Blue) = 1⁄2
This obviously translates into a betting scheme: someone who bets on Tails only when the room is Red wins 2⁄3 of the time, and someone who bets on Tails only when the room is Blue wins 2⁄3 of the time, while someone who always bets on Tails wins only 1⁄2 of the time.
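A quick Monte Carlo sketch of that claim, assuming the Technicolor setup as described (on Heads a single awakening in a room that is Red or Blue with equal probability, on Tails one Red and one Blue awakening) and counting each experiment once:

```python
import random

random.seed(0)
n = 100_000
red_only = {"bets": 0, "wins": 0}   # bets on Tails only if a Red awakening occurs
always = {"bets": 0, "wins": 0}     # bets on Tails in every experiment

for _ in range(n):
    heads = random.random() < 0.5
    # Heads: one awakening, room colour random; Tails: one Red and one Blue awakening.
    rooms = [random.choice(["Red", "Blue"])] if heads else ["Red", "Blue"]
    if "Red" in rooms:
        red_only["bets"] += 1
        red_only["wins"] += (not heads)
    always["bets"] += 1
    always["wins"] += (not heads)

print(red_only["wins"] / red_only["bets"])  # ~0.667 -- wins 2/3 of the experiments in which it bets
print(always["wins"] / always["bets"])      # ~0.5   -- wins 1/2 of all experiments
```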
This leads to the conclusion that observing the event “Red”, as opposed to “Red or Blue”, is possible only for someone who has been expecting to observe the event “Red” in particular. Likewise, observing HTHHTTHT is possible only for a person who was expecting this particular sequence of coin tosses, rather than any combination of length 8. See Another Non-Anthropic Paradox: The Unsurprising Rareness of Rare Events.