Thanks for raising this important point. When modeling these situations carefully, we need to give terms like “today” a precise semantics that’s well-defined for the agent. With proper semantics established, we can examine what credences make sense under different ways of handling indexicals. Matthias Hild’s paper “Auto-epistemology and updating” demonstrates how to carefully construct time-indexed probability updates. We could then add centered worlds or other approaches for self-locating probabilities.
Some cases might lead to puzzles, particularly where epistemic fixed points don’t exist. This might push us toward modeling credences differently or finding other solutions. But once we properly formalize “today” as an event, we can work on satisfying richness conditions. Whether this leads to inconsistent attitudes depends on what constraints we place on those attitudes—something that reasonable people might disagree about, as debates over Sleeping Beauty suggest.
There is, in fact, no way to formalize “Today” in a setting where the participant doesn’t know which day it is, multiple days happen within the same iteration of the probability experiment, and the probability estimate is supposed to differ from day to day. The experiment I described demonstrates this pretty well.
The framework of centered possible worlds is deeply flawed and completely unjustified. It essentially talks about a different experiment than the stated one, or about a different function than probability.
For your purposes, however, this isn’t particularly important. All you need is to explicitly add the requirement that propositions be well-defined events. This will save you from all such paradoxical cases.
You might be interested in this paper by Wolfgang Spohn on auto-epistemology and Sleeping Beauty (and related) problems (Sleeping Beauty starts on p. 388). Auto-epistemic models have more machinery than the basic model described in this post, but I’m not sure there’s anything special about your example that prevents it from being modeled in a similar way.
Sleeping Beauty is a more subtle problem, so it’s less obvious why the application of centered possible worlds fails there.
But in principle we can construct a similar argument. If we suppose that, in terms of the paper, one’s epistemic state on awakening in Sleeping Beauty should follow the function P’ instead of P, we get ourselves into this precarious situation:
P’(Today is Monday|Tails) = P’(Today is Tuesday|Tails) = 1⁄2
and since this estimate holds at each of the two awakenings, treating them as independent samples gives:
P’(At Least One Awakening Happens On Monday|Tails) = 1 - P’(Today is Tuesday|Tails)^2 = 3⁄4
The actual credence, however, should be 100%. This gives an obvious opportunity to money pump the Beauty with bets on which days awakenings happen in the experiment.
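To make the pump concrete, here is a rough simulation sketch (the $0.80 ticket price and $1 payout are purely illustrative assumptions, not part of the original setup): at every awakening of a Tails run, a bookie buys from the Beauty a ticket paying $1 if at least one Monday awakening happens in that run. If she prices it by P’ at 3⁄4, she happily sells for $0.80, yet the event occurs in every Tails run, so she pays out every time and loses money with certainty.

```python
# Rough sketch of the money pump on Tails runs (illustrative numbers only).
TICKET_PRICE = 0.80            # what the bookie pays the Beauty per ticket
P_PRIME_VALUE = 1 - 0.5 ** 2   # 0.75: her P'-based valuation of the ticket
N_RUNS = 10_000

profit = 0.0
for _ in range(N_RUNS):
    # Tails: the Beauty is awakened twice, on Monday and on Tuesday.
    for day in ("Monday", "Tuesday"):
        if P_PRIME_VALUE < TICKET_PRICE:   # selling looks like +EV to her
            profit += TICKET_PRICE         # she collects the price...
            profit -= 1.0                  # ...but a Monday awakening happens
                                           # in every Tails run, so she pays out

print(profit / N_RUNS)   # about -0.40 per run: two guaranteed -$0.20 tickets
```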
This problem, of course, doesn’t happen when we simply keep using the function P, for which “Today is Monday” and “Today is Tuesday” are ill-defined, and instead have:
P(Monday Awakening Happens in the Experiment|Tails) = 1
P(Tuesday Awakening Happens in the Experiment|Tails) = 1
and
P(At Least One Awakening Happens On Monday|Tails) = P(Monday Awakening Happens in the Experiment|Tails) = 1
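A quick frequency check (again, just a sketch) shows that this matches the actual statistics of the experiment: over repeated Tails runs, a Monday awakening happens every single time, so a P-based valuation of the same ticket is $1 and the bookie has nothing to exploit.

```python
# Frequency check for the well-defined event (sketch, Tails runs only).
N_RUNS = 10_000
monday_runs = 0
for _ in range(N_RUNS):
    awakenings = {"Monday", "Tuesday"}   # Tails: both awakenings happen
    if "Monday" in awakenings:
        monday_runs += 1

print(monday_runs / N_RUNS)   # 1.0, matching P(Monday Awakening Happens|Tails) = 1
```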
But again, this is a more subtle situation. The initial example with the money in the envelope is superior in this regard, because it’s immediately clear that there is no coherent value for P’(Money in Envelope 1) in the first place.