I’m not sure if I fully understand why this is supposed to pose a problem, but maybe it helps to say that by “meaningfully consider” we mean something like: the proposition is actually part of the agent’s theory of the world. In your situation, since the agent is considering which envelope to take, I would guess that to satisfy richness she should have a credence in the proposition.
Okay, then I believe you definitely have a problem with this example, and I would be glad to show you where exactly.
I think (maybe?) what makes this case tricky or counterintuitive is that the agent seems to lack any basis for forming beliefs about which envelope contains the money—their memory is erased each time and the location depends on their previous (now forgotten) choice.
However, this doesn’t mean they can’t or don’t have credences about the envelope contents. From the agent’s subjective perspective upon waking, they might assign 0.5 credence to each envelope containing the money, reasoning that they have no information to favor either envelope.
Let’s suppose that the agent does exactly that: they believe that on every awakening there is a 50% chance that the money is in envelope 1. Then picking envelope 1 every time will, in expectation, lead to winning $350 per experiment.
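(To spell out the arithmetic behind that number, here is a minimal sketch of the agent’s own expected-value calculation. The prize amount isn’t something we’ve fixed here, so I’m assuming a hypothetical $100, which is what reproduces the $350.)

```python
# The agent's (mistaken) expected-value calculation, assuming a hypothetical
# prize of $100; the prize amount is chosen only because it reproduces $350.
n_awakenings = 7
credence_env1 = 0.5  # the agent's per-awakening credence that the money is in envelope 1
prize = 100          # assumed prize value (hypothetical)

expected_winnings = n_awakenings * credence_env1 * prize
print(expected_winnings)  # 350.0 -- what the agent expects from always picking envelope 1
```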
But this is clearly false. The experiment is specifically designed so that the agent can win money only on the first awakening. On every other day (6 times out of 7) the money will be in envelope 2.
So should the agent instead believe that there is only a 1⁄7 chance that the money is in envelope 1? Also no, and I suppose you can see why: as soon as they try to act on such a belief, it will turn out that 6 times out of 7 the money is in envelope 1.
In fact, we can notice that there is no coherent credence for the statement “Today the money is in envelope 1” that would not lead the agent to irrational behavior. This is because the term “Today” is not well-defined in the setting of such an experiment.
By which I mean that, within the same iteration of the experiment, propositions involving “Today” may not have a unique truth value. On the first day of the experiment the statement “Today the money is in envelope 1” may be true, while on the second day it may be false, so over a single iteration of the experiment that lasts 7 days the statement is simultaneously true and false!
Which means that “Today the money is in envelope 1” isn’t actually an event from the event space of the experiment and therefore doesn’t have a probability value, since a probability function’s domain is the event space.
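Here is a minimal sketch of what I mean. The 7-day schedule is illustrative (it’s what results when the agent always picks envelope 1, as above); the point is only that a genuine event is a yes/no question about the outcome alone, while the “Today” statement needs an extra argument:

```python
# Illustrative sketch, not the exact protocol: one run of the experiment is a
# single outcome, represented as the 7-day schedule of money locations that
# results when the agent always picks envelope 1 (money in envelope 1 on day 1
# only, as described above). An event must be a set of such outcomes.
outcome = (1, 2, 2, 2, 2, 2, 2)  # money location on days 1..7 of this run

def money_in_env1_on_some_day(run):
    # A genuine event: depends only on the outcome, so it has a probability.
    return 1 in run

def money_in_env1_today(run, day):
    # Not an event: its truth value needs an extra argument, the day,
    # and it varies within the very same outcome.
    return run[day - 1] == 1

print(money_in_env1_on_some_day(outcome))  # True
print(money_in_env1_today(outcome, 1))     # True
print(money_in_env1_today(outcome, 2))     # False -- same run, opposite answer
```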
But this is a nuance of formal probability theory that most people do not notice, or even ignore outright. Our intuitions are accustomed to situations where statements about “Today” can be represented as well-defined events from the event space, and so we assume that they can always be “meaningfully considered”.
And so if you try to base your decision theory framework on what feels meaningful to an agent instead of what is mathematically formalizable, you will end up with a bunch of paradoxical situations, like the one I’ve just described.
Thanks for raising this important point. When modeling these situations carefully, we need to give terms like “today” a precise semantics that’s well-defined for the agent. With proper semantics established, we can examine what credences make sense under different ways of handling indexicals. Matthias Hild’s paper “Auto-epistemology and updating” demonstrates how to carefully construct time-indexed probability updates. We could then add centered worlds or other approaches for self-locating probabilities.
Some cases might lead to puzzles, particularly where epistemic fixed points don’t exist. This might push us toward modeling credences differently or finding other solutions. But once we properly formalize “today” as an event, we can work on satisfying richness conditions. Whether this leads to inconsistent attitudes depends on what constraints we place on those attitudes, something that reasonable people might disagree about, as debates over Sleeping Beauty suggest.
There is, in fact, no way to formalize “Today” in a setting where the participant doesn’t know which day it is, multiple days happen within the same iteration of the probability experiment, and the probability estimate should be different on different days. The experiment I described demonstrates this pretty well.
The framework of centered possible worlds is deeply flawed and completely unjustified. It’s essentially talking about a different experiment instead of the stated one, or about a different function instead of a probability.
For your purposes, however, it’s not particularly important. All you need is to explicitly add the requirement that propositions be well-defined events. This will save you from all such paradoxical cases.
You might be interested in this paper by Wolfgang Spohn on auto-epistemology and Sleeping Beauty (and related) problems (Sleeping Beauty starts on p. 388). Auto-epistemic models have more machinery than the basic model described in this post has, but I’m not sure there’s anything special about your example that prevents it being modeled in a similar way.
Sleeping Beauty is a more subtle problem, so it’s less obvious why the application of centered possible worlds fails there.
But in principle we can construct a similar argument. If we suppose that, in terms of the paper, one’s epistemic state on awakening in Sleeping Beauty should follow the function P’ instead of P, we get ourselves into this precarious situation:
P’(Today is Monday|Tails) = P’(Today is Tuesday|Tails) = 1⁄2
and since this estimate applies to each of the two awakenings, treating them as independent gives:
P’(At Least One Awakening Happens On Monday|Tails) = 1 - P’(Today is Tuesday|Tails)^2 = 3⁄4
The actual credence, however, should be 100%, which gives an obvious opportunity to money-pump the Beauty with bets on which awakenings happen on which days of the experiment.
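To make the money pump concrete, here is one possible sketch; the specific contract and prices are my own illustration rather than anything fixed by the problem:

```python
# Beauty's valuation of a conditional contract that is void unless the coin
# landed Tails and pays $1 if at least one awakening of this run happens on
# Monday. The betting setup and prices are illustrative assumptions.
p_prime_valuation = 0.75  # her P'-based credence in the payoff condition, given Tails
selling_price = 0.80      # she is offered this much to take the other side of the bet

# Given Tails, how the sale looks by her lights vs. how it actually goes:
expected_gain_by_her_lights = selling_price - p_prime_valuation * 1.0
actual_gain = selling_price - 1.0  # under Tails a Monday awakening always occurs

print(round(expected_gain_by_her_lights, 2))  # 0.05  -- looks like free money to her
print(round(actual_gain, 2))                  # -0.2  -- a guaranteed loss whenever Tails
```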
This problem, of course, doesn’t arise when we simply keep using the function P, for which “Today is Monday” and “Today is Tuesday” are ill-defined, and we instead have:
P(Monday Awakening Happens in the Experiment|Tails) = 1
P(Tuesday Awakening Happens in the Experiment|Tails) = 1
and
P(At Least One Awakening Happens On Monday|Tails) = P(Monday Awakening Happens in the Experiment|Tails) = 1
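A minimal sketch of how these well-defined events look, if we model a run of the experiment as a single outcome carrying all of its awakenings:

```python
# Sketch of the well-defined formulation: for this purpose the experiment has
# just two outcomes, the Heads run and the Tails run, each with its full set
# of awakenings. Propositions are subsets of this outcome space.
awakening_days = {
    "Heads": ["Monday"],
    "Tails": ["Monday", "Tuesday"],
}

def monday_awakening_happens(run):
    return "Monday" in awakening_days[run]

# Conditioning on Tails leaves a single outcome, and the event holds in it, so
# P(Monday Awakening Happens in the Experiment | Tails) = 1.
print(monday_awakening_happens("Tails"))  # True
```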
But again, this is a more subtle situation. The initial example with the money in the envelope is superior in this regard, because there it’s immediately clear that there is no coherent value for P’(Money in Envelope 1) in the first place.