MSF Theory: Another Explanation of Subjectively Objective Probability

Before I read Probability is in the Mind and Probability is Subjectively Objective I was a realist about probabilities; I was a frequentist. After I read them, I was just confused. I couldn’t understand how a mind could accurately say that the probability of drawing a heart from a standard deck of playing cards was not 25%. It wasn’t until I tried to explain the contrast between my view and the subjective view in a comment on Probability is Subjectively Objective that I realized I was a subjective Bayesian all along. So, if you’ve read Probability is in the Mind and Probability is Subjectively Objective but still feel a little confused, hopefully this will help.

I should mention that I’m not sure that EY would agree with my view of probability, but the view to be presented agrees with EY’s view on at least these propositions:

  • Probability is always in a mind, not in the world.

  • The probability that an agent should ascribe to a proposition is directly related to that agent’s knowledge of the world.

  • There is only one correct probability to assign to a proposition given your partial knowledge of the world.

  • If there is no uncertainty, there is no probability.

And any position that accepts these propositions is a non-realist, subjective view of probability.


Imagine a pre-shuffled deck of playing cards and two agents (they don’t have to be humans), named “Johnny” and “Sally”, which are betting 1 dollar each on the suit of the top card. As everyone knows, 1/4 of the cards in a playing card deck are hearts. We will name this belief F1; F1 stands for “1/4 of the cards in the deck are hearts.” Johnny and Sally both believe F1. F1 is all that Johnny knows about the deck of cards, but Sally knows a little bit more about this deck. Sally also knows that 8 of the top 10 cards are hearts. Let F2 stand for “8 out of the 10 top cards are hearts.” Sally believes F2. Johnny doesn’t know whether or not F2. F1 and F2 are beliefs about the deck of cards, and they are either true or false.

So, Sally bets that the top card is a heart and Johnny bets against her, i.e., she puts her money on “The top card is a heart.” being true; he puts his money on “~The top card is a heart.” being true. After they make their bets, one could imagine Johnny making fun of Sally; he might say something like: “Are you nuts? You know, I have a 75% chance of winning. Only 1/4 of the cards are hearts; you can’t argue with that!” Sally might reply: “Don’t forget that the probability you assign to ‘~The top card is a heart.’ depends on what you know about the deck. I think you would agree with me that there is an 80% chance that ‘The top card is a heart.’ is true if you knew just a bit more about the state of the deck.”

To be undecided about a proposition is to not know which possible world you are in; am I in the possible world where that proposition is true, or in the one where it is false? Both Johnny and Sally are undecided about “The top card is a heart.”; their model of the world splits at that point of representation. Their knowledge is consistent with being in a possible world where the top card is a heart, or in a possible world where the top card is not a heart. The more statements they decide on, the smaller the configuration space of possible worlds they think they might find themselves in; deciding on a proposition takes a chunk off of that configuration space, and the content of that proposition determines the shape of the eliminated chunk; Sally’s and Johnny’s beliefs constrain their respective expected experiences, but not all the way to a point. The trick when constraining one’s space of viable worlds is to make sure that the real world is among the possible worlds that satisfy your beliefs. Sally still has the upper hand, because her space of viably possible worlds is smaller than Johnny’s. There are many more ways to arrange a standard deck of playing cards so that it satisfies F1 than there are ways to arrange it so that it satisfies both F1 and F2. To be clear, we don’t need to believe that possible worlds actually exist to accept this view of belief; we just need to believe that any agent capable of being undecided about a proposition is also capable of imagining alternative ways the world could consistently turn out to be, i.e., capable of imagining possible worlds.

For convenience, we will say that a possible world W is viable for an agent A if and only if W satisfies A’s background knowledge of decided propositions, i.e., A thinks that W might be the world it finds itself in.
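To make this concrete, here is a minimal sketch in Python, under the assumption that a possible world is modeled as an ordering of the deck and a decided proposition as a predicate over orderings; the names (`is_viable`, `johnny_background`, and so on) are mine, introduced only for illustration.

```python
# A hedged sketch: worlds are deck orderings (lists of (suit, rank) tuples),
# decided propositions are predicates over orderings, and an agent's background
# knowledge is the set of propositions it has decided on.

def is_viable(world, background_knowledge):
    """W is viable for agent A iff W satisfies every proposition A has decided on."""
    return all(proposition(world) for proposition in background_knowledge)

# The two propositions from the card example:
F1 = lambda deck: sum(suit == "hearts" for suit, rank in deck) == len(deck) // 4
F2 = lambda deck: sum(suit == "hearts" for suit, rank in deck[:10]) == 8

johnny_background = [F1]       # Johnny's viable worlds: every ordering satisfying F1
sally_background = [F1, F2]    # Sally's viable worlds: a strictly smaller set
```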

Of the possible worlds that satisfy F1, i.e., of the possible worlds where “1/4 of the cards are hearts” is true, 3/4 of them also satisfy “~The top card is a heart.” Since Johnny holds that F1, and since he has no further information that might put stronger restrictions on his space of viable worlds, he ascribes a 75% probability to “~The top card is a heart.” Sally, however, holds that F2 as well as F1. She knows that of the possible worlds that satisfy F1, only 1/4 of them satisfy “The top card is a heart.” But she holds a proposition that constrains her space of viably possible worlds even further, namely F2. Most of the possible worlds that satisfy F1 are eliminated as viable worlds if we hold that F2 as well, because most of the possible worlds that satisfy F1 don’t satisfy F2. Of the possible worlds that satisfy F2, exactly 80% of them satisfy “The top card is a heart.” So, duh, Sally assigns an 80% probability to “The top card is a heart.” They give that proposition different probabilities, and they are both right in assigning their respective probabilities; they don’t disagree about how to assign probabilities, they just have different resources for doing so in this case. P(~The top card is a heart|F1) really is 75% and P(The top card is a heart|F2) really is 80%.
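Since nobody can count all 52! orderings directly, here is a hedged Monte Carlo sketch of the same counting argument: sample orderings uniformly from each agent’s viable worlds and record how often the proposition of interest holds. The sampler names and the trial count are my own choices; the two estimates should land near 75% and 80%.

```python
import random

HEARTS = [("hearts", r) for r in range(13)]
OTHERS = [(s, r) for s in ("spades", "diamonds", "clubs") for r in range(13)]

def sample_world_for_johnny():
    """A uniform draw from Johnny's viable worlds: every ordering of a
    standard deck satisfies F1, so a plain shuffle suffices."""
    deck = HEARTS + OTHERS
    random.shuffle(deck)
    return deck

def sample_world_for_sally():
    """A uniform draw from Sally's viable worlds: orderings in which exactly
    8 of the top 10 cards are hearts (F2), built directly rather than by
    rejection, since F2-worlds are rare among all shuffles."""
    top10 = random.sample(HEARTS, 8) + random.sample(OTHERS, 2)
    random.shuffle(top10)
    rest = [c for c in HEARTS + OTHERS if c not in top10]
    random.shuffle(rest)
    return top10 + rest

def satisfaction_frequency(sample_world, proposition, trials=50_000):
    """Estimate the fraction of the agent's viable worlds satisfying the proposition."""
    return sum(proposition(sample_world()) for _ in range(trials)) / trials

top_is_heart = lambda deck: deck[0][0] == "hearts"

print(satisfaction_frequency(sample_world_for_johnny, lambda d: not top_is_heart(d)))  # ~0.75
print(satisfaction_frequency(sample_world_for_sally, top_is_heart))                    # ~0.80
```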

This setup makes it clear (to me at least) that the right probability to assign to a proposition depends on what you know. The more you know, i.e., the more you constrain the space of worlds you think you might be in, the more useful the probability you assign. The probability that an agent should ascribe to a proposition is directly related to that agent’s knowledge of the world.

This setup also makes it easy to see how an agent can be wrong about the probability it assigns to a proposition given its background knowledge. Imagine a third agent, named “Billy”, that has the same information as Sally, but says that there’s a 99% chance of “The top card is a heart.” Billy doesn’t have any information that further constrains the possible worlds he thinks he might find himself in; he’s just wrong about the fraction of possible worlds that satisfy F2 that also satisfy “The top card is a heart.” Of all the possible worlds that satisfy F2, exactly 80% of them satisfy “The top card is a heart.”, no more, no less. There is only one correct probability to assign to a proposition given your partial knowledge.

The last benefit of this way of talking I’ll mention is that it makes probability’s dependence on ignorance clear. We can imagine another agent that knows the truth value of every proposition; let’s call him “FSM”. There is only one possible world that satisfies all of FSM’s background knowledge; the only viable world for FSM is the real world. Of the possible worlds that satisfy FSM’s background knowledge, either all of them satisfy “The top card is a heart.” or none of them do, since there is only one viable world for FSM. So the only probabilities FSM can assign to “The top card is a heart.” are 1 or 0. In fact, those are the only probabilities FSM can assign to any proposition. If there is no uncertainty, there is no probability.
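In the same vocabulary: once an agent’s background knowledge leaves exactly one viable world, any proposition’s satisfaction frequency over that set can only be 0 or 1. A small illustrative sketch (the “real world” below is an invented stand-in, not a full deck):

```python
def probability_over(viable_worlds, proposition):
    """Fraction of the agent's viable worlds that satisfy the proposition."""
    return sum(proposition(w) for w in viable_worlds) / len(viable_worlds)

# FSM's knowledge pins the world down completely: one viable world remains.
the_real_world = [("hearts", 7), ("spades", 2), ("clubs", 11)]   # invented stand-in ordering
fsm_viable_worlds = [the_real_world]

top_is_heart = lambda deck: deck[0][0] == "hearts"
print(probability_over(fsm_viable_worlds, top_is_heart))   # 1.0 here; always exactly 0.0 or 1.0
```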

The world knows whether or not any given proposition is true (assuming determinism). The world itself is never uncertain, only the parts of the world that we call agents can be uncertain. Hence, Probability is always in a mind, not in the world. The probabilities that the universe assigns to a proposition are always 1 or 0, for the same reasons FSM only assigns a 1 or 0, and 1 and 0 aren’t really probabilities.

In conclusion, I’ll risk the hypothesis that: where 0≤x≤1, “P(a|b)=x” is true if and only if, of the possible worlds that satisfy “b”, a fraction x of them also satisfy “a”. Probabilities are propositional attitudes, and the probability value (or range of values) you assign to a proposition is representative of the fraction of possible worlds you find viable that satisfy that proposition. You may be wrong about the value of that fraction, and as a result you may be wrong about the probability you assign.

We may call the position summarized by the hypothesis above “Modal Satisfaction Frequency theory”, or “MSF theory”.
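As a closing illustration, here is a hedged sketch of that hypothesis over a finite space of possible worlds; the function name `msf_probability` and the toy four-card deck are mine. P(a|b) is read off as the fraction of the b-satisfying worlds that also satisfy a.

```python
from itertools import permutations

def msf_probability(possible_worlds, a, b):
    """P(a|b) as a modal satisfaction frequency: of the worlds satisfying b,
    the fraction that also satisfy a."""
    b_worlds = [w for w in possible_worlds if b(w)]
    if not b_worlds:
        raise ValueError("no possible world satisfies b")
    return sum(a(w) for w in b_worlds) / len(b_worlds)

# Toy example: a four-card deck with a single heart, every ordering equally possible.
worlds = list(permutations(["hearts", "spades", "diamonds", "clubs"]))
print(msf_probability(worlds,
                      a=lambda w: w[0] == "hearts",   # "The top card is a heart."
                      b=lambda w: True))              # trivial background knowledge
# -> 0.25, matching 1/4 of the cards being hearts
```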