Believing others’ priors

Meet the Bayesians

In one way of looking at Bayesian reasoners, there are a bunch of possible worlds and a bunch of people, who start out with some guesses about what possible world we’re in. Everyone knows everyone else’s initial guesses. As evidence comes in, agents change their guesses about which world they’re in via Bayesian updating.

The Bayesians can share information just by sharing how their beliefs have changed.

“Bob initially thought that last Monday would be sunny with probability 0.8, but now he thinks it was sunny with probability 0.9, so he must have seen evidence that he judges as 4/9ths as likely if it wasn’t sunny as if it was.”
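To spell out the arithmetic behind the 4/9, here’s a minimal sketch in odds form (the function is mine, purely for illustration):

```python
def implied_likelihood_ratio(prior, posterior):
    """Bayes factor P(evidence | not sunny) / P(evidence | sunny) implied by
    moving from `prior` to `posterior` in P(sunny)."""
    prior_odds = prior / (1 - prior)              # 0.8 -> odds of 4
    posterior_odds = posterior / (1 - posterior)  # 0.9 -> odds of 9
    # Bayes' rule in odds form:
    # posterior_odds = prior_odds * P(E | sunny) / P(E | not sunny)
    return prior_odds / posterior_odds

print(implied_likelihood_ratio(0.8, 0.9))  # 0.444... = 4/9
```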

If they have the same priors, they’ll converge to the same beliefs. But if they don’t, it seems they can agree to disagree. This is a bit frustrating, because we don’t want people to ignore our very convincing evidence just because they’ve gotten away with having a stupid weird prior.

What can we say about which priors are permissible? Robin Hanson offers an argument that we must either (a) believe our prior was created by a special process that correlated it with the truth more than everyone else’s, or (b) have the same prior as everyone else.

Meet the pre-Bayesians

How does that argument go? Roughly, Hanson describes a slightly more nuanced set of reasoners: the pre-Bayesians. The pre-Bayesians are not only uncertain about what world they’re in, but also about what everyone’s priors are.

These uncertainties can be tangled together (the joint distribution doesn’t have to factorise into their beliefs about everyone’s priors and their beliefs about worlds). Facts about the world can change their opinions about what prior assignments people have.

Hanson then imposes a pre-rationality condition: if you find out what priors everyone has, you should agree with your prior about how likely different worlds are. In other words, you should trust your prior in the future. Once you have this condition, it seems that it’s impossible to both (a) believe that some other people’s priors were generated in a way that makes them as likely to be good as yours and (b) have different priors from those people.
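To make the condition concrete, here’s a toy sketch in my own encoding (not Hanson’s notation): a pre-prior is a joint table over worlds and the priors I might have ended up with, and pre-rationality says that conditioning that table on my prior should give back exactly that prior over worlds.

```python
# Toy check of the pre-rationality condition. A pre-prior is a joint
# distribution over (world, prior assignment); pre-rationality requires that
# conditioning on a prior assignment recovers exactly that prior over worlds.

worlds = ["sunny", "rainy"]

# Two priors I might have ended up with.
priors = {
    "confident": {"sunny": 0.9, "rainy": 0.1},
    "uniform":   {"sunny": 0.5, "rainy": 0.5},
}

# A pre-prior P(world, my prior); the numbers are purely illustrative.
pre_prior = {
    ("sunny", "confident"): 0.45, ("rainy", "confident"): 0.05,
    ("sunny", "uniform"):   0.25, ("rainy", "uniform"):   0.25,
}

def satisfies_pre_rationality(pre_prior, priors, tol=1e-9):
    for label, prior in priors.items():
        mass = sum(pre_prior[(w, label)] for w in worlds)
        for w in worlds:
            if abs(pre_prior[(w, label)] / mass - prior[w]) > tol:
                return False
    return True

print(satisfies_pre_rationality(pre_prior, priors))  # True for these numbers
```

Notice what satisfying the condition costs: conditional on my prior being the confident one, the pre-prior has to put most of its mass on the worlds that prior favours.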

Let’s dig into the sort of things this pre-rationality condition commits you to.

Consider the class of worlds where you are generated by a machine that randomly generates a prior and sticks it in your head. The pre-rationality rule says that worlds where this randomly-generated prior describes the world well are more likely than worlds where it is a poor description.

So if I pop out with a very certain belief that I have eleven toes, such that no amount of visual evidence that I have ten toes can shake my faith, the pre-prior should indeed place more weight on those worlds where I have eleven toes and various optical trickery conspires to make it look like I have ten.

If this seems worrying to you, consider that you may be asking too much of this pre-rationality condition. After all, if you have a weird prior, you have a weird prior. In the machine-generating-random-priors world, you already believe that your prior is a good fit for the world. That’s what it is to have a prior. Yes, according to our actual posteriors it seems like there should be no correlation between these random priors and the world they’re in, but asking the pre-rationality condition to make our actual beliefs win out seems like a pretty illicit move.

Another worry is that it seems there’s some spooky action-at-a-distance going on between the pre-rationality condition and the assignment of priors. Once everyone has their priors, the pre-rationality condition is powerless to change them. So how is the pre-rationality condition making it so that everyone has the same prior?

I claim that actually, this presentation of the pre-Bayesian proof is not quite right. According to me, if I’m a Bayesian and believe our priors are equally good, then we must have the same priors. If I’m a pre-Bayesian and believe our priors are equally good, then I must believe that your prior averages out to mine. This latter move is open to the pre-Bayesian (who has uncertainty about priors) but not to the Bayesian (who knows the priors).

I’ll make an argument purely within Bayesianism from believing our priors are equally good to having the same prior, and then we’ll see how beliefs about priors come in for a pre-Bayesian.

Bayesian prior equality

To get this off the ground, I want to make precise the claim of believing someone’s priors are as good as yours. I’m going to look at 3 ways of doing this. Note that Hanson doesn’t suggest a particular one, so he doesn’t have to accept any of these as what he means, and that might change how well my argument works.

Let’s suppose my prior is p and yours is q. Note, these are fixed functions, not references pointing at my prior and your prior. In the Bayesian framework, we just have our priors, end of story. We don’t reason about cases where our priors were different.

Let’s suppose score is a strictly proper scoring rule (if you don’t know what that means, I’ll explain in a moment). score takes in a probability distribution over a random variable and an actual value for that random variable, and it gives more points the more of the probability distribution’s mass is near the actual value. For it to be strictly proper, I must uniquely maximise my expected score by reporting my true probability distribution. That is, if my true distribution is p and I report f, then E_p[score(f, X)] is uniquely maximised when f = p.
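The text doesn’t fix a particular scoring rule, so as a hedged example here’s the log score, which is strictly proper; the sketch shows the honest report winning in expectation:

```python
import math

# One strictly proper scoring rule: the log score.

def log_score(reported, actual):
    """Reward the probability the reported distribution gave to what happened."""
    return math.log(reported[actual])

def expected_score(true_dist, reported):
    """Expected score of `reported` when X is drawn from `true_dist`."""
    return sum(prob * log_score(reported, x) for x, prob in true_dist.items())

p = {"heads": 0.7, "tails": 0.3}  # my true distribution over X

for f in ({"heads": 0.5, "tails": 0.5},
          {"heads": 0.7, "tails": 0.3},   # the honest report f = p
          {"heads": 0.9, "tails": 0.1}):
    print(f["heads"], round(expected_score(p, f), 4))
# Only the honest report f = p attains the maximum expected score.
```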

Let’s also suppose my posterior is p|B, that is (using notation a bit loosely) my prior probability conditioned on some background information B.

Here are some attempts to make precise the claim that someone’s prior is as good as mine:

  1. For all X, E_p[score(p, X)] = E_p[score(q, X)].

  2. For all X, E_{p|B}[score(p|B, X)] = E_{p|B}[score(q|B, X)].

  3. For all X, E_{p|B}[score(p, X)] = E_{p|B}[score(q, X)].

(1) says that, according to my prior, your prior is as good as mine. By the definition of a strictly proper scoring rule, this means that your prior is the same as mine.
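For concreteness, take the log score, score(f, x) = log f(x), as our strictly proper rule (any other strictly proper rule gives the same conclusion by the defining property). Then, for X fine-grained enough to distinguish the possibilities,

E_p[score(q, X)] − E_p[score(p, X)] = Σₓ p(x) log q(x) − Σₓ p(x) log p(x) = −KL(p ∥ q) ≤ 0,

with equality if and only if q = p (Gibbs’ inequality). So demanding equality in (1) really does force q = p.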

(2) says that, according to my posterior, the posterior you’d have with my current information is as good as the posterior I have. By the definition of the proper scoring rule, this means that your posterior is equal to my posterior. This is a bit broader than (1), and allows your prior to have already “priced in” some information that I now have.

(3) says that given what we know now, your prior was as good as mine.

That rules out q = p|B. That would be a prior that’s better than mine: it’s just what you get from mine when you’re already certain you’ll observe some evidence (like an apple falling in 1663). Observing that evidence doesn’t change your beliefs.

In general, it can’t be the case that your prior rated B as more likely than mine did, which can be seen by taking X = B.
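Concretely, take the log score and let X be the indicator of B. Under p|B the event B is certain, so E_{p|B}[score(q, X)] = log q(B) and E_{p|B}[score(p, X)] = log p(B). Equality in (3) then forces q(B) = p(B); in particular, your prior can’t have rated B as more likely than mine did.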

On future events, your prior can match mine, or it can diverge from my posterior by as much as my prior does, just in the opposite direction.

I don’t really like (3) because, while it says your prior was as good as mine in the past, it still lets me think that, even after you update, your prior will do worse than mine.

That leaves us with (1) and (2), then. If either of them is our precise notion, it follows quickly that we have common priors.

This is just a notion of logical consistency though; I don’t have room for believing that our prior-generating processes make yours as likely to be true as mine. It’s just that if the probability distribution that happens to be your prior appears to me as good as the probability distribution that happens to be my prior, they are the same probability distribution.

Pre-Bayesian prior equality

How can a pre-Bayesian make precise the claim that your prior is as good as mine?

Here, let pᵢ be my prior as a reference, rather than as a concrete probability distribution. Claims about pᵢ are claims about my prior, no matter what function that actually ends up being. So, for example, claiming that pᵢ scores well is claiming that, as we look at different worlds, we see it is likely that my prior is a well-adapted prior for that specific world. In contrast, a claim that p scores well would be a claim that the actual world looks a lot like p.

Similarly, pⱼ is your prior as a reference. Let p be a vector assigning a prior to each agent.

Let f be my pre-prior. That is, my initial beliefs over combinations of worlds and prior assignments. Similarly to above, let f|B be my pre-posterior (a bit of an awkward term, I admit).

For ease of exposition (and I don’t think entirely unreasonably), I’m going to imagine that I know my prior precisely. That is f(w, p) = 0 if pᵢ ≠ p.

Here are some ways of making the belief that your prior is as good as mine precise in the pre-Bayesian framework.

  1. For all X, E_p[score(p, X)] = E_f[score(pⱼ, X)].

  2. For all X, E_{p|B}[score(p|B, X)] = E_{f|B}[score(pⱼ|B, X)].

  3. For all X, E_{p|B}[score(p, X)] = E_{f|B}[score(pⱼ, X)].

On the LHS, the expectation uses p rather than f, because of the pre-rationality condition. Knowing my prior, my updated pre-prior agrees with it about the probability of the ground events. But I still don’t know your prior, so I have to use f on the RHS to “expect” over the event and your prior itself.
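Here’s a toy numerical sketch of condition (1), with two worlds, the log score, and entirely made-up numbers: my prior is uniform, and under my pre-prior your prior leans towards whichever world is actual some of the time and away from it the rest, tuned so that the two sides of (1) come out equal.

```python
import math

# Toy sketch of condition (1): two worlds, log score, X = the identity of the
# world. My prior is uniform; your prior is unknown to me, and under my
# pre-prior it leans towards the actual world with probability s and away
# from it with probability 1 - s. All numbers are illustrative only.

worlds = ["w0", "w1"]
p = {"w0": 0.5, "w1": 0.5}        # my prior (I know it exactly)
q_w0 = {"w0": 0.7, "w1": 0.3}     # a prior leaning towards w0
q_w1 = {"w0": 0.3, "w1": 0.7}     # a prior leaning towards w1

def log_score(dist, actual):
    return math.log(dist[actual])

# Pick s so that a prior leaning towards the actual world with probability s
# has the same expected log score as my uniform prior.
s = (math.log(0.5) - math.log(0.3)) / (math.log(0.7) - math.log(0.3))

# My pre-prior f over (world, your prior). Its marginal over worlds is
# (0.5, 0.5), matching my prior, as pre-rationality requires given that I
# know my own prior.
f = {
    ("w0", "towards"): 0.5 * s, ("w0", "away"): 0.5 * (1 - s),
    ("w1", "towards"): 0.5 * s, ("w1", "away"): 0.5 * (1 - s),
}

def your_prior(world, lean):
    towards = q_w0 if world == "w0" else q_w1
    away = q_w1 if world == "w0" else q_w0
    return towards if lean == "towards" else away

# LHS of (1): expected score of my prior, under my prior.
lhs = sum(p[w] * log_score(p, w) for w in worlds)

# RHS of (1): expected score of your (unknown) prior, under my pre-prior f.
rhs = sum(prob * log_score(your_prior(w, lean), w)
          for (w, lean), prob in f.items())

print(round(s, 3))                   # ~0.603
print(round(lhs, 6), round(rhs, 6))  # both ~ -0.693147
```

In this sketch your prior is never equal to mine, but it beats mine in the “towards” scenarios and loses in the “away” scenarios, and my pre-prior weights those so they cancel exactly.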

(1) says that, according to my pre-prior, your prior is as good as mine in expectation. The proper scoring rule only says that my prior is the unique maximiser among fixed functions, and pⱼ is not a fixed function: it varies with the world and the prior assignment. So the equality doesn’t force your prior to be mine, and since I’m still not certain which world we’re in (or what your prior is), I can’t simply update my beliefs towards yours.

Given the equality, I can’t prefer your prior to mine overall, but I can think your prior is more correlated with the truth than mine in some scenarios and less so in others.

(2) says that, according to my pre-posterior, your prior conditioned on my info is, in expectation, as good as my prior conditioned on my info.

I like this better than (1). Evidence in the real world leads me to beliefs about the prior production mechanisms (like genes, nurture and so on). These don’t seem to give a good reason for my innate beliefs to be better than anyone else’s. Therefore, I believe your prior is probably as good as mine on average.

But note, I don’t actually know what your prior is. It’s just that I believe we probably share similar priors. The spooky action-at-a-distance is eliminated. This is just (again) a claim about consistent beliefs: if I believe that your prior got generated in a way that made it as good as mine, then I must believe it’s not too divergent from mine.

(3) says that, given what we now know, I think your prior is no better or worse than mine in expectation. This is about as unpalatable in the pre-Bayesian case as in the Bayesian one.

So, on either (1) or (2), I believe that your prior will, on average, do as well as mine. Even believing that your prior performs exactly as well as mine, I might not know exactly which prior you have; I just know that the scenarios where it does worse are matched by an equal weight of scenarios where it does better. So I can’t appeal to my prior as a good reason for us to diverge.