Aumann-agreement is common

Thank you to Justis Mills for proofreading and feedback. This post is also available on my substack.

Aumann’s agreement theorem is a family of theorems which say that if people trust each other and know each other’s opinions, then they agree with each other. Or phrased another way, if people maintain trust with each other, then they can reach agreement. (And some variants of the theorem, which take computational factors into consideration, suggest they can do so quite rapidly.)

The original proof is rather formal and confusing, but a simpler heuristic argument is this: for an honest, rational agent, the mere fact of professing an opinion can be strong evidence to another rational agent, because if the speaker’s probabilities differ from the speaker’s prior, then they must have seen corresponding evidence to justify that opinion.

Some people find this confusing, and feel like it must be wrong because it doesn’t apply to most disagreements. I think these people are wrong because they are not sufficiently expansive in what they think of as a disagreement. The notion of disagreement that Aumann’s agreement theorem applies to is when the people assign different probabilities to events; this is a quite inclusive notion which covers many things that we don’t typically think of as disagreements, including cases where one party has information about a topic and the other party has no information.

My vacation in Norway relied tons on Aumann agreements

Recently, I had a vacation in Norway with my wife.

In order to get there, and to get around, we needed transport. At first we disagreed with people who provided transport there, as we didn’t know of many specific means of transport, only vaguely that there would be some planes and ships, without knowing which ones. But my wife had heard that there was something called the “Oslo ferry”, so we Aumann-agreed that this was an option, and decided to investigate further.

We disagreed with the company that provided the Oslo ferry, as we didn’t know what their website was, so we asked Google, and it provided some options for what the ferry might be, and we Aumann-agreed with Google and then went investigating from there. One website we found claimed to sell tickets to the ferry; at first we disagreed with the website about when we could travel, as we didn’t know the times of the ferry, but then we read which times it claimed were available, and Aumann-updated to that.

We also had to find some things to do in Norway. Luckily for us, some people at OpenAI had noticed that everyone had huge disagreements with the internet as nobody had really memorized the internet, and they thought that they could gain some value by resolving that disagreement, so they Aumann-agreed with the internet by stuffing it into a neural network called ChatGPT. At first, ChatGPT disagreed with us about what to visit in Norway and suggested some things we were not really interested in, but we informed it about our interests, and then it quickly Aumann-agreed with us and proposed some other things that were more interesting.

One of the things we visited was a museum for an adventurer who built a raft and sailed in the ocean. Prior to visiting the museum, we had numerous disagreements with it, as e.g. we didn’t know that one of the people on the raft had fallen in the ocean and had to be rescued. But the museum told us this was the case, so we Aumann-agreed to believe it. Presumably, the museum learnt about it through Aumann-agreeing with the people on the raft.

One example of an erroneous Aumann agreement was with the train company Vy. They had said that they could get us a train ticket on the Bergen train, and we had Aumann-agreed with that. However, due to a storm, their train tracks were broken, and the company website kept promising availability on the train until the last moment, so we didn’t get corrected by Vy.

But we were not saved by empirically seeing the damaged tracks, or by rationally reasoning that the train was unavailable. Instead, we were saved because we told someone about our plans to take the Bergen train, expecting them to Aumann-agree to a belief that we would take the train; instead, they kept disagreeing, and told us that the line was flooded and the train would be cancelled. This made us Aumann-agree that we had to find some other method, so we asked Google whether there were any flights, and it suggested some that we Aumann-agreed to.

Later, I’ve told my dad and now also you about the trip. Prior to talking about it, I expect you disagreed as you didn’t know anything about it, but at least I’m pretty sure my dad Aumann-agreed to the things I told him, and I suspect you did so too.

Aumannian disagreements quickly disappear, and so “disagreement” connotes/denotes non-Aumannian disagreements

The disagreements mentioned in my story all happened between parties with reasonable levels of trust, and they mostly involved one party lacking information and the other party having information, so they were quickly resolved by transferring that information. Even noticing the specifics of the disagreement is sufficient to transfer the information and resolve it.

Meanwhile, in politics, disagreements often occur between people who have conflicting goals, where it is reasonable to suspect that one side is misrepresenting things because they care more about gaining power than accurately informing the people they talk to.

Because the preconditions for Aumannian agreement don’t hold when you suspect the counterparty to be biased, such disagreements won’t be resolved so quickly, and instead stick around long-term. But if we form our opinions about what disagreements are like from what disagreements stick around long-term, then that means we are filtering out the disagreements where Aumann’s conditions hold.

Thus, “disagreement” comes to connote (or maybe even denote), “difference in opinion between people who don’t trust each other” rather than simply “difference in opinion”.

Most Aumannian disagreements are a simple lack of awareness

The Bayesian paradigm doesn’t fundamentally[1] distinguish disbelief in a proposition due to having no information about it, versus due to having observed contradictory information. Consider e.g. picking two random people and making a statement such as “Marv Elsher is dating Abrielle Levine” about them. You have no idea who these people are, and most people are not dating each other, so you should rationally assign this a very low probability.

But that’s not because you actively disbelieve it from contradictory evidence! In fact you might not even think of yourself as having had a belief about it ahead of time. If there is in fact a Marv Elsher who is dating Abrielle Levine, then Marv assigns a very high probability to this statement, while you wouldn’t even have thought of it without this post.
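A tiny numeric sketch of this point (the base rate and evidence strength are made-up numbers, purely for illustration): one agent reaches a near-zero probability from having no information beyond a base rate, another reaches it by updating away from 50% on strong contradictory evidence, and the resulting beliefs are numerically indistinguishable.

```python
import math

def log_odds(p):
    # Convert a probability to log-odds (natural log).
    return math.log(p / (1 - p))

def prob(lo):
    # Convert log-odds back to a probability.
    return 1 / (1 + math.exp(-lo))

# Agent A has never heard of the couple; they just assign a
# (hypothetical) base rate that two random adults are dating.
p_no_info = 1e-6

# Agent B actively considered the claim: starting from a 50% prior,
# they observed contradictory evidence worth about -ln(1e6) log-odds.
p_contradicted = prob(log_odds(0.5) - math.log(1e6))

# The two probabilities are essentially equal; the probability
# distribution itself does not record *how* each agent got there.
print(p_no_info, p_contradicted)
```

Both agents "disbelieve" the statement to the same degree, even though only one of them has ever seen evidence about it.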

If you consider all of the cases where people assign different probabilities to symbolically expressible propositions, then almost all of them will be something along these lines, because there’s tons of random local information which you simply don’t have access to. Thus, if you want to think of the typical case of a disagreement that Aumann’s agreement theorem refers to, you should think “Person A has observed X and Person B does not even have awareness of what’s going on around X, let alone any evidence on X itself”.

Aumann agreement is extremely efficient and powerful

For most of the updates that happened during the vacation, it would simply not be feasible to verify things by oneself. Often they concerned things that were very far away, both in space and time. Sometimes they concerned things that happened in the past where it wouldn’t even be physically possible to verify. But even for the things you could verify, it would take orders of magnitude more time and resources than to just Aumann-update.

Aumann agreement is about pooling, not moderation

In my examples, people generally didn’t converge to a compromise position; instead they adopted the counterparty’s positions wholesale. This is generally the correct picture to have in mind for Aumann agreement. While the exact way you update can vary depending on the prior and the evidence, one simple example I like is this:

You both start with your log-odds being some vector x, according to a shared prior (i.e. you start out agreeing). You then observe some evidence y, updating your log-odds to x+y, while they observe some independent evidence z, updating their log-odds to x+z. If you exchange all your information, this updates your shared log-odds to x+y+z, which is most likely an even more radical departure from x than either x+y or x+z alone.
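This pooling behavior can be sketched with a single proposition (the evidence strengths here are arbitrary illustrative numbers, not from any real scenario):

```python
import math

def prob_from_log_odds(lo):
    # Convert log-odds back to a probability.
    return 1 / (1 + math.exp(-lo))

x = 0.0   # shared prior log-odds: a 50% probability
y = 2.0   # your independent evidence, in log-odds units
z = 1.5   # their independent evidence

yours = prob_from_log_odds(x + y)       # individually ~0.88
theirs = prob_from_log_odds(x + z)      # individually ~0.82

# After exchanging all information, evidence adds in log-odds space:
pooled = prob_from_log_odds(x + y + z)  # ~0.97

print(yours, theirs, pooled)
```

Note that the pooled belief (~0.97) is more extreme than either party's individual belief, not a compromise between them.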

Aumann conditions information on trust

Surely sometimes it seems like Aumann agreement should cause people to moderate, right? Like in politics, if you have spent a lot of time absorbing one party’s ideology, and your interlocutor has spent a lot of time absorbing the other party’s ideology, but you then poke lots of holes in each other’s arguments?

I think in this case, learning that there are holes in the arguments you learned from your party may be reason to doubt the trustworthiness of your party, especially when they cannot fix those holes. Since you Aumann-updated a great deal on your party’s view specifically because you trusted them, this should also make you un-update away from their views, presumably moderating them.

(I think this has massive implications for collective epistemics, and I’ve gradually been developing a theory of collective rationality based on this, but it’s not finished yet and the purpose of this post is merely to grok the agreement theorem rather than to lay out that theory.)

There may also be less elaborate ways in which you might moderate due to Aumann agreement, e.g. if contradictory information cancels out.

A lot of Aumann-updates are on promises, history or universals

Many of the most obvious Aumann updates in my story were about promises; for instance that an interlocutor would provide me a certain transport at a certain time from one location to another.

One might think this suggests that promises have a unique link to Aumann’s agreement theorem, but I think this is actually because promises are an unusually prevalent type of information due to the combo of:

  • People’s capacity to make reliable claims about them.

  • Being useful enough in practice to be worth sharing.

  • Covering a diverse and open-ended set of possibilities.

For instance, if you promise me a sandwich in your kitchen, then you can ensure that your promise is true by paying rent to keep ownership of your kitchen, buying and storing ingredients for the sandwich so they are ready for assembly, and then assembling the sandwich for me when it is time.

Meanwhile, if you tell me that there is an available sandwich in someone else’s kitchen, then because you don’t maintain control over that kitchen, it might cease to be true once we actually reach the time when I need it, so you can’t reliably make claims about it. Furthermore, even if you could, I would probably not get away with taking it, so it would not be useful to me.

You could probably make reasonably reliable claims about certain things you’ve seen in the past, but most of those are not very useful precisely because they happened in the past. For example, while I know from the museum that that guy on the raft fell in the water, I don’t have anything to use that for. That said, such claims are sometimes useful, e.g. to attribute outcomes to causes, or to make generalizations.

You can read a physics textbook and make a lot of useful Aumann updates from it, mainly because physics is a “universal” subject; but this also means that it is a closed subject with a bounded amount of information. There can’t be an “alternate physics” with alternate particles and strengths of attraction, in the same sense that there can be an “alternate plane company” with alternate flight times.

Promises, history and universals aren’t meant to be a complete taxonomy; they’re just patterns I’ve noticed.

  1. ^

    It is distinguished through the history of updating from prior to posterior, but the distinction is not “stored” anywhere in the probability distribution, so the beliefs themselves are treated the same, even if their histories are different.