# Aumann's Agreement Theorem

Last edit: 15 Sep 2020 19:57 UTC

Aumann’s agreement theorem, roughly speaking, says that two agents acting rationally (in a certain precise sense) and with common knowledge of each other’s beliefs cannot agree to disagree. More specifically, if two people are genuine Bayesians, share common priors, and have common knowledge of each other’s current probability assignments, then they must have equal probability assignments.
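As a toy illustration of the partition formulation of the theorem (this four-state example is my own, not from the page): under a uniform common prior, if each agent's posterior for an event is the same in every cell of their information partition, then that posterior value is common knowledge, and the theorem forces the two agents' values to coincide.

```python
from fractions import Fraction

# Hypothetical toy model: four states with a uniform common prior.
states = [1, 2, 3, 4]
prior = {s: Fraction(1, 4) for s in states}

part_a = [{1, 2}, {3, 4}]   # agent A's information partition
part_b = [{1, 3}, {2, 4}]   # agent B's information partition
event = {1, 4}              # the event both agents assess

def posterior(cell, event, prior):
    """P(event | cell) under the common prior."""
    p_cell = sum(prior[s] for s in cell)
    return sum(prior[s] for s in cell & event) / p_cell

# Each agent's set of possible posterior values, one per information cell.
# Here the meet of the two partitions is the whole state space, so a
# posterior that is constant across an agent's cells is common knowledge.
post_a = {posterior(c, event, prior) for c in part_a}
post_b = {posterior(c, event, prior) for c in part_b}
print(post_a, post_b)  # both are the single value 1/2
```

Both agents' posteriors are constant (at 1/2) across their cells, so both are common knowledge, and, as the theorem requires, they agree.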

Related tags and wikis: Disagreement, Modesty, Modesty argument, Aumann agreement, The Aumann Game

# Aumann-agreement is common

26 Aug 2023 20:22 UTC
64 points

# The Modesty Argument

10 Dec 2006 21:42 UTC
53 points

# Bayesian Judo

31 Jul 2007 5:53 UTC
88 points

# Aumann Agreement Game

9 Oct 2015 17:14 UTC
32 points

# The Aumann's agreement theorem game (guess 2/3 of the average)

9 Jun 2009 7:29 UTC
17 points

# Scott Aaronson: Common knowledge and Aumann's agreement theorem

17 Aug 2015 8:41 UTC
23 points
(www.scottaaronson.com)

# Aumann Agreement by Combat

5 Apr 2019 5:15 UTC
14 points
(sigbovik.org)

# Aumann voting; or, How to vote when you're ignorant

2 Apr 2009 18:54 UTC
12 points

# An explanation of Aumann's agreement theorem

7 Jul 2011 6:22 UTC
13 points

# Aumann's Agreement Revisited

27 Aug 2018 6:21 UTC
4 points

# Status Regulation and Anxious Underconfidence

16 Nov 2017 19:35 UTC
80 points

# Probability Space & Aumann Agreement

10 Dec 2009 21:57 UTC
52 points

# The Mechanics of Disagreement

10 Dec 2008 14:01 UTC
14 points

# Why is the surprisingly popular answer correct?

3 Feb 2017 16:24 UTC
43 points

# The Error of Crowds

1 Apr 2007 21:50 UTC
32 points

# Hashing out long-standing disagreements seems low-value to me

16 Feb 2023 6:20 UTC
126 points

# Fundamental Uncertainty: Chapter 3 - Why don't we agree on what's right?

25 Jun 2022 17:50 UTC
26 points

# Don't Double-Crux With Suicide Rock

1 Jan 2020 19:02 UTC
81 points

# Dangers of deference

8 Jan 2023 14:36 UTC
55 points

# [Question] What is a disagreement you have around AI safety?

12 Jan 2023 16:58 UTC
16 points

# [Question] How to build common knowledge of rationality and honesty?

21 Feb 2021 6:07 UTC
5 points

# [Question] Transferring credence without transferring evidence?

4 Feb 2022 8:11 UTC
11 points

# Ambiguity causes conflict

26 Feb 2022 16:53 UTC
24 points
• The external link “A write-up of the proof of Aumann’s agreement theorem (pdf) by Tyrrell McAllister” seems to be broken. At least, I get a 404 Error. I am not sure how to best fix this but I thought I may as well point this out.

• I feel like Aumann's Agreement Theorem is one of those concepts which the community was originally excited about, which didn't quite pan out. It's valid as a piece of math, but people want to use it as a shorthand for "the fact that we disagree means one of us must be being irrational", when that is not the case. The reason is that it's not enough for both people to be Bayesian agents, not enough for each person to also know that the other is a Bayesian agent, not enough to know that each person knows that the other person is a Bayesian agent, and so on: they need actual common knowledge. And then it turns out that people mostly aren't Bayesian agents. And that's before getting into the weird anthropic stuff, where there are facts and pieces of evidence that aren't person-symmetric; e.g., I may think that my subjective experience means futures in which I-in-particular am mass-copied are more likely, but someone else should not believe this.

• I think there's work to be done on both this page and the Aumann Agreement page clarifying how the two relate to each other. It would also be nice if this page had some of the math on it.

• From the old Talk Page:

# Talk:Aumann’s agreement theorem

Re the reversion of 1736 7 September: on reflection, I do agree that there should be a separate page for the theorem itself (which is math) and the intuitive gloss and discussion of implications (which is not), but I'm not sure the offending text belongs in the "Disagreement" article, either—would it be a good idea to create a page for Aumann agreement referring to the state of agents coming to agree with each other in an Aumann-esque fashion? I think that's what I'm going to do. Z. M. Davis 23:51, 7 September 2009 (UTC)

Rationale here being that two agents coming to agree by updating on each other’s beliefs (which I’m calling “Aumann agreement”) is distinct from more general discussion of disagreements and why they are problematic amongst rationalists (because there’s actually a right answer in questions of fact). Z. M. Davis 23:55, 7 September 2009 (UTC)

You've created a duplicate piece of content for now, and that's not good. If you create the new page, you should factor its topic out of the Disagreement page as well (but that will hurt that page now, so maybe a subsection on the Disagreement page is a better solution for the time being, if there is a clear topic to divide out).

Did you watch Hanson’s talk on “are disagreements honest”? The link is in the article. In his model, agreement is not a process: you just tell me your opinion, I pronounce my conclusion, and we are done: you must agree with my conclusion immediately. --Vladimir Nesov 01:14, 8 September 2009 (UTC)

I’ve deleted the duplicate content on “Disagreement” for now, but will be sure to improve that page soon, as well as review Hanson’s talk. (It’s strange, I read the Cowen and Hanson “Are Disagreements Honest?” paper, and I don’t remember it saying anything about agreement being instantaneous—and the Aaronson paper certainly models disagreement as a process; I’ll have to look into this further.) Thanks for the help! --Z. M. Davis 01:33, 8 September 2009 (UTC)