The Mechanics of Disagreement

Two ideal Bayesians cannot have common knowledge of disagreement; this is a theorem (Aumann’s agreement theorem). If two rationalist-wannabes have common knowledge of a disagreement between them, what could be going wrong?

The obvious interpretation of these theorems is that if you know that a cognitive machine is a rational processor of evidence, its beliefs become evidence themselves.

If you design an AI and the AI says “This fair coin came up heads with 80% probability”, then you know that the AI has accumulated evidence with a likelihood ratio of 4:1 favoring heads—because the AI only emits that statement under those circumstances.
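
To make the odds bookkeeping explicit, here is a minimal sketch of that inference in Python; the helper function and variable names are just my illustration, not anything from the original argument. With a fair coin the prior odds are 1:1, so the AI’s stated 80% pins down the likelihood ratio of the evidence it has seen.

    from fractions import Fraction

    def prob_to_odds(p):
        """Probability of heads -> odds of heads : tails."""
        return p / (1 - p)

    prior_odds = Fraction(1, 1)                    # fair coin: 1:1
    posterior_odds = prob_to_odds(Fraction(4, 5))  # the AI's stated 80% -> 4:1

    # Bayes' rule in odds form: posterior = prior * likelihood ratio,
    # so the evidence the AI must have accumulated is posterior / prior.
    likelihood_ratio = posterior_odds / prior_odds
    print(likelihood_ratio)                        # 4, i.e. 4:1 favoring heads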

It’s not a matter of charity; it’s just that this is how you think the other cognitive machine works.

And if you tell an ideal rationalist, “I think this fair coin came up heads with 80% probability”, and they reply, “I now think this fair coin came up heads with 25% probability”, and your sources of evidence are independent of each other, then you should accept this verdict, reasoning that (before you spoke) the other mind must have encountered evidence with a likelihood ratio of 1:12, that is, 12:1 favoring tails.
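
Continuing the same sketch (again with made-up helper names), the 1:12 figure falls out of the assumption that the other mind has already absorbed your 4:1 and that your sources of evidence are independent: its own evidence must be whatever turns your 4:1 into its announced 1:3.

    from fractions import Fraction

    def prob_to_odds(p):
        return p / (1 - p)

    def odds_to_prob(o):
        return o / (1 + o)

    my_evidence     = Fraction(4, 1)                # my 4:1 favoring heads
    their_posterior = prob_to_odds(Fraction(1, 4))  # their stated 25% -> 1:3

    # Their own, independent evidence is whatever turns my 4:1 into their 1:3:
    their_evidence = their_posterior / my_evidence  # -> 1/12, i.e. 12:1 for tails

    # Multiplying the two pieces of evidence back together just reproduces
    # their announced belief, which is why I should simply adopt it.
    print(their_evidence, odds_to_prob(my_evidence * their_evidence))  # 1/12 1/4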

But this assumes that the other mind also thinks that you’re processing evidence correctly, so that, by the time it says “I now think this fair coin came up heads, p=.25”, it has already taken into account the full impact of all the evidence you know about, before adding more evidence of its own.

If, on the other hand, the other mind doesn’t trust your rationality, then it won’t accept your evidence at face value, and the estimate that it gives won’t integrate the full impact of the evidence you observed.

So does this mean that when two rationalists trust each other’s rationality less than completely, they can agree to disagree?

It’s not that simple. Rationalists should not trust themselves entirely, either.

So when the other mind accepts your evidence at less than face value, this doesn’t say “You are less than a perfect rationalist”, it says, “I trust you less than you trust yourself; I think that you are discounting your own evidence too little.”

Maybe your raw arguments seemed to you to have a strength of 40:1, and you discounted for your own irrationality to a strength of 4:1; but the other mind thinks you still overestimate yourself, and so it assumes that the actual force of the argument was 2:1.

And if you believe that the other mind is discounting you in this way, and is unjustified in doing so, then when it says “I now think this fair coin came up heads with 25% probability”, you might bet on the coin at odds of 40% in favor of heads: the other mind must have seen further evidence of 1:6 to get from the 2:1 it granted your arguments down to its final odds of 2:6, and combining that 1:6 with the 4:1 you still assign your own evidence gives final odds of 4:6, if you even fully trust the other mind’s further evidence of 1:6.
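
Worked through in the same sketch style (my labels, under the assumption that you keep your own 4:1 and fully trust the other mind’s implied 1:6), the discounting scenario comes out like this:

    from fractions import Fraction

    def odds_to_prob(o):
        return o / (1 + o)

    my_raw_arguments     = Fraction(40, 1)  # how strong my arguments feel to me
    my_self_discount     = Fraction(4, 1)   # what I report after discounting myself
    their_discount_of_me = Fraction(2, 1)   # what the other mind credits me with

    their_posterior = Fraction(1, 3)        # their announced 25% heads

    # The further evidence they must have seen, given that they combined it
    # with the 2:1 they granted my arguments:
    their_evidence = their_posterior / their_discount_of_me  # -> 1/6

    # If I think their discount of me was unjustified, I keep my own 4:1 and
    # combine it with their 1:6:
    my_final_odds = my_self_discount * their_evidence        # -> 2/3, i.e. 4:6
    print(their_evidence, my_final_odds, odds_to_prob(my_final_odds))
    # 1/6  2/3  2/5  -> bet on heads at 40%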

I think we have to be very careful to avoid interpreting this situation in terms of anything like a reciprocal trade, like two sides making equal concessions in order to reach agreement on a business deal.

Shifting beliefs is not a concession that you make for the sake of others, expecting something in return; it is an advantage you take for your own benefit, to improve your own map of the world. I am, generally speaking, a Millie-style altruist; but when it comes to belief shifts I espouse a pure and principled selfishness: don’t believe you’re doing it for anyone’s sake but your own.

Still, I once read that there’s a principle among con artists that the main thing is to get the mark to believe that you trust them, so that they’ll feel obligated to trust you in turn.

And—even if it’s for completely different theoretical reasons—if you want to persuade a rationalist to shift belief to match yours, you either need to persuade them that you have all of the same evidence they do and have already taken it into account, or that you already fully trust their opinions as evidence, or that you know better than they do how much they themselves can be trusted.

It’s that last one that’s the really sticky point, for obvious reasons of asymmetry of introspective access and asymmetry of motives for overconfidence—how do you resolve that conflict? (And if you started arguing about it, then the question wouldn’t be which of these were more important as a factor, but rather, which of these factors the Other had under- or over-discounted in forming their estimate of a given person’s rationality...)

If I had to name a single reason why two wannabe rationalists wouldn’t actually be able to agree in practice, it would be that, once you trace the argument to the meta-level where theoretically everything can be and must be resolved, the argument trails off into psychoanalysis and noise.

And if you look at what goes on in practice between two arguing rationalists, you will probably see mostly trading of object-level arguments; the most meta it gets is trying to convince the other person that you’ve already taken their object-level arguments into account.

Still, this does leave us with three clear reasons that someone might point to in order to justify a persistent disagreement—even though the frame of mind of justification, of having clear reasons to point to in front of others, is itself antithetical to the spirit of resolving disagreements—but even so:

  • Clearly, the Other’s object-level arguments are flawed; no amount of trust that I can have for another person will make me believe that rocks fall upward.

  • Clearly, the Other is not taking my arguments into account; there’s an obvious asymmetry in how well I understand them and have integrated their evidence, versus how much they understand me and have integrated mine.

  • Clearly, the Other is completely biased in how much they trust themselves over others, versus how I humbly and evenhandedly discount my own beliefs alongside theirs.

Since we don’t want to go around encouraging disagreement, one might do well to ponder how all three of these arguments are used by creationists to justify their persistent disagreements with scientists.

That’s one reason I say “clearly”—if it isn’t obvious even to outside onlookers, maybe you shouldn’t be confident of resolving the disagreement there. Failure at any of these levels implies failure at the meta-levels above it, but the higher-order failures might not be clear.