Causes of disagreements

You have a disagreement before you. How do you handle it?

Causes of fake disagreements:

Is the disagreement real? The trivial case is an apparent disagreement occurring over a noisy or low-information channel. Internet chat is especially liable to fail this way because it lacks tone, body language, and relative location cues. People can also disagree through the use of differing definitions with corresponding denotations and connotations. Fortunately, when recognized this cause of disagreement rarely produces problems; the topic at issue is rarely the definitions themselves. If there is a game-theoretic reason, the agents may also give the appearance of disagreement even though they might well agree in private. The agents could also disagree if they are victims of a man-in-the-middle attack, where someone is intercepting and altering the messages passed between the two parties. Finally, the agents could disagree simply because they are in different contexts. "Is the sun yellow?" I ask. Yes, say you. No, say the aliens at Eta Carinae.

Causes of disagreements about predictions:

Assuming the disagreement is real, what does that give us? Most commonly the disagreement is about the facts that predicate our actions. To handle these we must first consider our relationship to the other person and how they think (à la superrationality); observations made by others may not be given the same weight we would give those observations had we made them ourselves. After considering this we must then merge their evidence with our own in a controlled way. With people this gets a bit tricky. Rarely do people give us information we can handle in a cleanly Bayesian way (à la Aumann's agreement theorem). Instead we must merge our explicit evidence sets along with vague, abstracted probabilistic intuitions that are half speculation and half partially forgotten memories.
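As a toy illustration (not a method from this post), the discounting idea can be sketched as naive pooling in log-odds space, where a `trust` factor downweights the other party's reported evidence; all names and the pooling rule itself are my own simplification:

```python
import math

def log_odds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def prob(lo):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-lo))

def merge(prior, my_evidence_lo, their_evidence_lo, trust=1.0):
    """Naively pool independent evidence by adding it in log-odds space,
    discounting the other party's evidence by a trust factor in [0, 1]."""
    return prob(log_odds(prior) + my_evidence_lo + trust * their_evidence_lo)
```

For example, starting from a 50% prior, fully trusting the other party's evidence (`trust=1.0`) moves you further than half-trusting it (`trust=0.5`), and `trust=0.0` reduces to updating on your own evidence alone. Real evidence is rarely independent or cleanly quantified, which is exactly the difficulty the paragraph above points at.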

If we still have a disagreement after considering the evidence, what now? The agents could have "started" at different locations in prior or induction space. While it is true that a person's "starting" point and the evidence they have seen can be conflated, it is also possible that they really did start at different locations.
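A minimal sketch of how different starting points persist: two agents apply Bayes' rule in odds form to the exact same evidence and still land apart (the priors and likelihood ratio here are invented for illustration):

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

# Both agents see the same evidence, a 4:1 likelihood ratio favoring H...
a = posterior(0.5, 4.0)  # started at 50%, ends near 0.80
b = posterior(0.1, 4.0)  # started at 10%, ends near 0.31
# ...so shared evidence narrows the gap but does not close it.
```

Only more evidence (or agreement about where to start) closes the remaining gap.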

Resource limitations
The disagreement could also be caused by resource limitations and implementation details. Cognition could have sensitive dependence on initial conditions. For instance, when answering the question "is this red?" slight variations in lighting conditions can make people respond differently on boundary cases. This illustrates both sensitive dependence on initial conditions and the fact that some types of information (exactly what you saw) simply cannot be communicated effectively. Our mental processes are also inherently noisy, leading to differing errors in processing the evidence and increasing the need to rehash an argument multiple times. We suffer from computational space and time limitations, making computational approximations necessary. We learn these approximations slowly across varying situations, so we may disagree with someone even when the prediction-relevant evidence is on hand: the other "evidence" used to develop these approximations may vary and inadvertently leak into our answers. Our approximation methods may differ. Finally, it takes time to integrate all of the evidence at hand, and people differ in the amount of time and resources they have to do so.
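The boundary-case point can be made concrete with a hypothetical threshold classifier; the hue values and the 20-degree cutoff are my own invention, not anything from this post:

```python
def looks_red(hue_degrees, threshold=20.0):
    """Toy classifier: call anything within `threshold` degrees
    of pure red (hue 0 on the color wheel) 'red'."""
    distance_from_red = min(hue_degrees % 360, 360 - hue_degrees % 360)
    return distance_from_red < threshold

# Two observers view nearly the same orange-red patch, but slight
# lighting differences shift the perceived hue across the cutoff:
looks_red(19.9)  # True  -- "it's red"
looks_red(20.1)  # False -- "no, it's orange"
```

A tiny perturbation in the input flips the output, and neither observer can transmit their exact retinal data to the other to settle it.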

Systematic errors
Sadly, it is also possible that one or the other party simply has a deeply flawed prediction system. They could make systematic errors and have broken or missing corrective feedback loops. They could have disruptive feedback loops that drain the truth from predictions. Their methods of prediction may invalidly vary with what is being considered: their thoughts may shy away from subjects such as death, disease, or flaws in their favorite theory, and their thoughts may be attracted to what will happen after they win the lottery. Irrationality and biases; emotions and an inability to abstract. Or even worse, how is it possible to eliminate a disagreement with someone who disagrees with himself and presents an inconsistent opinion?

Other causes of disagreement:
I say that dogs are interesting; you say they are boring; and yet we both agree on our predictions. How is this possible? This type of disagreement falls under disagreement about which utility function to apply, and between utilitarian goal-preserving agents it is irresolvable in a direct manner; however, indirect ways such as trading boring dogs for interesting cats work much of the time. Plus, we are not utilitarian agents (e.g. we have circular preferences); perhaps there are strategies available to us for resolving conflicts of this form that are not available to utilitarian ones?

Lastly, it is possible for agents to agree on all observable predictions and yet disagree on unobservable predictions. Predictions without consequences aren't predictions at all; how could they be? If the disagreement still exists after realizing that there are no observable consequences, look elsewhere for the cause; it cannot be here. Why disagree over things of no value? The disagreement must be caused by something; look there, not here.

How to use this taxonomy:
I tried to list the above sections in the order one should check for each type of cause if the sections were used as a decision tree (ease of checking and fixing, fit to definition, probability of occurrence). This taxonomy is symmetric between the disagreeing parties, and many of the sections lend themselves naturally to looping: merging evidence piece by piece, refining calculations iteration by iteration, and so on. The taxonomy can also be applied recursively to meta-disagreements and to disagreements found in the process of analyzing the original one. What are the termination conditions for analyzing a disagreement? They come in five forms: complete agreement, satisfying agreement, impossible to agree, acknowledgment of conflict, and dissolving the question. Being a third party to a disagreement changes the analysis only in that you are no longer doing the symmetric self-analysis but rather looking in upon a disagreement with the additional distance that entails.
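One way to picture the decision-tree reading is as an ordered checklist that stops at a termination condition; the check wording and helper names below are my own paraphrase of the sections and the five termination forms named above:

```python
# Checks, in roughly the order the sections above suggest trying them.
CHECKS = [
    "Is the disagreement real (noisy channel, definitions, context)?",
    "Does merging the prediction-relevant evidence resolve it?",
    "Did the parties start from different priors / induction space?",
    "Resource limitations or differing approximations?",
    "Systematic errors in someone's prediction system?",
    "Differing values rather than differing predictions?",
    "No observable consequences at all?",
]

TERMINATION = {
    "complete agreement", "satisfying agreement", "impossible to agree",
    "acknowledgment of conflict", "dissolving the question",
}

def analyze(disagreement, diagnose):
    """Walk the checks in order. `diagnose` maps (disagreement, check) to a
    termination condition once one applies, or None to keep going. A fuller
    version would loop and recurse on meta-disagreements; this is one pass."""
    for check in CHECKS:
        outcome = diagnose(disagreement, check)
        if outcome in TERMINATION:
            return outcome
    return "acknowledgment of conflict"  # fallback if nothing resolved it
```

The interesting work, of course, lives inside `diagnose`; the sketch only fixes the order of checks and the set of exits.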

Many thanks to Eliezer Yudkowsky, Robin Hanson, and the LessWrong community for much thought-provoking material.

(ps This is my first post and I would appreciate any feedback: what I did well, what I did badly, and what I can do to improve.)
