Causes of disagreements

You have a disagreement before you. How do you handle it?

Causes of fake disagreements:

Is the disagreement real? The trivial case is an apparent disagreement occurring over a noisy or low-information channel. Internet chat is especially liable to fail this way because it lacks tone, body language, and relative-location cues. People can also appear to disagree because they are using different definitions, with correspondingly different denotations and connotations. Fortunately, once recognized, this cause of disagreement rarely produces problems; the definitions themselves are rarely the topic at issue. Agents may also have game-theoretic reasons to give the appearance of disagreement even though they would readily agree in private. They could likewise disagree because they are victims of a man-in-the-middle attack, where someone is intercepting and altering the messages passed between the two parties. Finally, the agents could disagree simply because they are in different contexts. Is the sun yellow, I ask? Yes, you say. No, say the aliens at Eta Carinae.

Causes of disagreements about predictions:

Evidence
Assuming the disagreement is real, what does that give us? Most commonly the disagreement is about the facts on which our actions are predicated. To handle these we must first consider our relationship to the other person and how they think (à la superrationality); observations made by others may not deserve the same weight we would give them had we made them ourselves. After considering this we must merge their evidence with our own in a controlled way. With people this gets a bit tricky. Rarely do people give us information we can handle in a cleanly Bayesian way (à la Aumann's agreement theorem). Instead we must merge our explicit evidence sets with vague, abstracted probabilistic intuitions that are half speculation and half partially forgotten memories.
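
To make the "controlled merge" a bit more concrete, here is a minimal sketch (in Python, with made-up numbers) of one possible way to fold in another person's report: treat it as evidence with a likelihood ratio, but temper that ratio by how reliable you judge their observation to be. The reliability discount and the specific figures are illustrative assumptions, not a recipe.

```python
import math

def update(prior_odds, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def discounted_ratio(reported_ratio, reliability):
    """Temper someone else's reported likelihood ratio by how much weight (0 to 1)
    we give their observation; 1 = take it at face value, 0 = ignore it.
    Done in log space so evidence pointing in either direction is discounted symmetrically."""
    return math.exp(reliability * math.log(reported_ratio))

odds = update(1.0, 3.0)                            # prior odds 1:1, my own evidence is 3:1
odds = update(odds, discounted_ratio(5.0, 0.6))    # your reported 5:1, taken at 60% weight
print(f"posterior probability ≈ {odds / (1 + odds):.2f}")
```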

Priors
If we still have a disagreement after considering the evidence, what now? The agents could have "started" at different locations in prior or induction space. While it is true that a person's "starting" point can be conflated with the evidence they have seen, it is also possible that two people really did start at different locations.
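
As a toy illustration (the numbers are invented), two agents who see exactly the same evidence but "started" from different priors can still end up with quite different posteriors after any finite amount of evidence:

```python
def posterior(prior, likelihood_ratios):
    """Update a prior probability on a sequence of likelihood ratios (odds form)."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

shared_evidence = [2.0, 0.5, 3.0, 3.0]       # identical observations for both agents

print(posterior(0.50, shared_evidence))      # agent A started at 50%: ends near 0.90
print(posterior(0.05, shared_evidence))      # agent B started at 5%:  ends near 0.32
```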

Resource limitations
The disagreement could also be caused by resource limitations and implementation details. Cognition can have a sensitive dependence on initial conditions. For instance, when answering the question "is this red?", slight variations in lighting can make people respond differently on boundary cases. This illustrates both the sensitive dependence on initial conditions and the fact that some kinds of information (exactly what you saw) simply cannot be communicated effectively. Our mental processes are also inherently noisy, leading to differing errors in processing the evidence and increasing the need to rehash an argument multiple times. We suffer from limits on computational space and time, which make approximations necessary. We learn these approximations slowly across varying situations, so we may disagree with someone even when all the prediction-relevant evidence is at hand: the other "evidence" used to develop our approximations differs and can inadvertently leak into our answers. Our approximation methods themselves may differ. Finally, it takes time to integrate all of the evidence at hand, and people differ in the amount of time and resources they can devote to doing so.
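
As a hedged illustration of the point about approximations: two agents who estimate the same quantity by sampling, each under a small resource budget and with their own internal noise, will routinely report different answers even though the question and the available evidence are identical. The scenario below is invented purely for illustration.

```python
import random

def estimate_probability(n_samples, seed):
    """Crude Monte Carlo estimate of P(sum of two dice >= 9), limited to n_samples draws."""
    rng = random.Random(seed)
    hits = sum(rng.randint(1, 6) + rng.randint(1, 6) >= 9 for _ in range(n_samples))
    return hits / n_samples

# The exact answer is 10/36 ≈ 0.278, but with only 50 samples each:
print(estimate_probability(50, seed=1))   # agent A's estimate
print(estimate_probability(50, seed=2))   # agent B's estimate, usually different
```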

Systematic errors
Sadly, it is also possible that one party or the other simply has a deeply flawed prediction system. They could make systematic errors and have broken or missing corrective feedback loops. They could have disruptive feedback loops that drain the truth from their predictions. Their methods of prediction may invalidly vary with what is being considered: their thoughts may shy away from subjects such as death, disease, or flaws in their favorite theory, and may be drawn toward what will happen after they win the lottery. Irrationality and biases; emotions and an inability to abstract. Or, even worse, how is it possible to eliminate a disagreement with someone who disagrees with himself and presents an inconsistent opinion?

Other causes of disagreement:

Goals
I say that dogs are interesting, you say they are boring, and yet we agree on all of our predictions. How is this possible? This type of disagreement falls under disagreement about which utility function to apply, and between utility-maximizing, goal-preserving agents it is irresolvable in a direct manner; however, indirect means such as trading boring dogs for interesting cats work much of the time. Plus, we are not utility-maximizing agents (e.g., we have circular preferences); perhaps there are strategies available to us for resolving conflicts of this form that are not available to such agents?
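
A small sketch of this kind of disagreement, with invented utilities: both agents assign exactly the same probabilities to every outcome, yet their preferred actions differ because they value the outcomes differently.

```python
def expected_utility(probabilities, utilities):
    """Expected utility of an action, given shared outcome probabilities and personal utilities."""
    return sum(p * u for p, u in zip(probabilities, utilities))

# Shared, agreed-upon predictions for "adopt the dog": (fun walks, chewed shoes)
shared_probs   = [0.7, 0.3]
my_utilities   = [+10, -2]    # I find dogs interesting
your_utilities = [+1, -8]     # you find them boring and messy

print(expected_utility(shared_probs, my_utilities))     #  6.4: I adopt
print(expected_utility(shared_probs, your_utilities))   # -1.7: you pass
```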

Epiphenomenal
Lastly, it is possible for agents to agree on all observable predictions and yet disagree on unobservable ones. Predictions without consequences aren't predictions at all; how could they be? If the disagreement still exists after you realize that there are no observable consequences, look elsewhere for the cause; it cannot be here. Why disagree over things of no value? The disagreement must be caused by something; look there, not here.


How to use this taxonomy:
I tried to list the above sections in the order in which to check for each type of cause if you were to use them as a decision tree (ordered by ease of checking and fixing, fit to the definition, and probability of occurrence). This taxonomy is symmetric between the disagreeing parties, and many of the sections lend themselves naturally to looping: merging evidence piece by piece, refining calculations iteration by iteration, and so on. The taxonomy can also be applied recursively to meta-disagreements and to disagreements found in the process of analyzing the original one. What are the termination conditions for analyzing a disagreement? They come in five forms: complete agreement, satisfying agreement, impossibility of agreement, acknowledgment of conflict, and dissolving the question. Being a third party to a disagreement changes the analysis only in that you are no longer doing the symmetric self-analysis, but rather looking in on a disagreement with the additional distance that entails.
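
If it helps, the checking order described above can be read as a simple procedure. The sketch below is just one possible encoding of that decision tree; the individual checks are hypothetical placeholders, not part of the taxonomy itself.

```python
# Hypothetical checks, roughly ordered as above: cheap and likely causes first.
CHECKS = [
    ("fake disagreement (channel, definitions, context, ...)", lambda d: d.get("fake")),
    ("differing evidence",                                     lambda d: d.get("evidence_gap")),
    ("differing priors",                                       lambda d: d.get("prior_gap")),
    ("resource limitations / approximations",                  lambda d: d.get("resource_limited")),
    ("systematic errors",                                      lambda d: d.get("systematic_error")),
    ("differing goals",                                        lambda d: d.get("goal_conflict")),
    ("epiphenomenal (no observable consequences)",             lambda d: d.get("unobservable")),
]

def diagnose(disagreement):
    """Return the first cause that applies; loop or recurse as the disagreement evolves."""
    for cause, applies in CHECKS:
        if applies(disagreement):
            return cause
    return "no cause found; consider one of the termination conditions"

print(diagnose({"evidence_gap": True}))   # -> "differing evidence"
```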



Many thanks to Eliezer Yudkowsky, Robin Hanson, and the LessWrong community for much thought-provoking material.

(P.S. This is my first post, and I would appreciate any feedback: what I did well, what I did badly, and what I can do to improve.)
