Navigating disagreement: How to keep your eye on the evidence

Heeding others’ impressions often increases accuracy. But “agreement” and “majoritarianism” are not magic; in a given circumstance, agreement is or isn’t useful for *intelligible* reasons.

You and four other contestants are randomly selected for a game show. The five of you walk into a room. Each of you is handed a thermometer drawn at random from a box, and each of you is tasked with guessing the temperature of a bucket of water. You’ll each write your guess at the temperature on a card; each person who is holding a card that is within 1° of the correct temperature will win $1000.

The four others walk to the bucket, place their thermometers in the water, and wait while their thermometers equilibrate. You follow suit. You can all see all of the thermometers’ read-outs: they’re fairly similar, but a couple are a degree or two off from the rest. You can also watch, as each of your fellow-contestants stares fixedly at his or her own thermometer and copies its reading (only) onto his or her card.

Should you:

  1. Write down the reading on your own thermometer, because it’s yours;

  2. Write down an average (e.g., the median) of the thermometer readings, because probably the more accurate thermometer-readings will cluster;

  3. Write down an average of the answers on others’ cards, because rationalists should try not to disagree;

  4. Follow the procedure everyone else is following (and so stare only at your own thermometer) because rationalists should try not to disagree about procedures?

Choice 2, of course. Thermometers imperfectly indicate temperature; to have the best possible chance of winning the $1000, you should consider all the information you have, from all the (randomly allocated, and so informationally symmetric) thermometers. It doesn’t matter who was handed which thermometer.
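
To see the size of the effect, here’s a minimal simulation sketch (not part of the original scenario: the true temperature, the noise level, and the trial count are all assumptions made for illustration). Under independent per-thermometer noise of about a degree, averaging the five readings wins the $1000 far more often than trusting your own thermometer alone.

```python
import random

# Toy model of the game show. TRUE_TEMP, NOISE_SD, and TRIALS are
# assumed values for illustration, not part of the original scenario.
TRUE_TEMP = 73.0   # hypothetical temperature of the bucket
NOISE_SD = 1.0     # hypothetical per-thermometer error, in degrees
TRIALS = 100_000

own_wins = pooled_wins = 0
for _ in range(TRIALS):
    readings = [random.gauss(TRUE_TEMP, NOISE_SD) for _ in range(5)]
    own = readings[0]                       # choice 1: trust my own thermometer
    pooled = sum(readings) / len(readings)  # choice 2: average all five readings
    own_wins += abs(own - TRUE_TEMP) <= 1.0
    pooled_wins += abs(pooled - TRUE_TEMP) <= 1.0

print(f"win rate, own thermometer only: {own_wins / TRIALS:.2f}")    # ~0.68
print(f"win rate, average of all five:  {pooled_wins / TRIALS:.2f}")  # ~0.97
```

With independent errors, the spread of the average shrinks roughly as one over the square root of the number of thermometers, which is why pooling every reading beats staring at your own.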

Forming accurate beliefs is *normally* about this simple. If you want the most accurate beliefs you can get, you’ll need to pay attention to the evidence. All of the evidence. Evenly. Whether you find the evidence in your hand or mind, or in someone else’s. And whether weighing all the evidence evenly leaves you with an apparently high-status social claim (“My thermometer is better than yours!”), or an apparently deferential social claim (“But look—I’m trying to agree with all of you!”), or anywhere else.

I’ll try to spell out some of what this looks like, and to make it obvious why certain belief-forming methods give you more accurate beliefs.

Principle 1: Truth is not person-dependent.

There’s a right haircut for me, and a different right haircut for you. There’s a right way for me to eat cookies if I want to maximize my enjoyment, and a different right way for you to eat cookies, if you want to maximize your enjoyment. But, in the context of the game-show, there isn’t a right temperature for me to put on my card, and a different right temperature for you to put on your card. The game-show host hands $1000 to cards with the right temperature—he doesn’t care who is holding the card. If a card with a certain answer will make you money, that same card and answer will make me money. And if a certain answer won’t make me money, it won’t make you money either.

Truth, or accuracy, is like the game show in this sense. “Correct prediction” or “incorrect prediction” applies to beliefs, not to people with beliefs. Nature doesn’t care what your childhood influences were, or what kind of information you did or didn’t have to work with, when it deems your predictions “accurate!” or “inaccurate!”. So, from the point of view of accuracy, it doesn’t make any sense to say “I think the temperature is 73°, but you, given the thermometer you were handed, should think it 74°”. Nor “I think X, but given your intuitions you should think Y” in any other purely predictive context.

That is: while “is a good haircut” is a property of the (person, haircut) pair, “is an accurate belief” is a property of the belief only.

Principle 2: Watch the mechanisms that create your beliefs. Ask if they’re likely to lead to accurate beliefs.

It isn’t because of magic that you should use the median thermometer’s output. It’s because, well, thermometers noisily reflect the temperature, and so the central cluster of the thermometers is more likely to be accurate. You can see why this is the accuracy-producing method.
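
The same kind of check tells you *which* average to trust. In the scenario above, a couple of thermometers were a degree or two off from the rest; in a quick sketch like the following (the noise model is entirely assumed), the median shrugs off those stray readings better than the mean does.

```python
import random
import statistics

# Toy model: three thermometers read roughly correctly, two read a couple
# of degrees high. All numbers here are assumptions for illustration.
TRUE_TEMP = 73.0
TRIALS = 100_000

mean_err = median_err = 0.0
for _ in range(TRIALS):
    good = [random.gauss(TRUE_TEMP, 0.3) for _ in range(3)]
    off = [random.gauss(TRUE_TEMP + 2.0, 0.3) for _ in range(2)]
    readings = good + off
    mean_err += abs(statistics.mean(readings) - TRUE_TEMP)
    median_err += abs(statistics.median(readings) - TRUE_TEMP)

print(f"typical error of the mean:   {mean_err / TRIALS:.2f}")    # ~0.8 degrees
print(f"typical error of the median: {median_err / TRIALS:.2f}")  # ~0.25 degrees
```

The median isn’t magic either; it just happens to track the central cluster that, by the mechanism above, is the most likely to be accurate.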

Sometimes you’ll produce better answers by taking an average over many people’s impressions, or by updating from other people’s beliefs, or by taking disagreement between yourself and someone else as a sign that you should debug your belief-forming process. And sometimes (e.g., if the people around you are choosing their answers by astrology), you won’t.

But in any of these circumstances, if you actually ask yourself “What belief-forming process is really, actually likely to pull the most juice from the evidence?”, you’ll see what the answer is, and you’ll see why. It won’t be “agree with others, because agreement is a mysterious social ritual that rationalists aim for”, or “agree with others, because then others will socially reciprocate by agreeing with you”. It won’t be routed through the primate social system at all. It’ll be routed through seeing where evidence can be found (seeing what features of the world should look different if the world is in one state rather than another, the way thermometer-readings should look different if the bucket is one temperature rather than another) and then seeing how to gather up all that evidence as thoroughly and evenly as possible.

Principle 2b: Ask if you are weighing all similarly truth-indicative mechanisms evenly.

Even when the processes that create our beliefs are truth-indicative, they generally aren’t fully, thoroughly, and evenly truth-indicative. Let’s say I want to know whether it’s safe for my friend to bike to work. My own memories are truth-indicative, but so are my friends’ and neighbors’ memories, and so are the memories of the folk in surveys I can find online. The trouble is that my own memories arrive in my head with extreme salience, and move my automatic anticipations a lot; my friends’ and neighbors’ memories have less automatic impact, and those of the folk in the surveys still less. So if I just go with the impressions that land in my head, my predictions will overweight a few samples of evidence at the expense of all the others.

That is: our automatic cognition tends not to weigh the evidence evenly *at all*. Weighing it evenly takes conscious examination and compensation.
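
To make the asymmetry concrete, here’s a toy sketch in which every count and weight is invented for illustration: one vivid personal memory, a few friends’ reports, and a large online survey. Pooling by the amount of underlying data gives one estimate of the incident rate; weighting each source by how vividly it comes to mind gives a very different one.

```python
# Toy illustration of weighing the evidence evenly. Every count and
# weight below is made up; the point is the structure, not the numbers.

sources = [
    # (label, incidents observed, person-years observed, salience weight)
    ("my own memories",    1,    3, 10.0),
    ("friends' memories",  2,   20,  3.0),
    ("online survey",     40, 1000,  1.0),
]

# Even weighting: every person-year of observation counts the same.
total_incidents = sum(incidents for _, incidents, _, _ in sources)
total_years = sum(years for _, _, years, _ in sources)
pooled_rate = total_incidents / total_years

# Salience weighting: each source's local rate, weighted by how strongly
# it moves my automatic anticipations.
salience_total = sum(weight for _, _, _, weight in sources)
gut_rate = sum(weight * incidents / years
               for _, incidents, years, weight in sources) / salience_total

print(f"evenly pooled incident rate: {pooled_rate:.3f} per person-year")
print(f"salience-weighted gut rate:  {gut_rate:.3f} per person-year")
```

The gap between the two numbers is the overweighting described above: a single vivid memory ends up counting for more than a thousand person-years of survey data.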

Principle 3: Ask what an outside observer would say.

Since truth doesn’t depend on who is asking—and since our feelings about the truth often do depend—it can help to ask what an outside observer would say. Instead of asking “Am I right in this dispute with my friend?” ask: “If I observed this from the outside, and saw someone with my track record and skillset, and someone else with my friend’s track record and skillset, disagreeing in this manner—who would I think was probably right?”.

(See also Cached Selves.)

Common pitfall: Idolatry

We’re humans. Give us a good idea, and we’ll turn it into an idol and worship its (perhaps increasingly distorted) image. Tell us about the Aumann Agreement Theorem, and we’re liable to make up nonsense rituals about how one must always agree with the majority.

The solution is to remove the technical terms and ask *why* each belief-forming method works. Where is the evidence? What observations would you expect to see, if the universe were one way rather than another? What method of aggregating the evidence best captures the relevant data?

That is: don’t memorize the idea that “agreement”, the “scientific method”, or any other procedure is “what rationalists do”. Or, at least, don’t *just* memorize it. Think it through every time. Be able to see why it works.

Common pitfall: Primate social intuitions

Again: we’re humans. Give us a belief-forming method, and we’ll make primate politics out of it. We’ll say “I should agree with the majority, so that religious or political nuts will also agree with the majority via social precedent effects”. Or: “I should believe some of my interlocutor’s points, so that my interlocutor will believe mine”. And we’ll cite “rationality” while doing this.

But accurate beliefs have nothing to do with game theory. Yes, in an argument, you may wish to cede a point in order to manipulate your interlocutor. But that social manipulation has nothing to do with truth. And social manipulation isn’t why you’ll get better predictions if you include others’ thermometers in your average, instead of just paying attention to your own thermometer.

Example problems

To make things concrete, consider the following examples. My take on the answers appears in the comments. Please treat these as real examples; if you think real situations diverge from my idealization, say so.

Problem 1: Jelly-beans

You’re asked to estimate the number of jelly-beans in a jar. You have a group of friends with you. Each friend privately writes down her estimate, then all of the estimates are revealed, and then each person has the option of changing her estimate.

How should you weigh: (a) your own initial, solitary estimate; (b) the initial estimates of each of your friends; (c) the estimates your friends write down on paper, after hearing some of the others’ answers?

Problem 2: Housework splitting

You get into a dispute with your roommate about what portion of the housework you’ve each been doing. He says you’re being biased, and that you always get emotional about this sort of thing. You can see in his eyes that he’s upset and biased; you feel strongly that you could never have such biases. What to believe?

Problem 3: Christianity vs. atheism

You get in a dispute with your roommate about religion. He says you’re being biased, and that your “rationalism” is just another religion, and that according to his methodology, you get the right answer by feeling Jesus in your heart. You can see in his eyes that he’s upset and biased; you feel strongly that you could never have such biases. What to believe?

Problem 4: Honest Bayesian wannabes

Two similarly rational people, Alfred and Betty, estimate the length of Lake L. Alfred estimates “50 km”; Betty simultaneously estimates “10 km”. Both realize that Betty knows more geography than Alfred. Before exchanging any additional information, the two must again utter simultaneous estimates of the length of Lake L. Is it true that if Alfred and Betty are estimating optimally, it is as likely that Betty’s answer will now be larger than Alfred’s as the other way round? Is it true that if these rounds are repeated, Alfred and Betty will eventually stabilize on the same answer? Why?