The Mechanics of Disagreement

Two ideal Bayesians cannot have common knowledge of disagreement; this is a theorem. If two rationalist-wannabes have common knowledge of a disagreement between them, what could be going wrong?

The obvious interpretation of such theorems is that if you know that a cognitive machine is a rational processor of evidence, its beliefs become evidence themselves.

If you design an AI and the AI says “This fair coin came up heads with 80% probability”, then you know that the AI has accumulated evidence with a likelihood ratio of 4:1 favoring heads—because the AI only emits that statement under those circumstances.

It’s not a matter of charity; it’s just that this is how you think the other cognitive machine works.

And if you tell an ideal rationalist, “I think this fair coin came up heads with 80% probability”, and they reply, “I now think this fair coin came up heads with 25% probability”, and your sources of evidence are independent of each other, then you should accept this verdict, reasoning that (before you spoke) the other mind must have encountered evidence with a likelihood ratio of 12:1 favoring tails.
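To make the arithmetic concrete, here is a minimal sketch in Python (the helper functions and variable names are mine, purely for illustration) that converts the two stated probabilities into odds and backs out the likelihood ratio the other mind must have seen, assuming it fully trusted your 4:1 and that its evidence is independent of yours:

```python
from fractions import Fraction

def prob_to_odds(p):
    """Probability of heads -> odds for heads (e.g. 0.80 -> 4:1)."""
    return p / (1 - p)

def odds_to_prob(o):
    """Odds for heads -> probability of heads."""
    return o / (1 + o)

your_evidence   = prob_to_odds(Fraction(80, 100))  # your announced 80% -> odds 4:1 for heads
their_posterior = prob_to_odds(Fraction(25, 100))  # their reply of 25% -> odds 1:3 for heads

# If they fully trusted your 4:1 and simply multiplied in their own
# (independent) evidence, that evidence must have been
# (1:3) / (4:1) = 1:12 for heads, i.e. 12:1 favoring tails.
their_own_evidence = their_posterior / your_evidence

print(their_own_evidence)                    # 1/12
print(float(odds_to_prob(their_posterior)))  # 0.25 -- the verdict you should now accept
```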

But this assumes that the other mind also thinks that you’re processing evidence correctly, so that, by the time it says “I now think this fair coin came up heads, p=.25”, it has already taken into account the full impact of all the evidence you know about, before adding more evidence of its own.

If, on the other hand, the other mind doesn’t trust your rationality, then it won’t accept your evidence at face value, and the estimate that it gives won’t integrate the full impact of the evidence you observed.

So does this mean that when two rationalists trust each other’s rationality less than completely, they can agree to disagree?

It’s not that simple. Rationalists should not trust themselves entirely, either.

So when the other mind accepts your evidence at less than face value, this doesn’t say “You are less than a perfect rationalist”, it says, “I trust you less than you trust yourself; I think that you are discounting your own evidence too little.”

Maybe your raw arguments seemed to you to have a strength of 40:1, and you discounted for your own irrationality to a strength of 4:1; but the other mind thinks you still overestimate yourself, and so it assumes that the actual force of the argument was 2:1.

And if you believe that the other mind is discounting you in this way, and is unjustified in doing so, then you can work backward from its statement. When it says “I now think this fair coin came up heads with 25% probability”, it must have combined your further-discounted evidence of 2:1 with further evidence of its own worth 1:6, since 2:1 times 1:6 gives its stated odds of 2:6. Restoring your evidence to its full strength of 4:1 and combining it with that implied 1:6, you might bet on the coin at odds of 4:6, a 40% probability of heads—and that is assuming you even fully trust the other mind’s further evidence of 1:6.
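A similar sketch (again in Python, with illustrative variable names of my own) runs the discounting scenario backward and forward: back out the 1:6 the other mind must have seen, given that it credited your evidence with only 2:1, then recombine that 1:6 with your undiscounted 4:1.

```python
from fractions import Fraction

self_discounted       = Fraction(4, 1)  # your announced strength, after discounting your 40:1
their_discount_of_you = Fraction(2, 1)  # what the other mind credits your evidence with
their_report_odds     = Fraction(1, 3)  # "25% heads" expressed as odds for heads (2:6 reduced)

# Back out the evidence the other mind itself must have seen:
# 2:1 * E = 1:3, so E = 1:6 for heads.
their_own_evidence = their_report_odds / their_discount_of_you

# If you think their discount of you was unjustified, combine your
# full 4:1 with that implied 1:6 ...
your_posterior_odds = self_discounted * their_own_evidence        # 4:6 = 2:3
your_posterior_prob = your_posterior_odds / (1 + your_posterior_odds)

print(their_own_evidence)          # 1/6
print(your_posterior_odds)         # 2/3
print(float(your_posterior_prob))  # 0.4 -- higher still if you also discount their 1:6
```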

I think we have to be very careful to avoid interpreting this situation in terms of anything like a reciprocal trade, like two sides making equal concessions in order to reach agreement on a business deal.

Shifting beliefs is not a concession that you make for the sake of others, expecting something in return; it is an advantage you take for your own benefit, to improve your own map of the world. I am, generally speaking, a Millie-style altruist; but when it comes to belief shifts I espouse a pure and principled selfishness: don’t believe you’re doing it for anyone’s sake but your own.

Still, I once read that there’s a principle among con artists that the main thing is to get the mark to believe that you trust them, so that they’ll feel obligated to trust you in turn.

And—even if it’s for completely different theoretical reasons—if you want to persuade a rationalist to shift belief to match yours, you either need to persuade them that you have all of the same evidence they do and have already taken it into account, or that you already fully trust their opinions as evidence, or that you know better than they do how much they themselves can be trusted.

It’s that last one that’s the really sticky point, for obvious reasons of asymmetry of introspective access and asymmetry of motives for overconfidence—how do you resolve that conflict? (And if you started arguing about it, then the question wouldn’t be which of these were more important as a factor, but rather, which of these factors the Other had under- or over-discounted in forming their estimate of a given person’s rationality...)

If I had to name a single reason why two wannabe rationalists wouldn’t actually be able to agree in practice, it would be that, once you trace the argument to the meta-level where theoretically everything can be and must be resolved, the argument trails off into psychoanalysis and noise.

And if you look at what goes on in practice between two arguing rationalists, you will probably see mostly a trading of object-level arguments; and the most meta it gets is trying to convince the other person that you’ve already taken their object-level arguments into account.

Still, this does leave us with three clear reasons that someone might point to in order to justify a persistent disagreement—even though the frame of mind of justification, of having clear reasons to point to in front of others, is itself antithetical to the spirit of resolving disagreements—but even so:

  • Clearly, the Other’s object-level arguments are flawed; no amount of trust that I can have for another person will make me believe that rocks fall upward.

  • Clearly, the Other is not taking my arguments into account; there’s an obvious asymmetry in how well I understand them and have integrated their evidence, versus how much they understand me and have integrated mine.

  • Clearly, the Other is completely biased in how much they trust themselves over others, versus how I humbly and evenhandedly discount my own beliefs alongside theirs.

Since we don’t want to go around encouraging disagreement, one might do well to ponder how all three of these arguments are used by creationists to justify their persistent disagreements with scientists.

That’s one reason I say “clearly”—if it isn’t obvious even to outside onlookers, maybe you shouldn’t be confident of resolving the disagreement there. Failure at any of these levels implies failure at the meta-levels above it, but the higher-order failures might not be clear.