Conflict Theory of Bounded Distrust

Scott Alexander once wrote about the difference between “mistake theorists” who treat politics as an engineering discipline (a symmetrical collaboration in which everyone ultimately just wants the best ideas to win) and “conflict theorists” who treat politics as war (an asymmetrical conflict between sides with fundamentally different interests). Essentially, “[m]istake theorists naturally think conflict theorists are making a mistake”; “[c]onflict theorists naturally think mistake theorists are the enemy in their conflict.”

More recently, Alexander considered the phenomenon of “bounded distrust”: science and media authorities aren’t completely honest, but are only willing to bend the truth so far, and can be trusted on the things they wouldn’t lie about. Fox News wants to fuel xenophobia, but they wouldn’t make up a terrorist attack out of whole cloth; liberal academics want to combat xenophobia, but they wouldn’t outright fabricate crime statistics.

Alexander explains that savvy people who can figure out what kinds of dishonesty an authority will engage in end up mostly trusting the authority, whereas clueless people become more distrustful. Sufficiently savvy people end up inhabiting a mental universe where the authority is trustworthy, as when Dan Quayle denied that characterizing tax increases as “revenue enhancements” constituted fooling the public—because “no one was fooled”.

Alexander concludes with a characteristically mistake-theoretic plea for mutual understanding:

The savvy people need to realize that the clueless people aren’t always paranoid, just less experienced than they are at dealing with a hostile environment that lies to them all the time.

And the clueless people need to realize that the savvy people aren’t always gullible, just more optimistic about their ability to extract signal from same.

But “a hostile environment that lies to them all the time” is exactly the kind of situation where we would expect a conflict theory to be correct and mistake theories to be wrong!—or at least very incomplete. To speak as if the savvy merely have more skills to extract signal from a “naturally” occurring source of lies obscures the critical question of what all the lying is for.

In a paper on “the logic of indirect speech”, Pinker, Nowak, and Lee give the example of a pulled-over motorist telling a police officer, “Gee, officer, is there some way we could take care of the ticket here?”

This is, of course, a bribery attempt. The reason the driver doesn’t just say so (“Can I bribe you into not giving me a ticket?”) is that the driver doesn’t know whether this is a corrupt police officer who accepts bribes, or an honest officer who will charge the driver with attempted bribery. The indirect language lets the driver communicate to the corrupt cop (in the possible world where this cop is corrupt), without being arrested by the honest cop, who doesn’t think he can make an attempted-bribery charge stick in court on the evidence of such vague language (in the possible world where this cop is honest).
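The structure of the argument is easy to make precise with an expected-payoff calculation. Here is a minimal sketch; the specific ticket, bribe, and arrest costs are illustrative assumptions of mine, not numbers from the paper:

```python
# A minimal sketch of the indirect-speech game from Pinker, Nowak, and
# Lee's analysis. All payoff numbers are made up for illustration.

TICKET = -100   # cost of just taking the ticket
BRIBE = -20     # cost of a successful bribe (cheaper than the ticket)
ARREST = -1000  # cost of being charged with attempted bribery

def expected_payoff(strategy: str, p_corrupt: float) -> float:
    """Driver's expected payoff, given the probability the officer is corrupt."""
    p_honest = 1.0 - p_corrupt
    if strategy == "no bribe":
        return TICKET  # both officer types write the ticket
    if strategy == "direct bribe":
        # A corrupt officer takes the bribe; an honest one arrests you.
        return p_corrupt * BRIBE + p_honest * ARREST
    if strategy == "indirect bribe":
        # A corrupt officer understands and takes the bribe; an honest
        # one can't make the charge stick, so you merely get the ticket.
        return p_corrupt * BRIBE + p_honest * TICKET
    raise ValueError(strategy)

for p in (0.1, 0.5):
    for s in ("no bribe", "direct bribe", "indirect bribe"):
        print(f"p_corrupt={p:.1f}  {s:>14}: {expected_payoff(s, p):8.1f}")
```

Whatever the particular numbers, the indirect bribe collects the gains in the corrupt-cop world without incurring the arrest penalty in the honest-cop world: vagueness is doing optimization work.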

We need a conflict theory to understand this type of situation. Someone who assumed that all police officers had the same utility function would be fundamentally out of touch with reality: it’s not that the corrupt cops are just “savvier”, better able to “extract signal” from the driver’s speech. The honest cops can probably do that, too. Rather, corrupt and honest cops are trying to do different things, and the driver’s speech is optimized to help the corrupt cops in a way that honest cops can’t interfere with (because the honest cops’ objective requires working with a court system that is less savvy).

This kind of analysis carries over to Alexander’s discussion of government lies—maybe even isomorphically. When a government denies tax increases but announces “revenue enhancements”, and supporters of the regime effortlessly know what they mean, while dissidents consider it a lie, it’s not that regime supporters are just savvier. The dissidents can probably figure it out, too. Rather, regime supporters and dissidents are trying to do different things. Dissidents want to create common knowledge of the regime’s shortcomings: in order to organize a revolt, it’s not enough for everyone to hate the government; everyone has to know that everyone else hates the government in order to confidently act in unison, rather than fear being crushed as an individual. The regime’s proclamations are optimized to communicate to its supporters in a way that doesn’t give moral support to the dissident cause (because the dissidents’ objective requires common knowledge, not just savvy individual knowledge, and common knowledge requires unobfuscated language).
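To see why common knowledge is the load-bearing requirement, consider a toy threshold model of coordinated revolt. This is a sketch under assumptions of my own (illustrative parameters, and a crude discounting of the epistemic regress in the spirit of Rubinstein’s electronic-mail game); none of it comes from Alexander’s post:

```python
# Toy model: each of N citizens revolts only if confident that at
# least THRESHOLD others will revolt too. All parameters illustrative.

N = 100
THRESHOLD = 60   # co-revolters needed to avoid being crushed alone
P_DECODE = 0.9   # chance a given citizen decodes the euphemism
LEVELS = 10      # depth of "I know that you know that..." required

def revolt_size(confidence: float) -> int:
    """Citizens who join, given each one's confidence that the
    coordination condition holds for everyone else as well."""
    expected_allies = confidence * (N - 1)
    return N if expected_allies >= THRESHOLD else 0

# Obfuscated proclamation: savvy citizens each decode it, but certainty
# erodes at every level of the regress, crudely modeled here by
# compounding the decoding probability once per epistemic level.
print("euphemism:", revolt_size(P_DECODE ** LEVELS))  # 0.9**10 ~ 0.35 -> 0

# Plain statement: public language settles every level at once, so the
# regress bottoms out in certainty and everyone moves together.
print("plain speech:", revolt_size(1.0))              # -> 100
```

Decoding the euphemism is easy at the first level and hopeless at the tenth; unobfuscated language collapses all the levels at once, which is exactly what the regime’s phrasing is optimized to prevent.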

This kind of analysis is about behavior, information, and the incentives that shape them. Conscious subjectivity and awareness of the game dynamics are irrelevant. In the minds of regime supporters, “no one was fooled”, because if you were fooled, then you aren’t anyone: failing to be complicit with the reigning Power’s law would be as insane as trying to defy the law of gravity.

On the other side, if blindness to Power has the same input–output behavior as conscious service to Power, then opponents of the reigning Power have no reason to care about the distinction. In the same way, when a predator firefly sends the mating signal of its prey species, we consider it deception, even if the predator is acting on instinct and can’t consciously “intend” to deceive.

Thus, supporters of the regime naturally think dissidents are making a mistake; dissidents naturally think regime supporters are the enemy in their conflict.