In the presence of disinformation, collective epistemology requires local modeling

In Inadequacy and Modesty, Eliezer describes modest epistemology:

How likely is it that an entire country—one of the world’s most advanced countries—would forego trillions of dollars of real economic growth because their monetary controllers—not politicians, but appointees from the professional elite—were doing something so wrong that even a non-professional could tell? How likely is it that a non-professional could not just suspect that the Bank of Japan was doing something badly wrong, but be confident in that assessment?
Surely it would be more realistic to search for possible reasons why the Bank of Japan might not be as stupid as it seemed, as stupid as some econbloggers were claiming. Possibly Japan’s aging population made growth impossible. Possibly Japan’s massive outstanding government debt made even the slightest inflation too dangerous. Possibly we just aren’t thinking of the complicated reasoning going into the Bank of Japan’s decision.
Surely some humility is appropriate when criticizing the elite decision-makers governing the Bank of Japan. What if it’s you, and not the professional economists making these decisions, who have failed to grasp the relevant economic considerations?
I’ll refer to this genre of arguments as “modest epistemology.”

I see modest epistemology as attempting to defer to a canonical perspective: a way of making judgments that is a Schelling point for coordination. In this case, the Bank of Japan has more claim to canonicity than Eliezer does regarding claims about Japan’s economy. I think deferring to a canonical perspective is key to how modest epistemology functions and why people find it appealing.

In social groups such as effective altruism, canonicity is useful when it allows for better coordination. If everyone can agree that charity X is the best charity, then it is possible to punish those who do not donate to charity X. This is similar to law: if a legal court makes a judgment that is not overturned, that judgment must be obeyed by anyone who does not want to be punished. Similarly, in discourse, it is often useful to punish crackpots by requiring deference to a canonical scientific judgment.

It is natural that deferring to a canonical perspective would be psychologically appealing, since it offers a low likelihood of being punished for deviating while allowing deviants to be punished, creating a sense of unity and certainty.

An obstacle to canonical perspectives is that epistemology requires using local information. Suppose I saw Bob steal my wallet. I have information about whether he actually stole my wallet (namely, my observation of the theft) that no one else has. If I tell others that Bob stole my wallet, they might or might not believe me depending on how much they trust me, as there is some chance I am lying to them. Constructing a more canonical perspective (e.g. in a court of law) requires integrating this local information: for example, I might tell the judge that Bob stole my wallet, and my friends might vouch for my character.
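As a toy illustration (not from the original post), the trust-dependent update in the wallet example can be written as a simple Bayesian calculation. The likelihood model and all the numbers below are made-up assumptions chosen only to show the shape of the reasoning:

```python
def posterior_theft(prior, trust):
    """Listener's posterior probability that Bob stole the wallet,
    after hearing my testimony.

    prior: listener's prior probability of the theft.
    trust: probability that I am reporting honestly.

    Likelihood model (an assumption for illustration): an honest
    witness reports a theft iff it happened; a dishonest witness
    reports a theft with probability 0.5 regardless of the truth.
    """
    p_report_given_theft = trust * 1.0 + (1 - trust) * 0.5
    p_report_given_no_theft = trust * 0.0 + (1 - trust) * 0.5
    numerator = prior * p_report_given_theft
    denominator = numerator + (1 - prior) * p_report_given_no_theft
    return numerator / denominator

# The more the listener trusts me, the more my local observation moves them:
print(posterior_theft(0.1, 0.0))  # 0.1 -- zero trust: testimony is ignored
print(posterior_theft(0.1, 0.9))  # ~0.68 -- high trust: belief shifts sharply
```

The point is that my raw observation never transfers directly; what reaches the canonical perspective is always filtered through others' estimates of my honesty.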

If humanity formed a collective superintelligence that integrated local information into a canonical perspective at the speed of light using sensible rules (e.g. something similar to Bayesianism), then there would be little need to exploit local information except to transmit it to this collective superintelligence. Obviously, this hasn’t happened yet. Collective superintelligences made of humans must transmit information at the speed of human communication rather than the speed of light.

In addition to limits on communication speed, collective superintelligences made of humans have another difficulty: they must prevent and detect disinformation. People on the internet sometimes lie, as do people off the internet. Self-deception is effectively another form of deception, and is extremely common as explained in The Elephant in the Brain.

Mostly because of this, current collective superintelligences leave much to be desired. As Jordan Greenhall writes in this post:

Take a look at Syria. What exactly is happening? With just a little bit of looking, I’ve found at least six radically different and plausible narratives:
• Assad used poison gas on his people and the United States bombed his airbase in a measured response.
• Assad attacked a rebel base that was unexpectedly storing poison gas and Trump bombed his airbase for political reasons.
• The Deep State in the United States is responsible for a “false flag” use of poison gas in order to undermine the Trump Insurgency.
• The Russians are responsible for a “false flag” use of poison gas in order to undermine the Deep State.
• Putin and Trump collaborated on a “false flag” in order to distract from “Russiagate.”
• Someone else (China? Israel? Iran?) is responsible for a “false flag” for purposes unknown.
And, just to make sure we really grasp the level of non-sense:
• There was no poison gas attack, the “white helmets” are fake news for purposes unknown and everyone who is in a position to know is spinning their own version of events for their own purposes.
Think this last one is implausible? Are you sure? Are you sure you know the current limits of the war on sensemaking? Of sock puppets and cognitive hacking and weaponized memetics?
All I am certain of about Syria is that I really have no fucking idea what is going on. And that this state of affairs — this increasingly generalized condition of complete disorientation — is untenable.

We are in a collective condition of fog of war. Acting effectively under fog of war requires exploiting local information before it has been integrated into a canonical perspective. In military contexts, units must make decisions before contacting a central base using information and models only available to them. Syrians must decide whether to flee based on their own observations, observations of those they trust, and trustworthy local media. Americans making voting decisions based on Syria must decide which media sources they trust most, or actually visit Syria to gain additional info.

While I have mostly discussed differences in information between people, there are also differences in reasoning ability and willingness to use reason. Most people most of the time aren’t even modeling things for themselves, but are instead parroting socially acceptable opinions. The products of reasoning could perhaps be considered as a form of logical information and treated similarly to other information.

In the past, I have found modest epistemology aesthetically appealing on the basis that sufficient coordination would lead to a single canonical perspective that you can increase your average accuracy by deferring to (as explained in this post). Since then, aesthetic intuitions have led me to instead think of the problem of collective epistemology as one of decentralized coordination: how can good-faith actors reason and act well as a collective superintelligence in conditions of fog of war, where deception is prevalent and creation of common knowledge is difficult? I find this framing of collective epistemology more beautiful than the idea of immediately deferring to a canonical perspective, and it is a better fit for the real world.

I haven’t completely thought through the implications of this framing (that would be impossible), but so far my thinking has suggested a number of heuristics for group epistemology:

  • Think for yourself. When your information sources are not already doing a good job of informing you, gathering your own information and forming your own models can improve your accuracy and tell you which information sources are most trustworthy. Outperforming experts often doesn’t require complex models or extraordinary insight; see this review of Superforecasting for a description of some of what good amateur forecasters do.

  • Share the products of your thinking. Where possible, share not only opinions but also the information or model that caused you to form the opinion. This allows others to verify and build on your information and models rather than just memorizing “X person believes Y”, resulting in more information transfer. For example, fact posts will generally be better for collective epistemology than a similar post with fewer facts; they will let readers form their own models based on the info and have higher confidence in these models.

  • Fact-check information people share by cross-checking it against other sources of information and models. The more this shared information is fact-checked, the more reliably true it will be. (When someone is wrong on the internet, this is actually a problem worth fixing).

  • Try to make information and models common knowledge among a group when possible, so they can be integrated into a canonical perspective. This allows the group to build on them, rather than having to re-derive or re-state them repeatedly. Contributing to a written canon that some group of people is expected to have read is a great way to do this.

  • When contributing to a canon, seek strong and clear evidence where possible. This can result in a question being definitively settled, which is great for the group’s ability to reliably get the right answer to the question, rather than having a range of “acceptable” answers that will be chosen from based on factors other than accuracy.

  • When taking actions (e.g. making bets), use local information available only to you or a small number of others, not only canonical information. For example, when picking organizations to support, use information you have about these organizations (e.g. information about the competence of people working at this charity) even if not everyone else has this info. (For a more obvious example to illustrate the principle: if I saw Bob steal my wallet, then it’s in my interest to guard my possessions more closely around Bob than I otherwise would, even if I can’t convince everyone that Bob stole my wallet).