In the presence of disinformation, collective epistemology requires local modeling

In Inadequacy and Modesty, Eliezer describes modest epistemology:

How likely is it that an entire country—one of the world’s most advanced countries—would forego trillions of dollars of real economic growth because their monetary controllers—not politicians, but appointees from the professional elite—were doing something so wrong that even a non-professional could tell? How likely is it that a non-professional could not just suspect that the Bank of Japan was doing something badly wrong, but be confident in that assessment?
Surely it would be more realistic to search for possible reasons why the Bank of Japan might not be as stupid as it seemed, as stupid as some econbloggers were claiming. Possibly Japan’s aging population made growth impossible. Possibly Japan’s massive outstanding government debt made even the slightest inflation too dangerous. Possibly we just aren’t thinking of the complicated reasoning going into the Bank of Japan’s decision.
Surely some humility is appropriate when criticizing the elite decision-makers governing the Bank of Japan. What if it’s you, and not the professional economists making these decisions, who have failed to grasp the relevant economic considerations?
I’ll refer to this genre of arguments as “modest epistemology.”

I see modest epistemology as attempting to defer to a canonical perspective: a way of making judgments that is a Schelling point for coordination. In this case, the Bank of Japan has more claim to canonicity than Eliezer does regarding claims about Japan’s economy. I think deferring to a canonical perspective is key to how modest epistemology functions and why people find it appealing.

In social groups such as effective altruism, canonicity is useful when it allows for better coordination. If everyone can agree that charity X is the best charity, then it is possible to punish those who do not donate to charity X. This is similar to law: if a legal court makes a judgment that is not overturned, that judgment must be obeyed by anyone who does not want to be punished. Similarly, in discourse, it is often useful to punish crackpots by requiring deference to a canonical scientific judgment.

It is natural that deferring to a canonical perspective would be psychologically appealing, since it offers a low likelihood of being punished for deviating while allowing deviants to be punished, creating a sense of unity and certainty.

An obstacle to canonical perspectives is that epistemology requires using local information. Suppose I saw Bob steal my wallet. I have information about whether he actually stole my wallet (namely, my observation of the theft) that no one else has. If I tell others that Bob stole my wallet, they might or might not believe me depending on how much they trust me, as there is some chance I am lying to them. Constructing a more canonical perspective (e.g. in a court of law) requires integrating this local information: for example, I might tell the judge that Bob stole my wallet, and my friends might vouch for my character.
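
To make the trust calculation concrete, here is a minimal sketch (with made-up numbers; nothing beyond the wallet story comes from the example above) of how a listener might integrate my report using Bayes’ rule:

```python
# Minimal sketch: how a listener might update on my report that Bob
# stole my wallet. All probabilities are illustrative assumptions.

def posterior_theft(prior, p_report_given_theft, p_report_given_no_theft):
    """P(theft | report), from the listener's point of view."""
    joint_theft = prior * p_report_given_theft
    joint_no_theft = (1 - prior) * p_report_given_no_theft
    return joint_theft / (joint_theft + joint_no_theft)

# A listener with a 1% prior on theft, who thinks I'd report a real
# theft 95% of the time but might lie or err 5% of the time otherwise:
print(posterior_theft(0.01, 0.95, 0.05))  # ~0.16: suspicious, not convinced
# If friends vouching for my character cut the lie/error rate to 1%:
print(posterior_theft(0.01, 0.95, 0.01))  # ~0.49: my word carries far more weight
```

The same report moves different listeners by different amounts depending on how much they trust me, which is what makes integrating local information into a shared verdict nontrivial.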

If humanity formed a collective superintelligence that integrated local information into a canonical perspective at the speed of light using sensible rules (e.g. something similar to Bayesianism), then there would be little need to exploit local information except to transmit it to this collective superintelligence. Obviously, this hasn’t happened yet. Collective superintelligences made of humans must transmit information at the speed of human communication rather than the speed of light.

In addition to limits on communication speed, collective superintelligences made of humans have another difficulty: they must prevent and detect disinformation. People on the internet sometimes lie, as do people off the internet. Self-deception is effectively another form of deception, and is extremely common, as explained in The Elephant in the Brain.

Mostly because of this, current collective superintelligences leave much to be desired. As Jordan Greenhall writes in this post:

Take a look at Syria. What exactly is happening? With just a little bit of looking, I’ve found at least six radically different and plausible narratives:
• Assad used poison gas on his people and the United States bombed his airbase in a measured response.
• Assad attacked a rebel base that was unexpectedly storing poison gas and Trump bombed his airbase for political reasons.
• The Deep State in the United States is responsible for a “false flag” use of poison gas in order to undermine the Trump Insurgency.
• The Russians are responsible for a “false flag” use of poison gas in order to undermine the Deep State.
• Putin and Trump collaborated on a “false flag” in order to distract from “Russiagate.”
• Someone else (China? Israel? Iran?) is responsible for a “false flag” for purposes unknown.
And, just to make sure we really grasp the level of non-sense:
• There was no poison gas attack, the “white helmets” are fake news for purposes unknown, and everyone who is in a position to know is spinning their own version of events for their own purposes.
Think this last one is implausible? Are you sure? Are you sure you know the current limits of the war on sensemaking? Of sock puppets and cognitive hacking and weaponized memetics?
All I am certain of about Syria is that I really have no fucking idea what is going on. And that this state of affairs — this increasingly generalized condition of complete disorientation — is untenable.

We are in a collective condition of fog of war. Acting effectively under fog of war requires exploiting local information before it has been integrated into a canonical perspective. In military contexts, units must make decisions before contacting a central base, using information and models available only to them. Syrians must decide whether to flee based on their own observations, the observations of those they trust, and trustworthy local media. Americans making voting decisions based on Syria must decide which media sources they trust most, or actually visit Syria to gain additional information.

While I have mostly discussed differences in information between people, there are also differences in reasoning ability and willingness to use reason. Most people, most of the time, aren’t even modeling things for themselves, but are instead parroting socially acceptable opinions. The products of reasoning could perhaps be considered a form of logical information and treated similarly to other information.

In the past, I have found modest epistemology aesthetically appealing on the basis that sufficient coordination would lead to a single canonical perspective that you can increase your average accuracy by deferring to (as explained in this post). Since then, aesthetic intuitions have led me to instead think of the problem of collective epistemology as one of decentralized coordination: how can good-faith actors reason and act well as a collective superintelligence in conditions of fog of war, where deception is prevalent and the creation of common knowledge is difficult? I find this framing of collective epistemology more beautiful than the idea of immediately deferring to a canonical perspective, and it is a better fit for the real world.
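
The claim about average accuracy can be illustrated with a toy simulation (my own, with arbitrary parameters): when everyone estimates a quantity with independent noise, deferring to the group mean beats relying on a typical individual’s estimate.

```python
import random

# Toy simulation (parameters are arbitrary) of why deferring to an
# aggregated perspective raises *average* accuracy: the mean of many
# independent noisy estimates has lower error than any one of them.
random.seed(0)
truth = 10.0
trials, n_people = 1000, 25

individual_sq_err = consensus_sq_err = 0.0
for _ in range(trials):
    estimates = [truth + random.gauss(0, 3) for _ in range(n_people)]
    consensus = sum(estimates) / n_people
    individual_sq_err += (estimates[0] - truth) ** 2  # one arbitrary person
    consensus_sq_err += (consensus - truth) ** 2

print(individual_sq_err / trials)  # ~9.0: one person's error variance
print(consensus_sq_err / trials)   # ~0.36: shrinks by a factor of n_people
```

The catch is that the averaging step assumes honest, independent inputs; under fog of war with prevalent deception, those assumptions are exactly what fail.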

I haven’t completely thought through the implications of this framing (that would be impossible), but so far my thinking has suggested a number of heuristics for group epistemology:

  • Think for yourself. When your information sources are not already doing a good job of informing you, gathering your own information and forming your own models can improve your accuracy and tell you which information sources are most trustworthy. Outperforming experts often doesn’t require complex models or extraordinary insight; see this review of Superforecasting for a description of some of what good amateur forecasters do.

  • Share the products of your thinking. Where possible, share not only your opinions but also the information or models that caused you to form them. This allows others to verify and build on your information and models rather than just memorizing “X person believes Y”, resulting in more information transfer. For example, fact posts will generally be better for collective epistemology than a similar post with fewer facts; they let readers form their own models based on the information and have higher confidence in those models.

  • Fact-check the information people share by cross-checking it against other sources of information and models. The more this shared information is fact-checked, the more reliably true it will be. (When someone is wrong on the internet, this is actually a problem worth fixing.) For a toy model of what cross-checking buys, see the sketch after this list.

  • Try to make information and models common knowledge among a group when possible, so they can be integrated into a canonical perspective. This allows the group to build on them, rather than having to re-derive or re-state them repeatedly. Contributing to a written canon that some group of people is expected to have read is a great way to do this.

  • When contributing to a canon, seek strong and clear evidence where possible. This can result in a question being definitively settled, which is great for the group’s ability to reliably get the right answer to the question, rather than having a range of “acceptable” answers that will be chosen from based on factors other than accuracy.

  • When taking actions (e.g. making bets), use local information available only to you or a small number of others, not only canonical information. For example, when picking organizations to support, use information you have about these organizations (e.g. information about the competence of people working at a charity), even if not everyone else has this information. (For a more obvious example to illustrate the principle: if I saw Bob steal my wallet, then it’s in my interest to guard my possessions more closely around Bob than I otherwise would, even if I can’t convince everyone that Bob stole my wallet.)
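
As referenced in the fact-checking item above, here is a minimal sketch (a toy model of my own, not anything prescribed by this post) of cross-checking as evidence combination, where each source contributes a likelihood ratio and the ratios multiply only if the sources are independent:

```python
# Toy model: combine a prior with several sources, each summarized by a
# likelihood ratio P(source asserts claim | true) / P(source asserts claim | false).
# Ratios above 1 corroborate the claim; ratios below 1 count against it.

def posterior_probability(prior, likelihood_ratios):
    """Posterior P(claim) after multiplying in each source's likelihood ratio."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# A claim with prior 0.1, two corroborating sources and one contradicting:
print(posterior_probability(0.1, [4.0, 3.0, 0.5]))  # ~0.4
```

The independence assumption does all the work here: sources that copy each other, or that trace back to a single sock puppet, should only be counted once, and detecting that is much of what makes fact-checking hard in the presence of disinformation.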