Comment on “Endogenous Epistemic Factionalization”

In “Endogenous Epistemic Factionalization” (due in a forthcoming issue of the philosophy-of-science journal Synthese), James Owen Weatherall and Cailin O’Connor propose a possible answer to the question of why people form factions that disagree on multiple subjects.

The existence of persistent disagreements is already kind of a puzzle from a Bayesian perspective. There’s only one reality. If everyone is honestly trying to get the right answer and we can all talk to each other, then we should converge on the right answer (or an answer that is less wrong given the evidence we have). The fact that we can’t do it is, or should be, an embarrassment to our species. And the existence of correlated persistent disagreements—when not only do I say “top” when you say “bottom” even after we’ve gone over all the arguments for whether it is in fact the case that top or bottom, but furthermore, my saying “top” lets you predict that I’ll probably say “cold” rather than “hot” even before we go over the arguments for that—is an atrocity. (Not hyperbole. Thousands of people are dying horrible suffocation deaths because we can’t figure out the optimal response to a new kind of coronavirus.)

Correlations between beliefs are often attributed to ideology or tribalism: if I believe that Markets Are the Answer, I’m likely to propose Market-based solutions to all sorts of seemingly-unrelated social problems, and if I’m loyal to the Green tribe, I’m likely to selectively censor my thoughts in order to fit the Green party line. But ideology can’t explain correlated disagreements on unrelated topics that the content of the ideology is silent on, and tribalism can’t explain correlated disagreements on narrow, technical topics that aren’t tribal shibboleths.

In this paper, Weatherall and O’Connor exhibit a toy model that proposes a simple mechanism that can explain correlated disagreement: if agents disbelieve in evidence presented by those with sufficiently dissimilar beliefs, factions emerge, even though everyone is honestly reporting their observations and updating on what they are told (to the extent that they believe it). The paper didn’t seem to provide source code for the simulations it describes, so I followed along in Python. (Replication!)

In each round of the model, our little Bayesian agents choose between repeatedly performing one of two actions, A or B, that can “succeed” or “fail.” A is a fair coin: it succeeds exactly half the time. As far as our agents know, B is either slightly better or slightly worse: the per-action probability of success is either 0.5 + ε or 0.5 − ε, for some ε (a parameter to the simulation). But secretly, we the simulation authors know that B is better.

import random

ε = 0.01

def b():
    # Action B secretly succeeds with probability 0.5 + ε.
    return random.random() < 0.5 + ε

The agents start out with a uniformly random probability that B is better. The ones who currently believe that A is better, repeatedly do A (and don’t learn anything, because they already know that A is exactly a coinflip). The ones who currently believe that B is better, repeatedly do B, but keep track of and publish their results in order to help everyone figure out whether B is slightly better or slightly worse than a coinflip.

class Agent:
    …

    def experiment(self):
        # Perform trial_count trials of action B and report the raw results.
        results = [b() for _ in range(self.trial_count)]
        return results
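
(The “…” elides the Agent boilerplate. For readers following along at home, a minimal constructor consistent with how the class gets used below might look like this—my own sketch, not anything specified in the paper.)

class Agent:
    # My sketch of the constructor that the other snippets assume.
    def __init__(self, credences, trial_count, mistrust):
        # credences[i] is this agent's probability that B is better for question i.
        self.credences = credences
        # How many trials of B to perform per round.
        self.trial_count = trial_count
        # Mistrust factor: how quickly distance in beliefspace erodes trust (used later).
        self.mistrust = mistrust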

If $H$ represents the hypothesis that B is better than A, and $\neg H$ represents the hypothesis that B is worse, then Bayes’s theorem says

\[ P(H|E) = \frac{P(E|H)\,P(H)}{P(E|H)\,P(H) + P(E|\neg H)\,P(\neg H)} \]

where E is the record of how many successes we got in how many times we tried action B. The likelihoods $P(E|H)$ and $P(E|\neg H)$ can be calculated from the probability mass function of the binomial distribution, so the agents have all the information they need to update their beliefs based on experiments with B.

from math import factorial

def binomial(p, n, k):
    # Probability mass function of the binomial distribution: the probability
    # of exactly k successes in n trials with per-trial success probability p.
    return (
        factorial(n) / (factorial(k) * factorial(n - k)) *
        p**k * (1 - p)**(n - k)
    )
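
(A quick sanity check of my own, not from the paper: one success in two flips of a fair coin should come out to exactly one half.)

assert binomial(0.5, 2, 1) == 0.5  # C(2, 1) * 0.5 * 0.5 = 0.5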

class Agent:
    …

    def pure_update(self, credence, hits, trials):
        # Bayes's theorem for the two hypotheses "B is better" (0.5 + ε) and
        # "B is worse" (0.5 - ε), given `hits` successes in `trials` tries of B.
        raw_posterior_good = binomial(0.5 + ε, trials, hits) * credence
        raw_posterior_bad = binomial(0.5 - ε, trials, hits) * (1 - credence)
        normalizing_factor = raw_posterior_good + raw_posterior_bad
        return raw_posterior_good / normalizing_factor
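
(To get a feel for how weak any one report’s evidence is—my own example, not the paper’s: with ε = 0.01, an agent at credence 0.5 who hears about 26 successes in 50 trials of B only moves to about 0.52. The construction here uses the hypothetical constructor sketched above.)

agent = Agent([0.5, 0.5, 0.5], trial_count=50, mistrust=2.0)
agent.pure_update(0.5, 26, 50)  # ≈ 0.52: a small nudge toward "B is better"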

Except in order to study the emergence of clustering among multiple beliefs, we should actually have our agents face multiple “A or B” dilemmas, representing beliefs about unrelated questions. (In each case, B will again be better, but the agents don’t start out knowing that.) I chose three questions/beliefs, because that’s all I can fit in a pretty 3D scatterplot.

If all the agents update on the experimental results published by the agents who do B, they quickly learn that B is better for all three questions. If we make a pretty 3D scatterplot where each dimension represents the probability that B is better for one of the dilemmas, then the points converge over time to the [1.0, 1.0, 1.0] “corner of Truth”, even though they started out uniformly distributed all over the space.
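
(In my replication, this fully-trusting baseline is just the degenerate case of the machinery developed below: a mistrust factor of zero makes the discount on every report zero, so every agent performs a pure Bayesian update on everything published. Something like the following, with, say, the same parameters as the factionalized run shown later—)

trusting_agents = simulation(
    agent_count=200,
    question_count=3,
    round_count=20,
    trial_count=50,
    mistrust=0,  # never discount anyone's reports
)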

But suppose the agents don’t trust each other’s reports. (“Sure, she says she performed B 50 times and observed 26 successes, but she also believes that B is better than A, which is crazy. Are we sure she didn’t just make up those 50 trials of B?”) Specifically, our agents assign a probability that a report is made-up (and therefore should not be updated on) in proportion to their distance from the reporter in our three-dimensional beliefspace, and a “mistrust factor” (a parameter to the simulation).

from math import sqrt

def euclidean_distance(v, w):
    return sqrt(sum((v[i] - w[i]) ** 2 for i in range(len(v))))

class Agent:
    …

    def discount_factor(self, reporter_credences):
        # Distrust grows linearly with distance in beliefspace, capped at 1
        # (at which point the report is ignored entirely).
        return min(
            1, self.mistrust * euclidean_distance(self.credences, reporter_credences)
        )

    def update(self, question, hits, trials, reporter_credences):
        # Mix the Bayesian posterior with the prior, weighted by trust in the reporter.
        discount = self.discount_factor(reporter_credences)
        posterior = self.pure_update(self.credences[question], hits, trials)
        self.credences[question] = (
            discount * self.credences[question] + (1 - discount) * posterior
        )
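
(To make the numbers concrete—my own illustration, again using the hypothetical constructor from above: with a mistrust factor of 2, a reporter on the far side of beliefspace gets ignored entirely, while a near neighbor is mostly believed.)

me = Agent([0.9, 0.9, 0.9], trial_count=50, mistrust=2.0)
me.discount_factor([0.2, 0.2, 0.2])  # distance ≈ 1.21, times 2, capped at 1.0: ignore the report
me.discount_factor([0.8, 0.9, 0.9])  # distance 0.1, times 2 = 0.2: mostly trust the report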

(Um, the paper itself actually uses a slightly more complicated mistrust calculation that also takes into account the agent’s prior probability of the evidence, but I didn’t quite understand the motivation for that, so I’m going with my version. I don’t think the grand moral is affected.)

Then we can simulate what happens if the distrustful agents do many rounds of experiments and talk to each other—

def summarize_experiment(results):
    return (len([r for r in results if r]), len(results))

def simulation(
    agent_count,  # number of agents
    question_count,  # number of questions
    round_count,  # number of rounds
    trial_count,  # number of trials per round
    mistrust,  # mistrust factor
):
    agents = [
        Agent(
            [random.random() for _ in range(question_count)],
            trial_count=trial_count,
            mistrust=mistrust,
        )
        for i in range(agent_count)
    ]

    for _ in range(round_count):
        for question in range(question_count):
            experiments = []
            for agent in agents:
                if agent.credences[question] >= 0.5:
                    experiments.append(
                        (summarize_experiment(agent.experiment()), agent.credences)
                    )
            for agent in agents:
                for experiment, reporter_credences in experiments:
                    hits, trials = experiment
                    agent.update(
                        question,
                        hits,
                        trials,
                        reporter_credences,
                    )

    return agents

Depending on the exact parameters, we’re likely to get a result that “looks like” this agent_count=200, round_count=20, question_count=3, trial_count=50, mistrust=2 run—
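
(In code, that run is just—)

factious_agents = simulation(
    agent_count=200,
    question_count=3,
    round_count=20,
    trial_count=50,
    mistrust=2,
)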

Some of the agents (depicted in red) have successfully converged on the corner of Truth, but the others have polarized into factions that are all wrong about something. (The colors in the pretty 3D scatterplot are a k-means clustering for k := 8.) On average, evidence pushes our agents towards Truth—note the linearity of the blue and purple points, illustrating convergence on two out of the three problems—but agents who erroneously believe that A is better (due to some combination of a bad initial credence and unlucky experimental results that failed to reveal B’s ε “edge” in the sample size allotted) can end up too far away to trust those who are gathering evidence for, and correctly converging on, the superiority of B.
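
(For anyone replicating: a minimal sketch, assuming matplotlib and scikit-learn are available, of how the run above can be turned into a colored 3D scatterplot—the paper doesn’t specify any plotting code.)

from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

points = [agent.credences for agent in factious_agents]
labels = KMeans(n_clusters=8).fit_predict(points)  # k := 8 clusters, as in the plot

figure = plt.figure()
axes = figure.add_subplot(projection="3d")
axes.scatter(
    [p[0] for p in points],  # P(B is better) for question 1
    [p[1] for p in points],  # … question 2
    [p[2] for p in points],  # … question 3
    c=labels,
)
plt.show()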

Our authors wrap up:

[T]his result is especially notable because there is something reasonable about ignoring evidence generated by those you do not trust—particularly if you do not trust them on account of their past epistemic failures. It would be irresponsible for scientists to update on evidence produced by known quacks. And furthermore, there is something reasonable about deciding who is trustworthy by looking at their beliefs. From my point of view, someone who has regularly come to hold beliefs that diverge from mine looks like an unreliable source of information. In other words, the updating strategy used by our agents is defensible. But, when used on the community level, it seriously undermines the accuracy of beliefs.

I think the moral here is slightly off. The specific something reasonable about ignoring evidence generated by those you do not trust on account of their beliefs is the assumption that those who have beliefs you disagree with are following a process that produces systematically misleading evidence. In this model, that assumption is just wrong. The problem isn’t that the updating strategy used by our agents is individually “defensible” (what does that mean?) but produces inaccuracy “when used on the community level” (what does that mean?); the problem is that you get the wrong answer if your degree of trust doesn’t match agents’ actual trustworthiness. Still, it’s enlighteningly disturbing to see specifically how the “distrust those who disagree” heuristic descends into the madness of factions.

(Full source code.)