Bad intent is a disposition, not a feeling

It’s common to think that someone else is arguing in bad faith. In a recent blog post, Nate Soares claims that this intuition is both wrong and harmful:

I believe that the ability to expect that conversation partners are well-intentioned by default is a public good. An extremely valuable public good. When criticism turns to attacking the intentions of others, I perceive that to be burning the commons. Communities often have to deal with actors that in fact have ill intentions, and in that case it’s often worth the damage to prevent an even greater exploitation by malicious actors. But damage is damage in either case, and I suspect that young communities are prone to destroying this particular commons based on false premises.

To be clear, I am not claiming that well-intentioned actions tend to have good consequences. The road to hell is paved with good intentions. Whether or not someone’s actions have good consequences is an entirely separate issue. I am only claiming that, in the particular case of small high-trust communities, I believe almost everyone is almost always attempting to do good by their own lights. I believe that propagating doubt about that fact is nearly always a bad idea.

If bad intent were really so rare in the relevant sense, it would be surprising that people are so quick to jump to the conclusion that it is present. Why would that be adaptive?

What reason do we have to believe that we’re systematically overestimating this? If we’re systematically overestimating it, why should we believe that it’s adaptive to suppress this?

There are plenty of reasons why we might make systematic errors on things that are too infrequent or too inconsequential to yield a lot of relevant-feeling training data, or that matter little for reproductive fitness. But social intuitions are a central case of the sort of thing I would expect humans to get right by default. I think the burden of evidence is on the side disagreeing with the intuitions behind this extremely common defensive response: to explain what bad actors are, why we are on such a hair-trigger against them, and why we should relax this.

Nate continues:

My models of human psychology allow for people to possess good intentions while executing adaptations that increase their status, influence, or popularity. My models also don’t deem people poor allies merely on account of their having instinctual motivations to achieve status, power, or prestige, any more than I deem people poor allies if they care about things like money, art, or good food. […]

One more clarification: some of my friends have insinuated (but not said outright as far as I know) that the execution of actions with bad consequences is just as bad as having ill intentions, and we should treat the two similarly. I think this is very wrong: eroding trust in the judgement or discernment of an individual is very different from eroding trust in whether or not they are pursuing the common good.

Nate’s argument is almost entirely about mens rea—about subjective intent to make something bad happen. But mens rea is not really a thing. He contrasts this with actions that have bad consequences, which are common. But there’s something in the middle: following an incentive gradient that rewards distortions. For instance, if you rigorously A/B test your marketing until it generates the presentation that attracts the most customers, and don’t bother to inspect why they respond positively to the result, then you’re simply saying whatever words get you the most customers, regardless of whether they’re true. In such cases, whether or not you ever formed a conscious intent to mislead, your strategy is to tell whichever lie is most convenient; there was nothing in your optimization target that forced your words to be true ones, and most possible claims are false, so you ended up making false claims.
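The mechanism here can be made concrete with a toy sketch. The claims and conversion rates below are entirely invented for illustration; the point is only structural: when the selection criterion is conversion rate alone, truth never enters the objective, so nothing prevents the winning claim from being false.

```python
# Toy model of an A/B-testing loop. Each candidate marketing claim
# carries a truth value and a (hypothetical) measured conversion rate.
candidates = [
    ("Our product helps some users save time", True, 0.03),
    ("Our product doubles your productivity", False, 0.07),
    ("Our product will change your life", False, 0.11),
]

# The optimizer selects purely on conversion rate; the truth value is
# carried along but plays no role in the selection.
best_claim, is_true, rate = max(candidates, key=lambda c: c[2])

print(best_claim, is_true)  # the winner happens to be a false claim
```

No conscious intent to deceive appears anywhere in this loop; the distortion comes entirely from what the optimization target omits.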

More generally, if you try to control others’ actions, and don’t limit yourself to doing that by honestly informing them, then you’ll end up with a strategy that distorts the truth, whether or not you meant to. The default hypothesis should be that any given constraint has not been applied to someone’s behavior. To say that someone has the honest intent to inform is a positive claim about their intent. It’s clear to me that we should expect this to sometimes be the case—sometimes people perceive a convergent incentive to inform one another, rather than a divergent incentive to grab control. But if you do not defend yourself and your community against divergent strategies unless there is unambiguous evidence, then you make yourself vulnerable to those strategies, and should expect to get more of them.

I’ve been criticizing EA organizations a lot for deceptive or otherwise distortionary practices (see here and here), and one response I often get is, in effect, “How can you say that? After all, I’ve personally assured you that my organization never had a secret meeting in which we overtly resolved to lie to people!”

Aside from the obvious problems with assuring someone that you’re telling the truth, this is generally something of a non sequitur. Your public communication strategy can be publicly observed. If it tends to create distortions, then I can reasonably infer that you’re following some sort of incentive gradient that rewards some kinds of distortions. I don’t need to know about your subjective experiences to draw this conclusion. I don’t need to know your inner narrative. I can just look, as a member of the public, and report what I see.

Acting in bad faith doesn’t make you intrinsically a bad person, because there’s no such thing. And besides, it wouldn’t be so common if it required an exceptionally bad character. But it has to be OK to point out when people are not just mistaken, but following patterns of behavior that are systematically distorting the discourse—and to point this out publicly so that we can learn to do better, together.

(Cross-posted at my personal blog.)

[EDITED 1 May 2017 - changed wording of title from “behavior” to “disposition”]