Which cognitive biases should we trust?

There have been (at least) a couple of attempts on LW to make Anki flashcards from Wikipedia's famous List of Cognitive Biases, here and here. However, stylistically they are not my type of flashcard, with too much info in the "answer" section.

Further, and more troublingly, I'm not sure whether all of the biases in the flashcards are real, generalizable effects; or, if they are real, whether they have effect sizes large enough to be worth the effort to learn & disseminate. Psychology is an academic discipline with all of the baggage that entails. Psychology is also one of the least tangible sciences, which is not helpful.

There are studies showing that Wikipedia is no less reliable than more conventional sources, but this is in aggregate, and it seems plausible (though difficult to detect without diligently checking sources) that the set of cognitive bias articles on Wikipedia has high variance in quality.

We do have some knowledge of how many of them were made, in that LW user nerfhammer wrote a bunch. But, as far as I can tell, s/he didn't discuss how s/he selected biases to include. (Though, s/he is obviously quite knowledgeable on the subject, see e.g. here.)

As the articles stand today, many (e.g., here, here, here, here, and here) only cite research from one study/lab. I do not want to come across as whining: the authors who wrote these on Wikipedia are awesome. But, as a consumer, the lack of independent replication makes me nervous. I don't want to contribute to information cascades.

Nevertheless, I do still want to make flashcards for at least some of these biases, because I am relatively sure that there are some strong, important, widespread biases out there.

So, I am asking LW whether you all have any ideas about, on the meta level,

1) how we should go about deciding/indexing which articles/biases capture legit effects worth knowing,

and, on the object level,

2) which of the biases/heuristics/fallacies are actually legit (like, a list).

Here are some of my ideas. First, for how to decide:

- Only include biases that are mentioned by prestigious sources like Kahneman in his new book. Upside: authoritative. Downside: potentially throwing out some good info and putting too much faith in one source.

- Only include biases whose Wikipedia articles cite at least two primary articles that share none of the same authors. Upside: establishes some degree of consensus in the field. Downside: won't actually vet the articles for quality, and a presumably false assumption that the Wikipedia pages will reflect the state of knowledge in the field.

- Search for the name of the bias (or any bold, alternative names on Wikipedia) on Google Scholar, and only accept those with, say, >30 citations. Upside: less of a sampling bias of what is included on Wikipedia, which is likely to be somewhat arbitrary. Downside: information cascades occur in academia too, and this method doesn't filter for actual experimental evidence (e.g., there could be lots of reviews discussing the idea).

- Make some sort of a voting system where experts (surely some frequent this site) can weigh in on what they think of the primary evidence for a given bias. Upside: rather than counting articles, evaluates actual evidence for the bias. Downside: seems hard to get the scale (~8–12+ people voting) to make this useful.

- Build some arbitrarily weighted rating scale that takes into account some or all of the above. Upside: meta. Downside: garbage in, garbage out, and the first three features seem highly correlated anyway.
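To make the last idea concrete, here is a minimal sketch of what such a weighted rating scale could look like. All of the weights, thresholds, and the 0.5 cutoff are made-up placeholders, and the criteria names are my own shorthand for the four ideas above; the point is only that combining the signals is easy once you've gathered them.

```python
# Toy weighted score for deciding whether a bias is "legit" enough to
# flashcard. Every weight and threshold here is arbitrary, for illustration.

def bias_score(mentioned_by_kahneman: bool,
               independent_citations: int,  # primary articles with disjoint author sets
               scholar_citations: int,      # Google Scholar citation count
               expert_votes: float) -> float:  # mean expert rating, scaled 0-1
    """Combine the four selection criteria into one (arbitrary) score in [0, 1]."""
    score = 0.0
    if mentioned_by_kahneman:
        score += 0.3
    if independent_citations >= 2:
        score += 0.2
    # Cap the citation contribution so one heavily-hyped idea can't dominate.
    score += 0.2 * min(scholar_citations / 30, 1.0)
    score += 0.3 * expert_votes
    return score

def include(bias: dict) -> bool:
    # Arbitrary cutoff: keep biases scoring above 0.5.
    return bias_score(**bias) > 0.5
```

Since the first three features are probably highly correlated, in practice the expert-vote term is doing most of the independent work here, which is the garbage-in/garbage-out worry restated.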

Second, for which biases to include. I'm just going off of which ones I have heard of and/or look legit on a fairly quick run-through. Note that those annotated with a (?) are ones I am especially unsure about.

- anchoring

- availability

- bandwagon effect

- base rate neglect

- choice-supportive bias

- clustering illusion

- confirmation bias

- conjunction fallacy (is subadditivity a subset of this?)

- conservatism (?)

- context effect (aka state-dependent memory)

- curse of knowledge (?)

- contrast effect

- decoy effect (aka independence of irrelevant alternatives)

- Dunning–Kruger effect (?)

- duration neglect

- empathy gap

- expectation bias

- framing

- gambler's fallacy

- halo effect

- hindsight bias

- hyperbolic discounting

- illusion of control

- illusion of transparency

- illusory correlation

- illusory superiority

- illusion of validity (?)

- impact bias

- information bias (? aka failure to consider value of information)

- in-group bias (clearly real, but I'm not sure I'd call it a bias)

- escalation of commitment (aka sunk cost/loss aversion/endowment effect; note, contra Gwern, that I do think this is a useful fallacy to know about, if overrated)

- false consensus (related to projection bias)

- Forer effect

- fundamental attribution error (related to the just-world hypothesis)

- familiarity principle (aka mere exposure effect)

- moral licensing (aka moral credential)

- negativity bias (seems controversial & it's troubling that there is also a positivity bias)

- normalcy bias (related to existential risk?)

- omission bias

- optimism bias (related to overconfidence)

- outcome bias (aka moral luck)

- outgroup homogeneity bias

- peak-end rule

- primacy

- planning fallacy

- reactance (aka contrarianism)

- recency

- representativeness

- self-serving bias

- social desirability bias

- status quo bias

Happy to hear any thoughts!