What cognitive biases feel like from the inside

Building on the recent SSC post Why Doctors Think They're The Best...

| What it feels like for me | How I see others who feel the same |
|---|---|
| There is controversy on the subject, but there shouldn't be, because the side I am on is obviously right. | They have taken one side in a debate that is unresolved for good reason, which they are struggling to understand. |
| I have been studying this carefully. | They preferentially seek out confirming evidence. |
| The arguments for my side make obvious sense; they're almost boring. | They're very ready to accept any and all arguments for their side. |
| The arguments for the opposing side are contradictory, superficial, illogical, or debunked. | They dismiss arguments for the opposing side at the earliest opportunity. |
| The people on the opposing side believe these arguments mostly because they are uninformed, have not thought about it enough, or are being actively misled by people with bad motives. | The flawed way they perceive the opposing side makes them confused about how anyone could be on that side. They resolve that confusion by making strong assumptions that can approach conspiracy theories. |

The scientific term for this mismatch is: confirmation bias

| What it feels like for me | How I see others who feel the same |
|---|---|
| My customers/friends/relationships love me, so I am good for them, so I am probably just generally good. | They neglect the customers/friends/relationships that did not love them and have left, so they overestimate how good they are. |
| When customers/friends/relationships switch to me, they tell horror stories about whom I'm replacing for them, so I'm better than those. | They don't see the people who are happy with whom they already have, and who therefore never become their customers/friends/relationships. |

The scientific term for this mismatch is: selection bias

| What it feels like for me | How I see others who feel the same |
|---|---|
| Although I am smart and friendly, people don't listen to me. | Although they are smart and friendly, they are hard to understand. |
| I have a deep understanding of the issue that people are too stupid or too uninterested to come to share. | They are failing to communicate their understanding, or to give unambiguous evidence that they even have it. |
| This lack of being listened to affects several areas of my life, but it is particularly jarring on topics that are very important to me. | This bad communication affects all areas of their life, but on the unimportant ones they don't even understand that others don't understand them. |

The scientific term for this mismatch is: illusion of transparency

| What it feels like for me | How I see others who feel the same |
|---|---|
| I knew at the time this would not go as planned. | They did not predict what was going to happen. |
| The plan was bad, and we should have known it was bad. | They fail to appreciate how hard prediction is, so the mistake seems more obvious to them than it was. |
| I knew it was bad; I just didn't say so, for good reasons (e.g. out of politeness, or too much trust in those who made the bad plan), or because it is not my responsibility, or because nobody listens to me anyway. | To avoid blame for the seemingly obvious mistake, they are making up excuses. |

The scientific term for this mismatch is: hindsight bias

| What it feels like for me | How I see others who feel the same |
|---|---|
| I have good intuition; even decisions I make based on insufficient information tend to turn out to be right. | They tend to recall their own successes and forget their own failures, leading to an inflated sense of past success. |
| I know early on how well certain projects are going to go, or how well I will get along with certain people. | They make self-fulfilling prophecies that directly influence how much effort they put into a project or relationship. |
| Compared to others, I am unusually successful in my decisions. | They evaluate the decisions of others more level-headedly than their own. |
| I am therefore comfortable relying on my quick decisions. | They therefore overestimate the quality of their decisions. |
| This is especially true for life decisions that are very important to me. | Yes, this is especially true for life decisions that are very important to them. |

The scientific term for this mismatch is: optimism bias

Why this is better than how we usually talk about biases

Communication in abstracts is very hard. (See: Illusion of Transparency: Why No One Understands You) Therefore, it often fails. (See: Explainers Shoot High. Aim Low!) It is hard to even notice that communication has failed. (See: Double Illusion of Transparency) Therefore, it is hard to appreciate how rarely communication in abstracts actually succeeds.

Rationalists have noticed this. (Example) Scott Alexander uses a lot of concrete examples, and that is probably a major reason why he's our best communicator. Eliezer's Sequences work partly because he uses examples, and even fiction, to illustrate. But when the rest of us talk about rationality, we still mostly talk in abstracts.

For example, this recent video was praised by many for being comparatively approachable. And it does do many things right, such as emphasizing and repeating that evidence alone should not generate probabilities, but should only ever update prior probabilities. Yet it still spends more than half of its runtime displaying mathematical notation that no more than 3% of the population can even read. For the vast majority of people, only the example it uses can possibly "stick". Even so, the video uses its single example as no more than a means of getting to the abstract explanation.

This is a mistake. I believe a video with three to five vivid examples of how to apply Bayes' Theorem, preferably funny or sexy ones, would leave a much more lasting impression on most people.
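To illustrate the kind of concrete example that could anchor such a video, here is a minimal sketch of applying Bayes' Theorem to a hypothetical medical test. The numbers (a 1% base rate, a test with a 90% true-positive rate and a 5% false-positive rate) are invented for illustration, not taken from the video:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' Theorem: update a prior probability of a hypothesis H
    after observing the evidence E.

    P(H|E) = P(E|H) * P(H) / P(E), where
    P(E) = P(E|H) * P(H) + P(E|not H) * P(not H)
    """
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Hypothetical numbers: a condition affects 1% of people; the test
# catches 90% of true cases but also flags 5% of healthy people.
p_sick = posterior(prior=0.01,
                   p_evidence_given_h=0.90,
                   p_evidence_given_not_h=0.05)
print(round(p_sick, 3))  # prints 0.154
```

The point the video emphasizes shows up directly: a positive result does not mean "90% likely sick"; it merely updates the 1% prior to about 15%, because the false positives among the healthy majority outnumber the true positives.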

Our highly demanding style of communication correctly predicts that LessWrongians are, on average, much smarter, much more STEM-educated, and much younger than the general population. You have to be that way to even be able to drink the Kool-Aid! This makes us homogeneous, which is probably a big part of what makes LW feel tribal, which is emotionally satisfying. But it leaves most of the world with their bad decisions. We need to be Raising the Sanity Waterline, and we can't do that by continuing to communicate largely in abstracts.

The tables above show one way to do better, which does the following:

  • It aims low: merely to help people notice the flaws in their thinking. It will not, and does not need to, enable readers to write scientific papers on the subject.

  • It reduces biases to mismatches between Inside View and Outside View. It lists concrete observations from both views and juxtaposes them.

  • These observations are written in a way that is hopefully general enough for most people to find that they match their own experiences.

  • It trusts readers to infer from these juxtaposed observations their own understanding of the phenomena. After all, generalizing over particulars is much easier than integrating generalizations and applying them to particulars. The understanding gained this way will be imprecise, but it has the advantage of actually arriving inside the reader's mind.

  • It is nearly jargon-free; it names the biases only for the benefit of the small minority who might want to learn more.

What do you think? Should we communicate more concretely? If so, should we do it this way, or what would you do differently?

Would you like to correct these tables? Would you like to propose more analogous observations, or other biases?

Thanks to Simon, miniBill and others for helping with the draft of this post.