Ignorance, a skilled practice



The Global Knowledge Game

To illustrate that global knowledge is a game, consider a story about Alexander Luria, who studied illiterate Russian peasants and their semi-literate children. Consider especially this version of the story, prepared in the 1970s to provide morale and context to reading teachers (Guthrie, 1977). Essentially, Luria discovered that the illiterate, unschooled peasants were highly resistant to syllogisms and word games. The adult peasants would only answer questions based on their own knowledge, and stubbornly refused to make deductions from given premises. “All bears are white where it is snowy. It is snowy in Nova Zembla. What color are the bears in Nova Zembla?” “I don’t know, I have never been to Nova Zembla.” Children with only a year or two of education, however, were easily able to engage in such abstract reasoning. They quickly answered the syllogisms and drew inferences from hypothetical facts outside of their own observation.

In this story, I argue, Luria’s peasants are indexical geniuses, who refuse to engage in unproven syllogistic games. They are not interested in a global, universal game. Their children, however, are easily introduced to this game by the process of schooling and literacy.

Interestingly, a more recent group of researchers claim that illiterate people do fine at making inferences against experience, if the context is given as a distant planet (Dias et al., 2005). I am not offering this as true, but as a story about how expecting people to operate in the “global knowledge game” might portray them as stupider than they really are, if they simply choose not to play that game. This is to segue into the next hermeneutic pass, in which we are told that the hype surrounding “cognitive bias” is really a sort of science magic trick, an illusion designed to portray indexical geniuses, like Luria’s peasants and ourselves, as global fools.

The paper is “The Bias Bias in Behavioral Economics,” by Gerd Gigerenzer (2018). If you, like me, have ever been fascinated by cognitive bias research, this is a brutal paper to come to terms with. Gigerenzer examines several purported biases in what I would call analytic reasoning or the global knowledge game, and finds explanations for these purported biases in the indexical reality of humans.

For instance, some apparent “biases” that people display about probability are not actually errors. For the small (and in most cases, merely finite) samples that reality has to offer, people’s “biased” intuitions are more accurate than a “globally correct” answer would be (that is, the correct answer if the sample were infinite). In tossing fair coins, people tend to intuit that irregular strings are more probable than more regular strings (e.g. that HHHT is more probable than HHHH in a sequence of coin flips). This simple intuition can’t be correct, though, because given infinite coin flips, each string is as likely as any other, and if the sequence is only four flips, after HHH, each outcome is equally likely. But for small, finite numbers of flips greater than the string length, Gigerenzer argues, it is the human intuition that is correct, not the naive global solution: HHHT does take less time to show up than HHHH in repeated simulations, and is more commonly encountered in small samples. To drive home his point, he offers a bet:

If you are still not convinced, try this bet (Hahn and Warren, 2010), which I will call the law-of-small-numbers bet:

You flip a fair coin 20 times. If this sequence contains at least one HHHH, I pay you $100. If it contains at least one HHHT, you pay me $100. If it contains neither, nobody wins.
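The bet is easy to check for yourself. Here is a minimal simulation sketch in Python (the function name, trial count, and seed are my own choices, not from the paper) that flips a fair coin 20 times per trial and counts how often each string shows up at least once:

```python
import random

def law_of_small_numbers_bet(trials=100_000, flips=20, seed=1):
    """Estimate how often HHHH and HHHT each appear at least once
    in a sequence of 20 fair coin flips."""
    rng = random.Random(seed)
    hhhh = hhht = 0
    for _ in range(trials):
        seq = "".join(rng.choice("HT") for _ in range(flips))
        hhhh += "HHHH" in seq  # True counts as 1
        hhht += "HHHT" in seq
    return hhhh / trials, hhht / trials

p_hhhh, p_hhht = law_of_small_numbers_bet()
# p_hhht comes out noticeably higher than p_hhhh, so the bet
# favors whoever is paid on HHHT -- the "biased" human intuition.
```

The asymmetry comes from overlap: HHHH overlaps with itself, so its occurrences clump together in long runs of heads, making it less likely to appear at least once in a short window than the non-self-overlapping HHHT.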

More broadly, cognitive bias proponents find fault with their subjects for treating “logically equivalent” language statements as having different meanings, when context reveals that these “logically irrelevant” cues frequently do reveal rich meaning in practice. For instance, people react differently to the “same” information presented negatively vs. positively (10% likelihood of death vs. 90% likelihood of survival). Cognitive bias proponents frame this as an error, but Gigerenzer argues that when people make this “error,” they are making use of meaningful context that a “bias-free” robot would miss.