What Intelligence Tests Miss: The psychology of rational thought

This is the fourth and final part in a mini-sequence presenting Keith E. Stanovich’s excellent book What Intelligence Tests Miss: The psychology of rational thought.

If you want a single book to introduce people to the themes and ideas discussed on Less Wrong, What Intelligence Tests Miss is probably the best currently existing book for doing so. It does take a somewhat different view of the study of bias than we do on LW: while Eliezer concentrated on the idea of the map and the territory and on aspiring to the ideal of a perfect decision-maker, Stanovich’s perspective treats bias as something that prevents people from taking full advantage of their intelligence. Regardless, for someone less easily persuaded by LW’s somewhat abstract ideals, reading Stanovich’s concrete examples first and then proceeding to the Sequences is likely to make the content presented in the Sequences much more interesting. Even some of our terminology, such as “carving reality at the joints” and the instrumental/epistemic rationality distinction, will be more familiar to somebody who has first read What Intelligence Tests Miss.

Below is a chapter-by-chapter summary of the book.

Inside George W. Bush’s Mind: Hints at What IQ Tests Miss is a brief introductory chapter. It starts with the example of President George W. Bush, mentioning that the president’s opponents frequently argued against his intelligence, and that even his supporters implicitly conceded the point by arguing that even though he didn’t have “school smarts” he did have “street smarts”. Both groups were purportedly surprised when it was revealed that the president’s IQ was around 120, roughly the same as that of his 2004 presidential opponent John Kerry. Stanovich then goes on to say that this should not be surprising, for IQ tests do not tap the tendency to actually think in an analytical manner, and IQ has been overvalued as a concept. For instance, university admissions frequently depend on tests such as the SAT, which are pretty much pure IQ tests. The chapter ends with a disclaimer that the book is not an attempt to say that IQ tests measure nothing important, or that there are many kinds of intelligence. IQ does measure something real and important, but that doesn’t change the fact that people overvalue it and are generally confused about what it actually does measure.

Dysrationalia: Separating Rationality and Intelligence talks about the phenomenon informally described as “smart but acting stupid”. Stanovich notes that if we used a broad definition of intelligence, where intelligence only meant acting in an optimal manner, then this expression wouldn’t make any sense. Rather, it’s a sign that people are intuitively aware of IQ and rationality as measuring two separate qualities. Stanovich then brings up the concept of dyslexia, which the DSM-IV defines as “reading achievement that falls substantially below that expected given the individual’s chronological age, measured intelligence, and age-appropriate education”. Similarly, the diagnostic criterion for mathematics disorder (dyscalculia) is “mathematical ability that falls substantially below that expected for the individual’s chronological age, measured intelligence, and age-appropriate education”. He argues that since we have a precedent for creating new disability categories when someone’s ability in an important skill domain is below what would be expected for their intelligence, it would make sense to also have a category for “dysrationalia”:

Dysrationalia is the inability to think and behave rationally despite adequate intelligence. It is a general term that refers to a heterogeneous group of disorders manifested by significant difficulties in belief formation, in the assessment of belief consistency, and/or in the determination of action to achieve one’s goals. Although dysrationalia may occur concomitantly with other handicapping conditions (e.g. sensory impairment), dysrationalia is not the result of those conditions. The key diagnostic criterion for dysrationalia is a level of rationality, as demonstrated in thinking and behavior, that is significantly below the level of the individual’s intellectual capacity (as determined by an individually administered IQ test).

The Reflective Mind, the Algorithmic Mind, and the Autonomous Mind presents a three-level model of the mind, which I mostly covered in A Taxonomy of Bias: The Cognitive Miser. At the end, we return to the example of George W. Bush, and are shown a number of quotes from the president’s supporters describing him. His speechwriter called him “sometimes glib, even dogmatic; often uncurious and as a result ill-informed”; John McCain said Bush never asked for his opinion and that the president “wasn’t intellectually curious”. The same sentiment was echoed by a senior official in Iraq who had observed Bush in various videoconferences and said that the president’s “obvious lack of interest in long, detailed discussions, had a chilling effect”. On the other hand, other people were quoted as saying that Bush was “extraordinarily intelligent, but was not interested in learning unless it had practical value”. Tony Blair repeatedly told his associates that Bush was “very bright”. This is taken as evidence that while Bush is indeed intelligent, he lacks the thinking dispositions that would make him use that intelligence: he has dysrationalia.

Cutting Intelligence Down to Size further criticizes the trend of treating the word “intelligence” in too broad a manner. Stanovich points out that even critics of the IQ concept who introduce terms such as “social intelligence” and “bodily-kinesthetic intelligence” are probably shooting themselves in the foot. By giving everything valuable the label of intelligence, these critics are actually increasing the esteem of IQ tests, and thereby making people think that IQ measures more than it does.

Consider a thought experiment. Imagine that someone objected to the emphasis given to horsepower (engine power) when evaluating automobiles. They feel that horsepower looms too large in people’s thinking. In an attempt to deemphasize horsepower, they then begin to term the other features of the car things like “braking horsepower” and “cornering horsepower” and “comfort horsepower”. Would such a strategy make people less likely to look to engine power as an indicator of the “goodness” of a car? I think not. [...] Just as calling “all good car things” horsepower would emphasize horsepower, I would argue that calling “all good cognitive things” intelligence will contribute to the deification of MAMBIT [Mental Abilities Measured By Intelligence Tests].

Stanovich then continues to argue in favor of separating rationality and intelligence, citing surveys suggesting that folk psychology already distinguishes between the two. He also brings up the chilling effect that deifying intelligence seems to be having on society. Reviews of a book discussing the maltreatment of boys labeled feebleminded seemed to concentrate on the stories of the boys who were later found to have normal IQs, implying that the abusive treatment of boys who actually did have a low IQ was okay. Various parents seem to take a diagnosis of low mental ability as much more shocking than a diagnosis such as ADHD or a learning disability that stresses the presence of normal IQ, even though the life problems associated with some emotional and behavioral disorders are much more severe than those associated with many forms of moderate or mild intellectual disability.

Why Intelligent People Doing Foolish Things Is No Surprise briefly introduces the concept of the cognitive miser, explaining that conserving energy and not thinking about things too much is a perfectly understandable tendency given our evolutionary past.

The Cognitive Miser: Ways to Avoid Thinking discusses the cognitive miser further, starting with the “Jack is looking at Anne but Anne is looking at George” problem, noting that one could arrive at the correct answer via disjunctive reasoning (“either Anne is married, in which case the answer is yes, or Anne is unmarried, in which case the answer is also yes”) but most people won’t bother. It then discusses attribute substitution (instead of directly evaluating X, consider the correlated and easier-to-evaluate quality Y), vividness/salience/accessibility effects, anchoring effects, and the recognition heuristic. Stanovich emphasizes that he is not saying that heuristics are always bad, but rather that one shouldn’t always rely on them.
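The disjunctive reasoning involved can be made explicit with a short sketch. I am assuming the standard formulation of the puzzle here, in which Jack is married, George is not, and the question is whether a married person is looking at an unmarried person:

```python
# "Jack is looking at Anne, but Anne is looking at George."
# Assumed setup: Jack is married, George is not, Anne's status is unknown.
# Question: is a married person looking at an unmarried person?

def married_looking_at_unmarried(anne_married):
    """Check every gaze for a married person looking at an unmarried one."""
    jack, george = True, False  # marital statuses
    gazes = [(jack, anne_married), (anne_married, george)]
    return any(looker and not target for looker, target in gazes)

# Disjunctive reasoning: enumerate every possible state of the world.
for anne in (True, False):
    print(anne, married_looking_at_unmarried(anne))  # True in both cases
```

Whether Anne is married or not, some married person is looking at some unmarried person, so the answer is "yes" in every possible world, which is exactly what the disjunctive argument establishes without knowing Anne's status.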

Framing and the Cognitive Miser extensively discusses various framing effects, and at the end notes that high-IQ people are not usually any more likely to avoid producing inconsistent answers to various framings unless they are specifically instructed to try to be consistent. This is mentioned to be a general phenomenon: if intelligent people have to notice by themselves that an issue of rationality is involved, they do little better than their counterparts of lower intelligence.

Myside Processing: Heads I Win—Tails I Win Too! discusses “myside bias”, people evaluating situations in terms of their own perspective. Americans will express much stronger support for the USA banning an unsafe German car than for Germany banning an unsafe American car. People will much more easily pick up on inconsistencies in the actions of their political opponents than in those of the politicians they support. They will also be generally overconfident, be appalled at others exhibiting the same unsafe behaviors they themselves exhibit, underestimate the degree to which biases influence their own thinking, and assume that people understand their messages better than they actually do. The end of the chapter surveys research on the link between intelligence and the tendency to fall prey to these biases. It notes that intelligent people again do moderately better, but only when specifically instructed to avoid bias.

A Different Pitfall of the Cognitive Miser: Thinking a Lot, but Losing takes up the problem of failing to override your autonomous processing even when it would be necessary. Most of this chapter is covered by my previous discussion of override failures in the Cognitive Miser post.

Mindware Gaps introduces in more detail a different failure mode: that of mindware gaps. It also introduces and explains the concepts of Bayes’ theorem, falsifiability, base rates, and the conjunction error as crucial mindware for avoiding many failures of rationality. It notes that thinking dispositions for actually actively analyzing things could be called “strategic mindware”. The chapter concludes by noting that the useful mindware discussed in the chapter is not widely and systematically taught, leaving even intelligent people with gaps in their mindware that make them subject to failures of rationality.
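As one illustration of why base rates matter as mindware, here is a minimal sketch of Bayes’ theorem applied to a screening test; the numbers are hypothetical, not taken from the book:

```python
# Bayes' theorem on a hypothetical screening test: ignoring the low
# base rate makes a positive result look far more alarming than it is.

def posterior(prior, sensitivity, false_positive_rate):
    # P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Disease base rate 0.1%, test 99% sensitive, 5% false-positive rate:
print(round(posterior(0.001, 0.99, 0.05), 3))  # 0.019 -- still under 2%
```

Someone neglecting the base rate might read “99% sensitive” as “a positive result means I’m 99% likely to be sick”, when the low prior keeps the actual probability below two percent.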

I mostly covered the contents of Contaminated Mindware in my post about mindware problems.

How Many Ways Can Thinking Go Wrong? A Taxonomy of Irrational Thinking Tendencies and Their Relation to Intelligence summarizes the content of the previous chapters and organizes the various biases into a taxonomy whose main categories are the Cognitive Miser, Mindware Problems, and Mr. Spock Syndrome. I did not previously cover Mr. Spock Syndrome because, as Stanovich says, it is not a fully cognitive category. People with the syndrome have a reduced ability to feel emotions, which disrupts their ability to behave appropriately in various situations even though their intelligence remains intact. Stanovich notes that the syndrome is most obvious in people who have suffered severe brain damage, but difficulties of emotional regulation and awareness do seem to correlate negatively with some tests of rationality, as well as with positive life outcomes, even when intelligence is controlled for.

The Social Benefits of Increasing Human Rationality—and Meliorating Irrationality concludes the book by arguing that while increasing the average intelligence of people would have only small effects, if any, on general well-being, we could reap vast social benefits if we actually tried to make people more rational. There is evidence that rationality is much more malleable than intelligence. Disjunctive reasoning, the tendency to consider all possible states of the world when deciding among options, is noted to be a rational thinking skill of high generality that can be taught. There also don’t seem to be strong intelligence-related limitations on the ability to think disjunctively. Much other useful mindware, such as that of scientific and probabilistic reasoning, can likewise be taught. While these might be challenging for people with a lower IQ, techniques such as implementation intentions may be easier to learn.

An implementation intention is formed when the individual marks the cue-action sequence with the conscious, verbal declaration of “when X occurs, I will do Y.” Often with the aid of the context-fixing properties of language, the triggering of this cue-action sequence on just a few occasions is enough to establish it in the autonomous mind. Finally, research has shown that an even more minimalist cognitive strategy of forming mental goals (whether or not they have implementation intentions) can be efficacious. For example, people perform better at a task when they are told to form a mental goal (“set a specific, challenging goal for yourself”) for their performance as opposed to being given the generic motivational instructions (“do your best”).

Stanovich also argues in favor of libertarian paternalism: shaping the environment so that people are still free to choose what they want, but so that the default choice is generally the best one. For instance, countries with an opt-out policy for organ donation have far more donors than countries with an opt-in policy. This is not because the people in one country would be any more or less selfish than those in another, but because people in general tend to go with the default option. He also argues that it would be perfectly possible, though expensive, to develop general rationality tests akin to intelligence tests, and that also using RQ proxies for things such as college admission would have great social benefits.

In studies cited in this book, it has been shown that:
  • Psychologists have found ways of presenting statistical information so that we can make more rational decisions related to medical matters and in any situation where statistics are involved.

  • Cognitive psychologists have shown that a few simple changes in presenting information in accord with default biases could vastly increase the frequency of organ donations, thus saving thousands of lives.

  • Americans annually pay millions of dollars for advice on how to invest their money in the stock market, when following a few simple principles from decision theory would lead to returns on their investments superior to any of this advice. These principles would help people avoid the cognitive biases that lead them to reduce their returns: overreacting to chance events, overconfidence, wishful thinking, hindsight bias, misunderstanding of probability.

  • Decision scientists have found that people are extremely poor at assessing environmental risks. This is mainly because vividness biases dominate people’s judgment to an inordinate extent. People could improve, and this would make a huge difference because these poor assessments come to affect public policy (causing policy makers to implement policy A, which saves one life for each $3.2 million spent, instead of policy B, which would have saved one life for every $220,000 spent, for example).

  • Psychologists from various specialty areas are beginning to pinpoint the cognitive illusions that sustain pathological gambling behavior—pseudodiagnosticity, belief perseverance, overreacting to chance events, cognitive impulsivity, misunderstanding probability—behavior that destroys thousands of lives each year.

  • Cognitive psychologists have studied the overconfidence effect in human judgment—that people miscalibrate their future performance, usually by making overoptimistic predictions. Psychologists have studied ways to help people avoid these problems in self-monitoring, making it easier for people to plan for the future (overconfident people get more unpleasant surprises).

  • Social psychological research has found that controlling the explosion of choices in our lives is one of the keys to happiness—that constraining choice often makes people happier.

  • Simple changes in the way that pension plans are organized and administered could make retirement more comfortable for millions of people.

  • Probabilistic reasoning is perhaps the most studied topic in the decision-making field, and many of the cognitive reforms that have been examined—for example, eliminating base-rate neglect—could improve practices in courtrooms, where poor thinking about probabilities has been shown to impede justice.

These are just a small sampling of the teachable reasoning strategies and environmental fixes that could make a difference in people’s lives, and they are more related to rationality than to intelligence. They are examples of the types of outcomes that would result if we all became more rational thinkers and decision makers. They are the types of outcomes that would be multiplied if schools, businesses, and government focused on the parts of cognition that intelligence tests miss. Instead, we continue to pay far more attention to intelligence than to rational thinking. It is as if intelligence has become totemic in our culture, and we choose to pursue it rather than the reasoning strategies that could transform our world.