Improving Human Rationality Through Cognitive Change (intro)

This is the introduction to a paper I started writing long ago, but have since given up on. The paper was going to be an overview of methods for improving human rationality through cognitive change. Since it contains lots of handy references on rationality, I figured I’d publish it, in case it’s helpful to others.

1. Introduction

During the last half-century, cognitive scientists have catalogued dozens of common errors in human judgment and decision-making (Griffin et al. 2012; Gilovich et al. 2002). Stanovich (1999) provides a sobering introduction:

For example, people assess probabilities incorrectly, they display confirmation bias, they test hypotheses inefficiently, they violate the axioms of utility theory, they do not properly calibrate degrees of belief, they overproject their own opinions onto others, they allow prior knowledge to become implicated in deductive reasoning, they systematically underweight information about nonoccurrence when evaluating covariation, and they display numerous other information-processing biases...

The good news is that researchers have also begun to understand the cognitive mechanisms which produce these errors (Kahneman 2011; Stanovich 2010), they have found several “debiasing” techniques that groups or individuals may use to partially avoid or correct these errors (Larrick 2004), and they have discovered that environmental factors can be used to help people to exhibit fewer errors (Thaler and Sunstein 2009; Trout 2009).

This “heuristics and biases” research program teaches us many lessons that, if put into practice, could improve human welfare. Debiasing techniques that improve human rationality may be able to decrease rates of violence caused by ideological extremism (Lilienfeld et al. 2009). Knowledge of human bias can help executives make more profitable decisions (Kahneman et al. 2011). Scientists with improved judgment and decision-making skills (“rationality skills”) may be more apt to avoid experimenter bias (Sackett 1979). Understanding the nature of human reasoning can also improve the practice of philosophy (Knobe et al. 2012; Talbot 2009; Bishop and Trout 2004; Muehlhauser 2012), which has too often made false assumptions about how the mind reasons (Weinberg et al. 2001; Lakoff and Johnson 1999; DePaul and Ramsey 1999). Finally, improved rationality could help decision makers to choose better policies, especially in domains likely by their very nature to trigger biased thinking, such as investing (Burnham 2008), military command (Lang 2011; Williams 2010; Janser 2007), intelligence analysis (Heuer 1999), or the study of global catastrophic risks (Yudkowsky 2008a).

But is it possible to improve human rationality? The answer, it seems, is “Yes.” Lovallo and Sibony (2010) showed that when organizations worked to reduce the effect of bias on their investment decisions, they achieved returns up to 7% higher. Multiple studies suggest that a simple instruction to “think about alternative hypotheses” can counteract overconfidence, confirmation bias, and anchoring effects, leading to more accurate judgments (Mussweiler et al. 2000; Koehler 1994; Koriat et al. 1980). Merely warning people about biases can decrease their prevalence, at least with regard to framing effects (Cheng and Wu 2010), hindsight bias (Hasher et al. 1981; Reimers and Butler 1992), and the outcome effect (Clarkson et al. 2002). Several other methods have been shown to ameliorate the effects of common human biases (Larrick 2004). Judgment and decision-making appear to be skills that can be learned and improved with practice (Dhami et al. 2012).

In this article, I first explain what I mean by “rationality” as a normative concept. I then review the state of our knowledge concerning the causes of human errors in judgment and decision-making (JDM). The largest section of the article summarizes what we currently know about how to improve human rationality through cognitive change (e.g. “rationality training”). I conclude by assessing the prospects for improving human rationality through cognitive change, and by recommending particular avenues for future research.

2. Normative Rationality

In cognitive science, rationality is a normative concept (Stanovich 2011). As Stanovich (2012) explains, “When a cognitive scientist terms a behavior irrational he/she means that the behavior departs from the optimum prescribed by a particular normative model.”

This normative model of rationality consists in logic, probability theory, and rational choice theory. In their opening chapter for The Oxford Handbook of Thinking and Reasoning, Chater and Oaksford (2012) explain:

Is it meaningful to attempt to develop a general theory of rationality at all? We might tentatively suggest that it is a prima facie sign of irrationality to believe in alien abduction, or to will a sports team to win in order to increase their chance of victory. But these views or actions might be entirely rational, given suitably nonstandard background beliefs about other alien activity and the general efficacy of psychic powers. Irrationality may, though, be ascribed if there is a clash between a particular belief or behavior and such background assumptions. Thus, a thorough-going physicalist may, perhaps, be accused of irrationality if she simultaneously believes in psychic powers. A theory of rationality cannot, therefore, be viewed as clarifying either what people should believe or how people should act—but it can determine whether beliefs and behaviors are compatible. Similarly, a theory of rational choice cannot determine whether it is rational to smoke or to exercise daily; but it might clarify whether a particular choice is compatible with other beliefs and choices.

From this viewpoint, normative theories can be viewed as clarifying conditions of consistency… Logic can be viewed as studying the notion of consistency over beliefs. Probability… studies consistency over degrees of belief. Rational choice theory studies the consistency of beliefs and values with choices.

There are many good tutorials on logic (Schechter 2005), probability theory (Koller and Friedman 2009), and rational choice theory (Allingham 2002; Parmigiani and Inoue 2009), so I will make only two quick points here. First, by “probability” I mean the subjective or Bayesian interpretation of probability, because that is the interpretation which captures degrees of belief (Oaksford and Chater 2007; Jaynes 2003; Cox 1946). Second, in rational choice theory I am of course endorsing the normative principle of expected utility maximization (Grant and Van Zandt 2009).
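To make these two commitments concrete, here is a minimal sketch in Python (all numbers invented purely for illustration) of a Bayesian belief update followed by an expected-utility choice:

```python
# Two normative commitments in miniature: (1) Bayesian updating of a degree
# of belief, and (2) choosing the action with the highest expected utility.
# All numbers are invented for illustration.

# (1) Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
prior = 0.01            # P(H): prior degree of belief in hypothesis H
likelihood = 0.95       # P(E|H): probability of the evidence if H is true
false_positive = 0.05   # P(E|~H): probability of the evidence if H is false

p_evidence = likelihood * prior + false_positive * (1 - prior)  # P(E)
posterior = likelihood * prior / p_evidence                     # P(H|E)
print(f"posterior degree of belief: {posterior:.3f}")

# (2) Expected utility: EU(a) = sum of P(outcome|a) * U(outcome).
# Each action maps to a list of (probability, utility) pairs.
actions = {
    "act":      [(posterior, 100.0), (1 - posterior, -10.0)],
    "dont_act": [(1.0, 0.0)],
}
expected_utility = {a: sum(p * u for p, u in o) for a, o in actions.items()}
best = max(expected_utility, key=expected_utility.get)
print(f"expected utilities: {expected_utility}; choose: {best}")
```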

According to this concept of rationality, then, an agent is rational if its beliefs are consistent with the laws of logic and probability theory and its decisions are consistent with the laws of rational choice theory. An agent is irrational to the degree that its beliefs violate the laws of logic or probability theory, or its decisions violate the laws of rational choice theory.1
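For a concrete instance of what a violation looks like, the laws of probability require that P(A and B) ≤ P(A) for any propositions A and B, so a judge who assigns a conjunction more probability than one of its conjuncts is irrational in exactly this sense. A minimal sketch (in Python, with invented belief values):

```python
# The laws of probability require P(A and B) <= P(A) for any A and B,
# so degrees of belief that violate this are inconsistent, and hence
# irrational in the sense defined above. Belief values are invented.

def violates_conjunction_rule(p_a: float, p_a_and_b: float) -> bool:
    """Return True if these degrees of belief are probabilistically inconsistent."""
    return p_a_and_b > p_a

# E.g., judging "A and B" more probable than "A" alone:
print(violates_conjunction_rule(p_a=0.10, p_a_and_b=0.25))  # True
```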

Researchers working in the heuristics and biases tradition have shown that humans regularly violate the norms of rationality (Manktelow 2012; Pohl 2005). These researchers tend to assume that human reasoning can be improved; thus they have been called “Meliorists” (Stanovich 1999, 2004), and their program of using psychological findings to make recommendations for improving human reasoning has been called “ameliorative psychology” (Bishop and Trout 2004).

Another group of researchers, termed the “Panglossians,”2 argues that human performance is generally “rational” because it manifests an evolutionary adaptation for optimal information processing (Gigerenzer et al. 1999).

I disagree with the Panglossian view for reasons detailed elsewhere (Griffiths et al. 2012:27; Stanovich 2010, ch. 1; Stanovich and West 2003; Stein 1996), though I also believe the original dispute between Meliorists and Panglossians has been exaggerated (Samuels et al. 2002). In any case, a verbal dispute over what counts as “normative” for human JDM need not detain us here.3 I have stipulated my definition of normative rationality (for the purposes of cognitive psychology) above. My concern is with the question of whether cognitive change can improve human JDM in ways that enable humans to achieve their goals more effectively than without cognitive change, and it seems (as I demonstrate below) that the answer is “yes.”

My view of normative rationality does not imply, however, that humans ought to explicitly use the laws of rational choice theory to make every decision. Neither humans nor machines have the knowledge and resources to do so (Van Rooij 2008; Wang 2011). Thus, in order to approximate normative rationality as best we can, we often (rationally) engage in a “bounded rationality” (Simon 1957) or “ecological rationality” (Gigerenzer and Todd 2012) or “grounded rationality” (Elqayam 2011) that employs simple heuristics to imperfectly achieve our goals with the limited knowledge and resources at our disposal (Vul 2010; Vul et al. 2009; Kahneman and Frederick 2005). The best prescription for human reasoning, then, is not necessarily to always use the normative model to govern one’s thinking (Grant and Van Zandt 2009; Stanovich 1999; Baron 1985). Baron (2008, ch. 2) explains:

In short, normative models tell us how to evaluate judgments and decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model. If prescriptive recommendations derived in this way are successful, the study of thinking can help people to become better thinkers.
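As one concrete illustration of such a prescriptive compromise, consider the sampling idea cited above (Vul et al. 2009): instead of computing a full probability-weighted sum over outcomes, a bounded agent can estimate each action’s expected utility from a handful of random samples. The sketch below is my own construction, not taken from any of the cited papers, with invented actions and payoffs:

```python
import random

# A bounded agent estimates each action's expected utility from a few
# random samples instead of computing the full probability-weighted sum,
# in the spirit of the sampling hypothesis of Vul et al. (2009).

def sample_outcome(outcomes):
    """Draw one utility from a list of (probability, utility) pairs."""
    r, cum = random.random(), 0.0
    for p, u in outcomes:
        cum += p
        if r <= cum:
            return u
    return outcomes[-1][1]  # guard against floating-point rounding

def sampled_eu(outcomes, n_samples=3):
    """Bounded estimate: average utility over a handful of samples."""
    return sum(sample_outcome(outcomes) for _ in range(n_samples)) / n_samples

def exact_eu(outcomes):
    """Normative benchmark: the full probability-weighted sum."""
    return sum(p * u for p, u in outcomes)

actions = {
    "risky": [(0.5, 100.0), (0.5, -40.0)],
    "safe":  [(1.0, 20.0)],
}
for name, outcomes in actions.items():
    print(f"{name}: exact EU = {exact_eu(outcomes):.1f}, "
          f"3-sample estimate = {sampled_eu(outcomes):.1f}")
```

With only a few samples the estimate is noisy, yet it usually ranks the actions correctly, which is the sense in which cheap heuristic procedures can imperfectly approximate the normative model.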

[next, I was going to discuss the probable causes of JDM errors, tested methods for amelioration, and promising avenues for further research]

Notes

1 For a survey of other conceptions of rationality, see Nickerson (2007). Note also that the concept of rationality used here is personal, not subpersonal (Frankish 2009; Davies 2000; Stanovich 2010:5).

2 The adjective “Panglossian” was originally applied by Stephen Jay Gould and Richard Lewontin (1979), who used it to describe knee-jerk appeals to natural selection as the force that explains every trait. The term comes from Voltaire’s character Dr. Pangloss, who said that “our noses were made to carry spectacles” (Voltaire 1759).

3 To resolve such verbal disputes we can employ the “method of elimination” (Chalmers 2011) or, as Yudkowsky (2008) put it, we can “replace the symbol with the substance.”