Shifting Load to Explicit Reasoning

Related to: Which Parts Are “Me”?, Making your explicit reasoning trustworthy, The 5-Second Level.

What’s damaging about moralizing that we wish to avoid, what useful purpose does moralizing usually serve, and what allows us to avoid the damage while retaining the usefulness? Moralizing engages psychological adaptations that promote conflict (by playing on social status), which are unpleasant to experience and can lead to undesirable consequences in the long run (such as feeling systematically uncomfortable interacting with a person, and so not being able to live or work or be friends with them). It serves the purpose of imprinting your values, which you feel to be right, on the people you interact with. Consequentialist elucidation of reasons for approving or disapproving of a given policy (virtue) is an effective persuasion technique if your values are actually right (for the people you try to confer them on), and it doesn’t engage the same parts of your brain that make moralizing undesirable.

What happens here is a transfer of responsibility for important tasks from the imperfect machinery that historically used to manage them (with systematic problems in any given context that humans, but not evolution, can notice) to explicit reasoning.

Taking advantage of this requires including those tasks in the scope of things that can be reasoned about (instead of ignoring them as not falling into your area of expertise; for example, flinching from reasoning about normative questions or intuition as “not scientific” or “not objective”), and developing enough understanding to actually do better than the original heuristics (in some cases by not ignoring what they say), making your explicit reasoning worth trusting.

This calls for identifying other examples of problematic modes of reasoning that engage crude psychological adaptations, and developing techniques for doing better (and making sure they are actually better before trusting them). These examples come to mind:

- Rational argument: don’t use as arguments things that you expect the other person to disagree with; seek a path where every step will be accepted.
- Allocation of responsibility: don’t leave it to unvoiced tendencies to do things; discuss effort and motivation explicitly.
- Development of emotional associations with a given situation/person/thought: take it into your own hands, and explicitly train your emotion to be what you prefer it to be, to the extent possible.
- Learning of facts: don’t rely on the stupid memory mechanisms that don’t understand commands like “this is really important, remember it”; use spaced repetition systems.

And the list goes on. What other cognitive tools can significantly benefit from transferring them to explicit reasoning? Should there be a list of problems and solutions? Which unsolved problems on such a list are particularly worth working on? Which problems with known solutions should be fixed (in any given person) as soon as possible? How do we better facilitate training?