Three ways CFAR has changed my view of rationality

The Center for Applied Rationality’s perspective on rationality is quite similar to Less Wrong’s. In particular, we share many of Less Wrong’s differences from what’s sometimes called “traditional” rationality, such as Less Wrong’s inclusion of Bayesian probability theory and the science of heuristics and biases.

But after spending the last year and a half with CFAR as we’ve developed, tested, and attempted to teach hundreds of different versions of rationality techniques, I’ve noticed that my picture of what rationality looks like has shifted somewhat from what I perceive to be the most common picture of rationality on Less Wrong. Here are three ways I think CFAR has come to see the landscape of rationality differently than Less Wrong typically does – not disagreements per se, but differences in focus or approach. (Disclaimer: I’m not speaking for the rest of CFAR here; these are my own impressions.)

1. We think less in terms of epistemic versus instrumental rationality.

Formally, the methods of normative epistemic versus instrumental rationality are distinct: Bayesian inference and expected utility maximization. But methods like “use Bayes’ Theorem” or “maximize expected utility” are usually too abstract and high-level to be helpful for a human being trying to take manageable steps towards improving her rationality. And when you zoom in from that high-level description of rationality down to the more concrete level of “What five-second mental habits should I be training?” the distinction between epistemic and instrumental rationality becomes less helpful.
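(For the curious reader, and stated only roughly: the two formal standards being gestured at here are the standard textbook ones, not anything CFAR-specific. Bayes’ Theorem says to update beliefs on evidence via $P(H \mid E) = P(E \mid H)\,P(H)/P(E)$, and expected utility maximization says to choose the action $a$ that maximizes $\sum_i P(o_i \mid a)\,U(o_i)$ over possible outcomes $o_i$. Neither formula, by itself, tells you which five-second mental habits to train.)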

Here’s an analogy: epistemic rationality is like physics, where the goal is to figure out what’s true about the world, and instrumental rationality is like engineering, where the goal is to accomplish something you want as efficiently and effectively as possible. You need physics to do engineering; or I suppose you could say that doing engineering is doing physics, but with a practical goal. However, there’s plenty of physics that’s done for its own sake, and doesn’t have obvious practical applications, at least not yet. (String theory, for example.) Similarly, you need a fair amount of epistemic rationality in order to be instrumentally rational, though there are parts of epistemic rationality that many of us practice for their own sake, and not as a means to an end. (For example, I appreciate clarifying my thinking about free will even though I don’t expect it to change any of my behavior.)

In this analogy, many skills we focus on at CFAR are akin to essential math, like linear algebra or differential equations, which compose the fabric of both physics and engineering. It would be foolish to expect someone who wasn’t comfortable with math to successfully calculate a planet’s trajectory or design a bridge. And it would be similarly foolish to expect you to successfully update like a Bayesian or maximize your utility if you lacked certain underlying skills. Like, for instance: noticing your emotional reactions, and being able to shift them if it would be useful. Doing thought experiments. Noticing and overcoming learned helplessness. Visualizing in concrete detail. Preventing yourself from flinching away from a thought. Rewarding yourself for mental habits you want to reinforce.

These and other building blocks of rationality are essential both for reaching truer beliefs, and for getting what you value; they don’t fall cleanly into either an “epistemic” or an “instrumental” category. Which is why, when I consider what pieces of rationality CFAR should be developing, I’ve been thinking less in terms of “How can we be more epistemically rational?” or “How can we be more instrumentally rational?” and instead using queries like, “How can we be more metacognitive?”

2. We think more in terms of a modular mind.

The human mind isn’t one coordinated, unified agent, but rather a collection of different processes that often aren’t working in sync, or even aware of what the others are up to. Less Wrong certainly knows this; see, for example, discussions of anticipations versus professions, aliefs, and metawanting. But in general we gloss over that fact, because it’s so much simpler and more natural to talk about “what I believe” or “what I want,” even if technically there is no single “I” doing the believing or wanting. And for many purposes that kind of approximation is fine.

But a rationality-for-humans usually can’t rely on that shorthand. Any attempt to change what “I” believe, or optimize for what “I” want, forces a confrontation with the fact that there are multiple, contradictory things that could reasonably be called “beliefs,” or “wants,” coexisting in the same mind. So a large part of applied rationality turns out to be about noticing those contradictions and trying to achieve coherence, in some fashion, before you can even begin to update on evidence or plan an action.

Many of the techniques we’re developing at CFAR fall roughly into the template of coordinating between your two systems of cognition: implicit-reasoning System 1 and explicit-reasoning System 2. For example, knowing when each system is more likely to be reliable. Or knowing how to get System 2 to convince System 1 of something (“We’re not going to die if we go talk to that stranger”). Or knowing what kinds of questions System 2 should ask of System 1 to find out why it’s uneasy about the conclusion at which System 2 has arrived.

This is all, of course, with the disclaimer that the anthropomorphizing of the systems of cognition, and imagining them talking to each other, is merely a useful metaphor. Even the classification of human cognition into Systems 1 and 2 is probably not strictly true, but it’s true enough to be useful. And other metaphors prove useful as well – for example, some difficulties with what feels like akrasia become more tractable when you model your future selves as different entities, as we do in the current version of our “Delegating to yourself” class.

3. We’re more focused on emotions.

There’s relatively little discussion of emotions on Less Wrong, but they occupy a central place in CFAR’s curriculum and organizational culture.

It used to frustrate me when people would say something that revealed they held a Straw Vulcan-esque belief that “rationalist = emotionless robot”. But now when I encounter that misconception, it just makes me want to smile, because I’m thinking to myself: “If you had any idea how much time we spend at CFAR talking about our feelings…”

Being able to put yourself into particular emotional states seems to make a lot of pieces of rationality easier. For example, for most of us, it’s instrumentally rational to explore a wider set of possible actions – different ways of studying, holding conversations, trying to be happy, and so on – beyond whatever our defaults happen to be. And for most of us, inertia and aversions get in the way of that exploration. But getting yourself into “playful” mode (one of the hypothesized primary emotional circuits common across mammals) can make it easier to branch out into a wider swath of Possible-Action Space. Similarly, being able to call up a feeling of curiosity or of “seeking” (another candidate for a primary emotional circuit) can help you conquer motivated cognition and learned blankness.

And simply being able to notice your emotional state is rarer and more valuable than most people realize. For example, if you’re in fight-or-flight mode, you’re going to feel more compelled to reject arguments that feel like a challenge to your identity. Being attuned to the signs of sympathetic nervous system activation – that you’re tensing up, or that your heart rate is increasing – means you get cues to double-check your reasoning, or to coax yourself into another emotional state.

We also use emotions as sources of data. You can learn to tap into feelings of surprise or confusion to get a sense of how probable you implicitly expect some event to be. Or practice simulating hypotheticals (“What if I knew that my novel would never sell well?”) and observing your resultant emotions, to get a clearer picture of your utility function.

And emotions-as-data can be a valuable check on your System 2’s conclusions. One of our standard classes is “Goal Factoring,” which entails finding some alternate set of actions through which you can purchase the goods you want more cheaply. So you might reason, “I’m doing martial arts for the exercise and self-defense benefits… but I could purchase both of those things for less time investment by jogging to work and carrying Mace.” If you listened to your emotional reaction to that proposal, however, you might notice you still feel sad about giving up martial arts even if you were getting the same amount of exercise and self-defense benefits some other way.

Which probably means you’ve got other reasons for doing martial arts that you haven’t yet explicitly acknowledged – for example, maybe you just think it’s cool. If so, that’s important, and deserves a place in your decision-making. Listening for emotional cues that your explicit reasoning has missed something is a crucial step, and to the extent that aspiring rationalists sometimes forget it, I suppose that’s a Steel-Manned Straw Vulcan (Steel Vulcan?) that actually is worth worrying about.

Conclusion

I’ll name one more trait that unites, rather than divides, CFAR and Less Wrong. We both diverge from “traditional” rationality in that we’re concerned with determining which general methods systematically perform well, rather than defending some set of methods as “rational” on a priori criteria alone. So CFAR’s picture of what rationality looks like, and how to become more rational, will and should change over the coming years as we learn more about the effects of our rationality training efforts.