When is rationality useful?

In addition to my skepticism about the foundations of epistemic rationality, I’ve long had doubts about the effectiveness of instrumental rationality. In particular, I’m inclined to attribute the successes of highly competent people primarily to traits like intelligence, personality and work ethic, rather than specific habits of thought. But I’ve been unsure how to reconcile that with the fact that rationality techniques have proved useful to many people (including me).

Here’s one very simple (and very leaky) abstraction for doing so. We can model success as a combination of doing useful things and avoiding mistakes. As a particular example, we can model intellectual success as a combination of coming up with good ideas and avoiding bad ones. I claim that rationality helps us avoid mistakes and bad ideas, but doesn’t help much in generating good ideas and useful work.

Here I’m using a fairly intuitive and fuzzy notion of the seeking-good/avoiding-bad dichotomy. Obviously if you spend all your time thinking about bad ideas, you won’t have time to come up with good ones. But I think the mental motion of dismissing bad ideas is quite distinct from that of generating good ones. As another example, if you procrastinate all day, that’s a mistake, and rationality can help you avoid it. If you aim to work productively for 12 hours a day, though, I think there’s very little rationality can do to help you manage that, compared with having a strong work ethic and a passion for the topic. More generally, I count doing unusually badly at something as a mistake, but not failing to do unusually well at it.

This framework tells us when rationality is most and least useful. It’s least useful in domains where making mistakes is a more effective way to learn than reasoning things out in advance, and so there’s less advantage in avoiding them. This might be because mistakes are very cheap (as in learning how to play chess) or because you have to engage with many unpredictable complexities of the real world (as in being an entrepreneur). It’s also less useful in domains where success requires a lot of dedicated work, and so having intrinsic motivation for that work is crucial. Being a musician is one extreme of this; more relevantly, getting deep expertise in a field often also looks like this.

It’s most useful in domains where there’s very little feedback either from other people or from reality, so you can’t tell whether you’re making a mistake except by analysing your own ideas. Philosophy is one of these—my recent post details how astronomy was thrown off track for millennia by a few bad philosophical assumptions. It’s also most useful in domains where there’s high downside risk, such that you want to avoid making any mistakes. You might think that a field like AI safety research is one of the latter, but actually I think that in almost all research, the quality of your few best ideas is the crucial thing, and it doesn’t really matter how many other mistakes you make. This argument is less applicable to AI safety research to the extent that it relies on long chains of reasoning about extreme hypotheticals (i.e. to the extent that it’s philosophy), but I still think the claim is broadly true.

Another lens through which to think about when rationality is most useful is that it’s a (partial) substitute for belonging to a community. In a knowledge-seeking community, being forced to articulate our ideas makes it clearer what their weak spots are, and allows others to criticise them. We are generally much harsher on other people’s ideas than our own, due to biases like anchoring and confirmation bias (for more on this, see The Enigma of Reason). The main benefit I’ve gained from rationality has been the ability to internally replicate that process, by getting into the habit of noticing when I slip into dangerous patterns of thought. However, that usually doesn’t help me generate novel ideas, or expand them into useful work. In a working community (such as a company), there’s external pressure to be productive, and feedback loops to help keep people motivated. Productivity techniques can substitute for those when they’re not available.

Lastly, we should be careful to break down domains into their constituent requirements where possible. For example, the effective altruism movement is about doing the most good. Part of that requires philosophy—and EA is indeed very effective at identifying important cause areas. However, I don’t think this tells us very much about its ability to actually do useful things in those cause areas, or to organise itself and expand its influence. This may seem like an obvious distinction, but in cases like these I think it’s quite easy to let confidence about the philosophical step of deciding what to do spill over into confidence about the practical step of actually doing it.