Philosophy Needs to Trust Your Rationality Even Though It Shouldn’t

Part of the sequence: Rationality and Philosophy

Philosophy is notable for the extent to which disagreements with respect to even those most basic questions persist among its most able practitioners, despite the fact that the arguments thought relevant to the disputed questions are typically well-known to all parties to the dispute.

Thomas Kelly

The goal of philosophy is to uncover certain truths… [But] philosophy continually leads experts with the highest degree of epistemic virtue, doing the very best they can, to accept a wide array of incompatible doctrines. Therefore, philosophy is an unreliable instrument for finding truth. A person who enters the field is highly unlikely to arrive at true answers to philosophical questions.

Jason Brennan

After millennia of debate, philosophers remain heavily divided on many core issues. According to the largest-ever survey of philosophers, they’re split 25-24-18 on deontology / consequentialism / virtue ethics, 35-27 on empiricism vs. rationalism, and 57-27 on physicalism vs. non-physicalism.

Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it’s possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain’s reward system, for example many Parkinson’s patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1

Why are physicists, biologists, and psychologists more prone to reach consensus than philosophers?2 One standard story is that “the method of science is to amass such an enormous mountain of evidence that… scientists cannot ignore it.” Hence, religionists might still argue that Earth is flat or that evolutionary theory and the Big Bang theory are “lies from the pit of hell,” and philosophers might still be divided about whether somebody can make a moral judgment they aren’t themselves motivated by, but scientists have reached consensus about such things.

In its dependence on masses of evidence and definitive experiments, science doesn’t trust your rationality:

Science is built around the assumption that you’re too stupid and self-deceiving to just use [probability theory]. After all, if it was that simple, we wouldn’t need a social process of science… [Standard scientific method] doesn’t trust your rationality, and it doesn’t rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.

Sometimes, you can answer philosophical questions with mountains of evidence, as with the example of moral motivation given above. But for many philosophical problems, overwhelming evidence simply isn’t available. Or maybe you can’t afford to wait a decade for definitive experiments to be done. Thus, “if you would rather not waste ten years trying to prove the wrong theory,” or if you’d like to get the right answer without overwhelming evidence, “you’ll need to [tackle] the vastly more difficult problem: listening to evidence that doesn’t shout in your ear.”

This is why philosophers need rationality training even more desperately than scientists do. Philosophy asks you to get the right answer without evidence that shouts in your ear. The less evidence you have, or the harder it is to interpret, the more rationality you need to get the right answer. (As likelihood ratios get smaller, your priors need to be better and your updates more accurate.)
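The parenthetical point can be made concrete with the odds form of Bayes’ theorem: posterior odds = prior odds × likelihood ratio. A minimal sketch (the numbers are illustrative, not from the post):

```python
def posterior_prob(prior_prob, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Strong evidence (LR = 100): even a sloppy prior lands near the truth.
print(posterior_prob(0.50, 100))  # ~0.990
print(posterior_prob(0.10, 100))  # ~0.917

# Weak evidence (LR = 1.5): the posterior barely moves,
# so errors in the prior dominate the final answer.
print(posterior_prob(0.50, 1.5))  # 0.6
print(posterior_prob(0.10, 1.5))  # ~0.143
```

With a likelihood ratio of 100, priors of 0.5 and 0.1 both end up above 90%; with a ratio of 1.5, the same two priors end up 46 percentage points apart, which is the sense in which weak evidence makes your priors matter more.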

Because it tackles so many questions that can’t be answered by masses of evidence or definitive experiments, philosophy needs to trust your rationality even though it shouldn’t: we generally are as “stupid and self-deceiving” as science assumes we are. We’re “predictably irrational” and all that.

But hey! Maybe philosophers are prepared for this. Since philosophy is so much more demanding of one’s rationality, perhaps the field has built top-notch rationality training into the standard philosophy curriculum?

Alas, it doesn’t seem so. I don’t see much Kahneman & Tversky in philosophy syllabi — just lightweight “critical thinking” classes and lists of informal fallacies. But even classes in human bias might not improve things much, due to the sophistication effect: someone with a sophisticated knowledge of fallacies and biases might just have more ammunition with which to attack views they don’t like. So what’s really needed is regular training in habits of genuine curiosity, mitigation of motivated cognition, and so on.

(Imagine a world in which Frank Jackson’s famous reversal on the knowledge argument wasn’t news — because established philosophers changed their minds all the time. Imagine a world in which philosophers were fine-tuned enough to reach consensus on 10 bits of evidence rather than 1,000.)
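“Bits of evidence” can be read as log-odds: each bit is a 2:1 likelihood ratio, so n bits multiply your odds by 2^n. A quick illustration (my arithmetic, not the post’s):

```python
def posterior_from_bits(prior_prob, bits):
    """Each bit of evidence doubles the odds in favor of the hypothesis."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * 2 ** bits
    return post_odds / (1 + post_odds)

# From a 50/50 prior, 10 bits (odds of 1024:1) already gives
# roughly 99.9% confidence; 1,000 bits is astronomically more
# evidence than any practical question should require.
print(posterior_from_bits(0.5, 10))  # ~0.999
```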

We might also ask: How well do philosophers perform on standard tests of rationality, for example Frederick’s (2005) Cognitive Reflection Test (CRT)? Livengood et al. (2010) found, via an internet survey, that subjects with graduate-level philosophy training had a mean CRT score of 1.32. (The best possible score is 3.)

A score of 1.32 isn’t radically different from the mean CRT scores found for psychology undergraduates (1.5), financial planners (1.76), Florida Circuit Court judges (1.23), Princeton undergraduates (1.63), and people who happened to be sitting along the Charles River during a July 4th fireworks display (1.53). It is, however, noticeably lower than the mean CRT scores found for MIT students (2.18) and for attendees at a meetup group (2.69).

Moreover, several studies show that philosophers are just as prone to particular biases as laypeople (Schulz et al. 2011; Tobia et al. 2012), for example order effects in moral judgment (Schwitzgebel & Cushman 2012).

People are typically excited about the Center for Applied Rationality because it teaches thinking skills that can improve one’s happiness and effectiveness. That excites me, too. But I hope that in the long run CFAR will also help produce better philosophers, because it looks to me like we need top-notch philosophical work to secure a desirable future for humanity.3

Next post: Train Philosophers with Pearl and Kahneman, not Plato and Kant

Previous post: Intuitions Aren’t Shared That Way


1 Clearly, many philosophers have advanced versions of motivational internalism that are directly contradicted by these results from psychology. However, we don’t know exactly which version of motivational internalism is defended by each survey participant who said they “accept” or “lean toward” motivational internalism. Perhaps many of them defend weakened versions of motivational internalism, such as those discussed in section 3.1 of May (forthcoming).

2 Mathematicians reach even stronger consensus than physicists, but they don’t appeal to what is usually thought of as “mountains of evidence.” What’s going on there? Mathematicians and philosophers almost always agree about whether a proof or an argument is valid, given a particular formal system. The difference is that a mathematician’s premises consist of axioms and theorems already strongly proven, whereas a philosopher’s premises consist of substantive claims about the world for which the evidence given is often very weak (e.g. that philosopher’s intuitions).

3 Bostrom (2000); Yudkowsky (2008); Muehlhauser (2011).