Self-skepticism: the first principle of rationality

When Richard Feynman started investigating irrationality in the 1970s, he quickly began to realize the problem wasn't limited to the obvious irrationalists.

Uri Geller claimed he could bend keys with his mind. But was he really any different from the academics who insisted their special techniques could teach children to read? Both failed the crucial scientific test of skeptical experiment: Geller's keys failed to bend in Feynman's hands; outside tests showed the new techniques only caused reading scores to go down.

What mattered was not how smart the people were, or whether they wore lab coats or used long words, but whether they followed what he concluded was the crucial principle of truly scientific thought: "a kind of utter honesty—a kind of leaning over backwards" to prove yourself wrong. In a word: self-skepticism.

As Feynman wrote, "The first principle is that you must not fool yourself—and you are the easiest person to fool." Our beliefs always seem correct to us—after all, that's why they're our beliefs—so we have to work extra hard to try to prove them wrong. This means constantly looking for ways to test them against reality and to think of reasons our tests might be insufficient.

When I think of the most rational people I know, it's this quality of theirs that's most pronounced. They are constantly trying to prove themselves wrong—they attack their beliefs with everything they can find, and when they run out of weapons they go out and search for more. The result is that by the time I come around, they not only acknowledge all my criticisms but propose several more I hadn't even thought of.

And when I think of the least rational people I know, what's striking is how they do the exact opposite: instead of viciously attacking their beliefs, they try desperately to defend them. They too have responses to all my critiques, but instead of acknowledging and agreeing, they viciously attack my critique so it never touches their precious belief.

Since these two can be hard to distinguish, it's best to look at some examples. The Cochrane Collaboration argues that support from hospital nurses may be helpful in getting people to quit smoking. How do they know that? you might ask. Well, they found this was the result of a meta-analysis of 31 different studies. But maybe they chose a biased selection of studies? Well, they systematically searched "MEDLINE, EMBASE and PsycINFO [along with] hand searching of specialist journals, conference proceedings, and reference lists of previous trials and overviews." But did the studies they pick suffer from selection bias? Well, they searched for that—along with three other kinds of systematic bias. And so on. But even after all this careful work, they are still only confident enough to conclude "the results…support a modest but positive effect…with caution…these meta-analysis findings need to be interpreted carefully in light of the methodological limitations".

Compare this to the Heritage Foundation's argument for the bipartisan Wyden–Ryan premium support plan. Their report also discusses lots of objections to the proposal, but confidently knocks down each one: "this analysis relies on two highly implausible assumptions…All these predictions were dead wrong.…this perspective completely ignores the history of Medicare." Their conclusion is similarly confident: "The arguments used by opponents of premium support are weak and flawed." Apparently there's just not a single reason to be cautious about their enormous government policy proposal!

Now, of course, the Cochrane authors might be secretly quite confident and the Heritage Foundation might be wringing their hands with self-skepticism behind the scenes. But let's imagine for a moment that these aren't just reports intended to persuade others of a belief and are instead accurate portrayals of how these two different groups approached the question. Now ask: which style of thinking is more likely to lead the authors to the right answer? Which attitude seems more like Richard Feynman? Which seems more like Uri Geller?