In praise of gullibility?

I was recently re-reading a piece by Yvain/Scott Alexander called Epistemic Learned Helplessness. It's a very insightful post, as is typical for Scott, and I recommend giving it a read if you haven't already. In it he writes:

When I was young I used to read pseudohistory books; Immanuel Velikovsky's Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.

And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn't believe I had ever been so dumb as to believe Velikovsky.

And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.

And so on for several more iterations, until the labyrinth of doubt seemed inescapable.

He goes on to conclude that the skill of taking ideas seriously—often considered one of the most important traits a rationalist can have—is a dangerous one. After all, it's very easy for arguments to sound convincing even when they're not, and if you're too easily swayed by argument you can end up with some very absurd beliefs (like that Venus is a comet, say).

This post really resonated with me. I've had several experiences similar to what Scott describes, of being trapped between two debaters who both had a convincingness that exceeded my ability to discern truth. And my reaction in those situations was similar to his: eventually, after going through the endless chain of rebuttals and counter-rebuttals, changing my mind at each turn, I was forced to throw up my hands and admit that I probably wasn't going to be able to determine the truth of the matter—at least, not without spending a lot more time investigating the different claims than I was willing to. And so in many cases I ended up adopting a sort of semi-principled stance of agnosticism: unless it was a really really important question (in which case I was sort of obligated to do the hard work of investigating the matter to actually figure out the truth), I would just say "I don't know" when asked for my opinion.

[Non-exhaustive list of areas in which I am currently epistemically helpless: geopolitics (in particular the Israel/Palestine situation), anthropics, nutrition science, population ethics]

All of which is to say: I think Scott is basically right here; in many cases we shouldn't have too strong of an opinion on complicated matters. But when I re-read the piece recently I was struck by the fact that his whole argument could be summed up much more succinctly (albeit much more pithily) as:

“Don’t be gullible.”

Huh. Sounds a lot more obvious that way.

Now, don't get me wrong: this is still good advice. I think people should endeavour to not be gullible if at all possible. But it makes you wonder: why did Scott feel the need to write a post denouncing gullibility? After all, most people kind of already think being gullible is bad—who exactly is he arguing against here?

Well, recall that he wrote the post in response to the notion that people should believe arguments and take ideas seriously. These sound like good, LW-approved ideas, but note that unless you're already exceptionally smart or exceptionally well-informed, believing arguments and taking ideas seriously is tantamount to...well, to being gullible. In fact, you could probably think of gullibility as a kind of extreme and pathological form of lightness: a willingness to be swept away by the winds of evidence, no matter how strong (or weak) they may be.

There seems to be some tension here. On the one hand we have an intuitive belief that gullibility is bad; that the proper response to any new claim should be skepticism. But on the other hand we also have some epistemic norms here at LW that are—well, maybe they don't endorse being gullible, but they don't exactly not endorse it either. I'd say the LW memeplex is at least mildly friendly towards the notion that one should believe conclusions that come from convincing-sounding arguments, even if they seem absurd. A core tenet of LW is that we change our mind too little, not too much, and we're certainly all in favour of lightness as a virtue.

Anyway, I thought about this tension for a while and came to the conclusion that I had probably just lost sight of my purpose. The goal of (epistemic) rationality isn't to not be gullible or not be skeptical—the goal is to form correct beliefs, full stop. Terms like gullibility and skepticism are useful to the extent that people tend to be systematically overly accepting or dismissive of new arguments—individual beliefs themselves are simply either right or wrong. So, for example, if we do studies and find out that people tend to accept new ideas too easily on average, then we can write posts explaining why we should all be less gullible, and give tips on how to accomplish this. And if on the other hand it turns out that people actually accept far too few new ideas on average, then we can start talking about how we're all much too skeptical and how we can combat that. But in the end, in terms of becoming less wrong, there's no sense in which gullibility would be intrinsically better or worse than skepticism—they're both just words we use to describe deviations from the ideal, which is accepting only true ideas and rejecting only false ones.

This answer basically wrapped the matter up to my satisfaction, and resolved the sense of tension I was feeling. But afterwards I was left with an additional interesting thought: might gullibility be, if not a desirable end point, then an easier starting point on the path to rationality?

That is: no one should aspire to be gullible, obviously. That would be aspiring towards imperfection. But if you were setting out on a journey to become more rational, and you were forced to choose between starting off too gullible or too skeptical, could gullibility be an easier initial condition?

I think it might be. It strikes me that if you start off too gullible you begin with an important skill: you already know how to change your mind. In fact, changing your mind is in some ways your default setting if you're gullible. And considering that like half the freakin' Sequences were devoted to learning how to actually change your mind, starting off with some practice in that department could be a very good thing.

I consider myself to be...well, maybe not more gullible than average in absolute terms—I don't get sucked into pyramid scams or send money to Nigerian princes or anything like that. But I'm probably more gullible than average for my intelligence level. There's an old discussion post I wrote a few years back that serves as a perfect demonstration of this (I won't link to it out of embarrassment, but I'm sure you could find it if you looked). And again, this isn't a good thing—to the extent that I'm overly gullible, I aspire to become less gullible (Tsuyoku Naritai!). I'm not trying to excuse any of my past behaviour. But when I look back on my still-ongoing journey towards rationality, I can see that my ability to abandon old ideas at the (relative) drop of a hat has been tremendously useful so far, and I do attribute that ability in part to years of practice at...well, at believing things that people told me, and sometimes gullibly believing things that people told me. Call it epistemic deferentiality, or something—the tacit belief that other people know better than you (especially if they're speaking confidently) and that you should listen to them. It's certainly not a character trait you're going to want to keep as a rationalist, and I'm still trying to do what I can to get rid of it—but as a starting point? You could do worse, I think.

Now, I don't pretend that the above is anything more than a plausibility argument, and maybe not a strong one at that. For one, I'm not sure how well this idea carves reality at its joints—after all, gullibility isn't quite the same thing as lightness, even if they're closely related. For another, if the above were true, you would probably expect LWers to be more gullible than average. But that doesn't seem quite right—while LW is admirably willing to engage with new ideas, no matter how absurd they might seem, the default attitude towards a new idea on this site is still one of intense skepticism. Post something half-baked on LW and you will be torn to shreds. Which is great, of course, and I wouldn't have it any other way—but it doesn't really sound like the behaviour of a website full of gullible people.

(Of course, on the other hand it could be that LWers really are more gullible than average, but they're just smart enough to compensate for it.)

Anyway, I'm not sure what to make of this idea, but it seemed interesting and worth a discussion post at least. I'm curious to hear what people think: does any of the above ring true to you? How helpful do you think gullibility is, if it is at all? Can you be "light" without being gullible? And for the sake of collecting information: do you consider yourself to be more or less gullible than average for someone of your intelligence level?