Self-Congratulatory Rationalism

Quite a few people complain about the atheist/skeptic/rationalist communities being self-congratulatory. I used to dismiss this as a sign of people’s unwillingness to admit that rejecting religion, or astrology, or whatever, was any more rational than accepting those things. Lately, though, I’ve started to worry.

Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality whom other people should accept as such. I’ve encountered people talking as if it’s ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I’ve encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.

Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don’t actually think all that highly of each other’s rationality. The fact that members of the LessWrong community tend to be smart is no guarantee that they will be rational. And we have much reason to fear “rationality” degenerating into signaling games.

What Disagreement Signifies

Let’s start by talking about disagreement. There’s been a lot of discussion of disagreement on LessWrong, and in particular of Aumann’s agreement theorem, often glossed as something like “two rationalists can’t agree to disagree.” (Or perhaps that we can’t foresee to disagree.) Discussion of disagreement, however, tends to focus on what to do about it. I’d rather take a step back, and look at what disagreement tells us about ourselves: namely, that we don’t think all that highly of each other’s rationality.

This, for me, is the take-away from Tyler Cowen and Robin Hanson’s paper Are Disagreements Honest? In the paper, Cowen and Hanson define honest disagreement as “meaning that the disputants respect each other’s relevant abilities, and consider each person’s stated opinion to be his best estimate of the truth, given his information and effort,” and they argue that disagreements aren’t honest in this sense.

I don’t find this conclusion surprising. In fact, I suspect that while people sometimes do mean it when they talk about respectful disagreement, often they realize this is a polite fiction (which isn’t necessarily a bad thing). Deep down, they know that disagreement is disrespect, at least in the sense of not thinking that highly of the other person’s rationality. That people know this is shown in the fact that they don’t like being told they’re wrong—the reason why Dale Carnegie says you can’t win an argument.

On LessWrong, people are quick to criticize each other’s views, so much so that I’ve heard people cite this as a reason to be reluctant to post/comment (again showing they know intuitively that disagreement is disrespect). Furthermore, when people on LessWrong criticize others’ views, they very often don’t seem to expect to quickly reach agreement. Even people Yvain would classify as “experienced rationalists” sometimes knowingly have persistent disagreements. This suggests that LessWrongers almost never consider each other to be perfect rationalists.

And I actually think this is a sensible stance. For one thing, even if you met a perfect rationalist, it could be hard to figure out that they are one. Furthermore, the problem of knowing what to do about disagreement is made harder when you’re faced with other people having persistent disagreements: if you find yourself agreeing with Alice, you’ll have to think Bob is being irrational, and vice versa. If you rate them equally rational and adopt an intermediate view, you’ll have to think they’re both being a bit irrational for not doing likewise.

The situation is similar to Moore’s paradox in philosophy—the absurdity of asserting “it’s raining, but I don’t believe it’s raining.” Or, as you might say, “Of course I think my opinions are right and other people’s are wrong. Otherwise I’d change my mind.” Similarly, when we think about disagreement, it seems like we’re forced to say, “Of course I think my opinions are rational and other people’s are irrational. Otherwise I’d change my mind.”

We can find some room for humility in an analog of the preface paradox: the fact that the author of a book can say things like “any errors that remain are mine.” We can say this because we might think each individual claim in the book is highly probable, while recognizing that all the little uncertainties add up to its being likely there are still errors. Similarly, we can think each of our beliefs is individually rational, while recognizing we still probably have some irrational beliefs—we just don’t know which ones. And just because respectful disagreement is a polite fiction doesn’t mean we should abandon it.
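To make the preface-paradox arithmetic concrete, here’s a toy calculation. The numbers (100 beliefs, each held at 99% confidence, treated as independent) are mine, purely for illustration:

```python
# Toy preface-paradox arithmetic: every belief is individually very
# probable, yet at least one error is more likely than not.
# Assumed for illustration: 100 independent beliefs at 99% confidence each.

n_beliefs = 100
p_each_correct = 0.99

p_all_correct = p_each_correct ** n_beliefs   # ~0.366
p_at_least_one_error = 1 - p_all_correct      # ~0.634

print(f"P(no errors at all)   = {p_all_correct:.3f}")
print(f"P(at least one error) = {p_at_least_one_error:.3f}")
```

Each belief individually passes any reasonable threshold for “rational to hold,” and yet you should bet that some of them are wrong.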

I don’t have a clear sense of how controversial the above will be. Maybe we all already recognize that we don’t respect each other’s opinions ’round these parts. But I think some features of discussion at LessWrong look odd in light of the above points about disagreement—including some of the things people say about disagreement.

The wiki, for example, says that “Outside of well-functioning prediction markets, Aumann agreement can probably only be approximated by careful deliberative discourse. Thus, fostering effective deliberation should be seen as a key goal of Less Wrong.” The point of Aumann’s agreement theorem, though, is precisely that ideal rationalists shouldn’t need to engage in deliberative discourse, as usually conceived, in order to reach agreement.

As Cowen and Hanson put it, “Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information.” So sharing evidence the normal way shouldn’t be necessary. Asking someone “what’s the evidence for that?” implicitly says, “I don’t trust your rationality enough to take your word for it.” But when dealing with real people who may or may not have a rational basis for their beliefs, that’s almost always the right stance to take.
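Here’s a minimal sketch of the sense in which a stated opinion can summarize evidence, under deliberately idealized assumptions: a shared prior and conditionally independent evidence. The scenario and numbers are mine, not Cowen and Hanson’s:

```python
# Two idealized Bayesians share a prior on hypothesis H, then each
# privately sees evidence that is conditionally independent given H.
# Announcing posteriors is then as good as pooling the raw evidence,
# because posterior odds combine multiplicatively.

def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

prior = 0.5        # shared prior P(H)
lr_alice = 4.0     # Alice's private likelihood ratio P(e_A|H)/P(e_A|~H)
lr_bob = 0.5       # Bob's private likelihood ratio

# What each announces after updating on their own evidence:
post_alice = prob(odds(prior) * lr_alice)   # 0.8
post_bob = prob(odds(prior) * lr_bob)       # 0.333...

# A listener who knows only the prior and the two stated opinions
# recovers the posterior given ALL the evidence, sight unseen:
combined = prob(odds(post_alice) * odds(post_bob) / odds(prior))

# ...which matches updating on both pieces of evidence directly:
direct = prob(odds(prior) * lr_alice * lr_bob)
print(combined, direct)  # 0.666..., 0.666...
```

The idealization is the point: stated opinions can do that much work only when everyone involved is updating correctly, which is exactly what we don’t trust real people to do.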

Intelligence and Rationality

Intelligence does not equal rationality. Need I say more? Not long ago, I wouldn’t have thought so. I would have thought it was a fundamental premise behind LessWrong, indeed behind old-school scientific skepticism. As Michael Shermer once said, “Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons.”

Yet I’ve heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don’t make bad arguments. When I hear that, I think “whaaat? People on LessWrong make bad arguments all the time!” When this happens, I generally limit myself to trying to point out the flaw in the argument and/or downvoting, and resist the urge to shout “YOUR ARGUMENTS ARE BAD AND YOU SHOULD FEEL BAD.” I just think it.

When I reach for an explanation of why terrible arguments from smart people shouldn’t surprise anyone, I go to Yvain’s Intellectual Hipsters and Meta-Contrarianism, one of my favorite LessWrong posts of all time. Yvain notes that meta-contrarianism often isn’t a good thing, but on re-reading the post I noticed what seems like an important oversight:

A person who is somewhat upper-class will conspicuously signal eir wealth by buying difficult-to-obtain goods. A person who is very upper-class will conspicuously signal that ey feels no need to conspicuously signal eir wealth, by deliberately not buying difficult-to-obtain goods.

A person who is somewhat intelligent will conspicuously signal eir intelligence by holding difficult-to-understand opinions. A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

According to the survey, the average IQ on this site is around 145. People on this site differ from the mainstream in that they are more willing to say death is bad, more willing to say that science, capitalism, and the like are good, and less willing to say that there’s some deep philosophical sense in which 1+1 = 3. That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren’t smart enough to understand that there might be side benefits to death.

The pattern of countersignaling Yvain describes here is real. But it’s important not to forget that sometimes, the super-wealthy signal their wealth by buying things even the moderately wealthy can’t afford. And sometimes, the very intelligent signal their intelligence by holding opinions even the moderately intelligent have trouble understanding. You also get hybrid status moves: designer versions of normally low-class clothes, complicated justifications for opinions normally found among the uneducated.

Robin Hanson has argued that this leads to biases in academia:

I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds. If so, academic beliefs are secondary – the important thing is to clearly show respect to those who make impressive displays like theorems or difficult data analysis. And the obvious way for academics to use their beliefs to show respect for impressive folks is to have academic beliefs track the most impressive recent academic work.

Robin’s post focuses on economics, but I suspect the problem is even worse in my home field of philosophy. As I’ve written before, the problem is that in philosophy, philosophers never agree on whether a philosopher has solved a problem. Therefore, there can be no rewards for being right, only rewards for showing off your impressive intellect. This often means finding clever ways to be wrong.

I need to emphasize that I really do think philosophers are showing off real intelligence, not merely showing off faux-cleverness. GRE scores suggest philosophers are among the smartest academics, and their performance is arguably made more impressive by the fact that GRE quant scores are bimodally distributed based on whether your major required you to spend four years practicing your high school math, with philosophy being one of the majors that doesn’t grant that advantage. Based on this, if you think it’s wrong to dismiss the views of high-IQ people, you shouldn’t be dismissive of mainstream philosophy. But in fact I think LessWrong’s oft-noticed dismissiveness of mainstream philosophy is largely justified.

I’ve found philosophy of religion in particular to be a goldmine of terrible arguments made by smart people. Consider Alvin Plantinga’s modal ontological argument. The argument is sufficiently difficult to understand that I won’t try to explain it here. If you want to understand it, I’m not sure what to tell you except to maybe read Plantinga’s book The Nature of Necessity. In fact, I predict at least one LessWronger will comment on this thread with an incorrect explanation or criticism of the argument. Which is not to say they wouldn’t be smart enough to understand it, just that it might take them a few iterations of getting it wrong to finally get it right. And coming up with an argument like that is no mean feat—I’d guess Plantinga’s IQ is just as high as the average LessWronger’s.

Once you understand the modal ontological argument, though, it quickly becomes obvious that Plantinga’s logic works just as well to “prove” that it’s a necessary truth that pigs fly. Or that Plantinga’s god does not exist. Or even as a general purpose “proof” of any purported mathematical truth you please. The main point is that Plantinga’s argument is not stupid in the sense of being something you’d only come up with if you had a low IQ—the opposite is true. But Plantinga’s argument is stupid in the sense of being something you’d only come up with while under the influence of some serious motivated reasoning.
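For the curious, here is the bare S5 skeleton that the parody substitutions exploit. This is a standard textbook-style schematization, not Plantinga’s own formulation (which runs through “maximal greatness”), so treat it as a sketch rather than the argument itself:

```latex
% Bare S5 skeleton (a schematization, not Plantinga's own formulation).
% In S5, "possibly necessarily p" entails "necessarily p".
\begin{align*}
  1.\ & \Diamond \Box p                      && \text{premise} \\
  2.\ & \Diamond \Box p \rightarrow \Box p   && \text{theorem of S5} \\
  3.\ & \Box p                               && \text{modus ponens, 1, 2}
\end{align*}
% Substitute "pigs fly", "Plantinga's god does not exist", or any
% mathematical conjecture for p, and step 2 still goes through; all
% the work is in making premise 1 sound innocent.
```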

The modal ontological argument is admittedly an extreme case. Rarely is the chasm between the difficulty of the concepts underlying an argument, and the argument’s actual merits, so vast. Still, beware the temptation to affiliate with smart people by taking everything they say seriously.

Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, “They aren’t the same thing, but the correlation is still very strong?”

The Principle of Charity

I’ve made no secret of the fact that I’m not a big fan of the principle of charity—often defined as the rule that you should interpret other people’s arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid. Because of that, I think charitable reading is over-rated compared to fair and accurate reading. When someone says something stupid, you don’t have to pretend otherwise, but it’s really important not to attribute to people stupid things they never said.

More frustrating than this simple disagreement over charity, though, is when people who invoke the principle of charity do so selectively. They apply it to people whose views they’re at least somewhat sympathetic to, but when they find someone they want to attack, they have trouble meeting basic standards of fairness. And in the most frustrating cases, this gets explicit justification: “we need to read these people charitably, because they are obviously very intelligent and rational.” I once had a member of the LessWrong community actually tell me, “You need to interpret me more charitably, because you know I’m sane.” “Actually, buddy, I don’t know that,” I wanted to reply—but didn’t, because that would’ve been rude.

I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it’s nice to close off the “what I actually meant was...” responses. Whatever its merits, though, they can’t depend on the actual intelligence and rationality of the person making an argument. Not only is intelligence no guarantee against making bad arguments, but the whole reason we demand other people tell us their reasons for their opinions in the first place is that we fear their reasons might be bad ones.

As I’ve already explained, there’s a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it’s stupid; otherwise I wouldn’t say it. But at least I’m not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don’t demand that they accept “Chris isn’t saying anything stupid” as an axiom in order to engage with me.

Beware Weirdness for Weirdness’ Sake

There’s a theory in the psychology and sociology of religion that the purpose of seemingly foolish rituals like circumcision and snake-handling is to provide a costly and therefore hard-to-fake signal of group commitment. I think I’ve heard it suggested—though I can’t find by whom—that crazy religious doctrines could serve a similar purpose. It’s easy to say you believe in a god, but being willing to risk ridicule by saying you believe in one god who is three persons, who are all the same god, yet not identical to each other, and you can’t explain how that is but it’s a mystery you accept on faith… now that takes dedication.

Once you notice the general “signal group commitment in costly ways” strategy, it seems to crop up everywhere. Subcultures often seem to go out of their way to be weird, to do things that will shock people outside the subculture, ranging from tattoos and weird clothing to coming up with reasons why things regarded as normal and innocuous in the broader culture are actually evil. Even something as simple as a large body of jargon and in-jokes can do the trick: if someone takes the time to learn all the jargon and in-jokes, you know they’re committed.

This tendency is probably harmless when done with humor and self-awareness, but it’s more worrisome when a group becomes convinced its little bits of weirdness for weirdness’ sake are a sign of its superiority to other groups. And it’s worth being aware of, because it makes sense of signaling moves that aren’t straightforwardly plays for higher status.

The LessWrong community has amassed a truly impressive store of jargon and in-jokes over the years, and some of it’s quite useful (I reiterate my love for the term “meta-contrarian”). But as with all jargon, LessWrongian jargon is often just a silly way of saying things you could have said without it. For example, people say “I have a poor mental model of...” when they could have just said they don’t understand it very well.

That bit of LessWrong jargon is merely silly. Worse, I think, is the jargon around politics. Recently, a friend gave “they avoid blue-green politics” as a reason LessWrongians are more rational than other people. It took a day before it clicked that “blue-green politics” here basically just meant “partisanship.” But complaining about partisanship is old hat—literally. America’s founders were fretting about it back in the 18th century. Nowadays, such worries are something you expect to hear from boringly middle-brow columnists at major newspapers, not edgy contrarians.

But “blue-green politics,” “politics is the mind-killer”… never mind how much content they add, the point is they’re obscure enough to work as an excuse to feel superior to anyone whose political views are too mainstream. Outsiders will probably think you’re weird, invoking obscure jargon to quickly dismiss ideas that seem plausible to them, but on the upside you’ll get to bond with members of your in-group over your feelings of superiority.

A More Humble Rationalism?

I feel like I should wrap up with some advice. Unfortunately, this post was motivated by problems I’d seen, not my having thought of brilliant solutions to them. So I’ll limit myself to some fairly boring, non-brilliant advice.

First, yes, some claims are more rational than others. Some people even do better at rationality overall than others. But the idea of a real person being anything close to an ideal rationalist is an extraordinary claim, and should be met with appropriate skepticism and demands for evidence. Don’t forget that.

Also, beware signaling games. A good dose of Hansonian cynicism, applied to your own in-group, is healthy. Somewhat relatedly, I’ve begun to wonder if “rationalism” is really good branding for a movement. Rationality is systematized winning, sure, but the “rationality” branding isn’t as good for keeping that front and center, especially compared to, say, the effective altruism meme. It’s just a little too easy to forget where “rationality” is supposed to connect with the real world, increasing the temptation for “rationality” to spiral off into signaling games.