Undiscriminating Skepticism

Tl;dr: Since it can be cheap and easy to attack everything your tribe doesn’t believe, you shouldn’t trust the rationality of just anyone who slams astrology and creationism; these beliefs aren’t just false, they’re also non-tribal among educated audiences. Test what happens when a “skeptic” argues for a non-tribal belief, or argues against a tribal belief, before you decide they’re good general rationalists. This post is intended to be reasonably accessible to outside audiences.

I don’t believe in UFOs. I don’t believe in astrology. I don’t believe in homeopathy. I don’t believe in creationism. I don’t believe there were explosives planted in the World Trade Center. I don’t believe in haunted houses. I don’t believe in perpetual motion machines. I believe that all these beliefs are not only wrong but visibly insane.

If you know nothing else about me but this, how much credit should you give me for general rationality?

Certainly anyone who was skillful at adding up evidence, considering alternative explanations, and assessing prior probabilities, would end up disbelieving in all of these.

But there would also be a simpler explanation for my views, a less rare factor that could explain it: I could just be anti-non-mainstream. I could be in the habit of hanging out in moderately educated circles, and know that astrology and homeopathy are not accepted beliefs of my tribe. Or just perceptually recognize them, on a wordless level, as “sounding weird”. And I could mock anything that sounds weird and that my fellow tribesfolk don’t believe, much as creationists who hang out with fellow creationists mock evolution for its ludicrous assertion that apes give birth to human beings.

You can get cheap credit for rationality by mocking wrong beliefs that everyone in your social circle already believes to be wrong. It wouldn’t mean that I have any ability at all to notice a wrong belief that the people around me believe to be right, or vice versa—to further discriminate truth from falsity, beyond the fact that my social circle doesn’t already believe in something.

Back in the good old days, there was a simple test for this syndrome that would get quite a lot of mileage: You could just ask me what I thought about God. If I treated the idea with deeper respect than I treated astrology, holding it worthy of serious debate even if I said I disbelieved in it, then you knew that I was taking my cues from my social surroundings—that if the people around me treated a belief as high-prestige, high-status, I wouldn’t start mocking it no matter what the state of evidence.

On the other hand, suppose I said without hesitation that my epistemic state on God was similar to my epistemic state on psychic powers: no positive evidence, lots of failed tests, highly unfavorable prior, and if you believe it under those circumstances then something is wrong with your mind. Then you would have heard a bit of skepticism that might cost me something socially, and that not everyone around me would have endorsed, even in educated circles. You would know it wasn’t just a cheap way of picking up points.

Today the God-test no longer works, because some people realized that the taking-it-seriously aura of religion is in fact the main thing left which prevents people from noticing the epistemic awfulness; there has been a concerted and, I think, well-advised effort to mock religion and strip it of its respectability. The upshot is that there are now quite wide social circles in which God is just another stupid belief that we all know we don’t believe in, on the same list with astrology. You could be dealing with an adept rationalist, or you could just be dealing with someone who reads Reddit.

And of course I could easily go on to name some beliefs that others think are wrong and that I think are right, or vice versa, but would inevitably lose some of my audience at each step along the way—just as, a couple of decades ago, I would have lost a lot of my audience by saying that religion was unworthy of serious debate. (Thankfully, today this outright dismissal is at least considered a respectable, mainstream position even if not everyone holds it.)

I probably won’t lose much by citing anti-Artificial-Intelligence views as an example of undiscriminating skepticism. I think a majority among educated circles are sympathetic to the argument that brains are not magic and so there is no obstacle in principle to building machines that think. But there are others, albeit in the minority, who recognize Artificial Intelligence as “weird-sounding” and “sci-fi”, a belief in something that has never yet been demonstrated, hence unscientific—the same epistemic reference class as believing in aliens or homeopathy.

(This is technically a demand for unobtainable evidence. The asymmetry with homeopathy can be summed up as follows: First: If we learn that Artificial Intelligence is definitely impossible, we must have learned some new fact unknown to modern science—everything we currently know about neurons and the evolution of intelligence suggests that no magic was involved. On the other hand, if we learn that homeopathy is possible, we must have learned some new fact unknown to modern science; if everything else we believe about physics is true, homeopathy shouldn’t work. Second: If homeopathy works, we can expect double-blind medical studies to demonstrate its efficacy right now; the absence of this evidence is very strong evidence of absence. If Artificial Intelligence is possible in theory and in practice, we can’t necessarily expect its creation to be demonstrated using current knowledge—this absence of evidence is only weak evidence of absence.)
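To make that asymmetry concrete, here is a minimal sketch in Python, using made-up illustrative probabilities that come from neither this post nor any study: the strength of “absence of evidence” as evidence of absence is just the likelihood ratio between “no demonstration if the belief is false” and “no demonstration if the belief is true”.

    # A hedged illustration with invented numbers: how strongly does
    # "no demonstration so far" count against a hypothesis H?
    # The update is the likelihood ratio P(no demo | H false) / P(no demo | H true).

    def absence_likelihood_ratio(p_no_demo_if_true: float,
                                 p_no_demo_if_false: float = 1.0) -> float:
        """Likelihood ratio favoring 'H is false', given that no demonstration exists."""
        return p_no_demo_if_false / p_no_demo_if_true

    # Homeopathy: if it worked, double-blind trials should already have shown it,
    # so P(no demonstration | works) is tiny -- absence is strong evidence of absence.
    print(absence_likelihood_ratio(p_no_demo_if_true=0.02))  # 50.0

    # Artificial Intelligence: even if it is possible in principle, we would not
    # expect a working AI from current knowledge, so P(no demonstration | possible)
    # is high -- absence is only weak evidence of absence.
    print(absence_likelihood_ratio(p_no_demo_if_true=0.9))   # ~1.1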

I’m using Artificial Intelligence as an example, because it’s a case where you can see some “skeptics” directing their skepticism at a belief that is very popular in educated circles, that is, the nonmysteriousness and ultimate reverse-engineerability of mind. You can even see two skeptical principles brought into conflict—does a good skeptic disbelieve in Artificial Intelligence because it’s a load of sci-fi which has never been demonstrated? Or does a good skeptic disbelieve in human exceptionalism, since it would require some mysterious, unanalyzable essence-of-mind unknown to modern science?

It’s on questions like these that we find the frontiers of knowledge, and everything now in the settled lands was once on the frontier. It might seem like a matter of little importance to debate weird non-mainstream beliefs; a matter for easy dismissals and open scorn. But if this policy is implemented in full generality, progress goes down the tubes. The mainstream is not completely right, and future science will not just consist of things that sound reasonable to everyone today—there will be at least some things in it that sound weird to us. (This is certainly the case if something along the lines of Artificial Intelligence is considered weird!) And yes, eventually such scientific truths will be established by experiment, but somewhere along the line—before they are definitely established and everyone already believes in them—the testers will need funding.

Being skeptical about some non-mainstream beliefs is not a fringe project of little importance, not always a slam-dunk, not a bit of occasional pointless drudgery—though I can certainly understand why it feels that way to argue with creationists. Skepticism is just the converse of acceptance, and so to be skeptical of a non-mainstream belief is to try to contribute to the project of advancing the borders of the known—to stake an additional epistemic claim that the borders should not expand in this direction, and should advance in some other direction instead.

This is high and difficult work—certainly much more difficult than the work of mocking everything that sounds weird and that the people in your social circle don’t already seem to believe.

To put it more formally, before I believe that someone is performing useful cognitive work, I want to know that their skepticism discriminates truth from falsehood, making a contribution over and above the contribution of this-sounds-weird-and-is-not-a-tribal-belief. In Bayesian terms, I want to know that p(mockery|belief false & not a tribal belief) > p(mockery|belief true & not a tribal belief).
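As a minimal sketch of what that inequality buys you, with probabilities invented purely for illustration, Bayes’ rule says that a skeptic’s mockery shifts your belief only to the extent that those two conditional probabilities differ:

    # A hedged illustration with invented numbers: how much does a skeptic's
    # mockery of a weird, non-tribal belief tell you that the belief is false?
    # It is evidence only insofar as
    #   P(mockery | belief false & non-tribal) > P(mockery | belief true & non-tribal).

    def posterior_belief_false(prior_false: float,
                               p_mock_given_false: float,
                               p_mock_given_true: float) -> float:
        """Posterior P(belief is false | the skeptic mocked it), by Bayes' rule."""
        joint_false = prior_false * p_mock_given_false
        joint_true = (1.0 - prior_false) * p_mock_given_true
        return joint_false / (joint_false + joint_true)

    # A discriminating skeptic mocks false weird beliefs far more often than true
    # ones, so observing their mockery moves the posterior: 0.5 -> ~0.89.
    print(posterior_belief_false(0.5, p_mock_given_false=0.8, p_mock_given_true=0.1))

    # An undiscriminating skeptic mocks everything weird at the same rate whether
    # it is true or false, so the posterior equals the prior (0.5 -> 0.5):
    # their mockery is zero evidence.
    print(posterior_belief_false(0.5, p_mock_given_false=0.9, p_mock_given_true=0.9))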

If I recall correctly, the US Air Force’s Project Blue Book, on UFOs, explained away as a sighting of the planet Venus what turned out to actually be an experimental aircraft. No, I don’t believe in UFOs either; but if you’re going to explain away experimental aircraft as Venus, then nothing else you say provides further Bayesian evidence against UFOs either. You are merely an undiscriminating skeptic. I don’t believe in UFOs, but in order to credit Project Blue Book with additional help in establishing this, I would have to believe that if there were UFOs then Project Blue Book would have turned in a different report.

And so if you’re just as skeptical of a weird, non-tribal belief that turns out to have pretty good support, you just blew the whole deal—that is, if I pay any extra attention to your skepticism, it ought to be because I believe you wouldn’t mock a weird non-tribal belief that was worthy of debate.

Personally, I think that Michael Shermer blew it by mocking molecular nanotechnology, and Penn and Teller blew it by mocking cryonics (justification: more or less exactly the same reasons I gave for Artificial Intelligence). Conversely, Richard Dawkins scooped up a huge truckload of actual-discriminating-skeptic points, at least in my book, for not making fun of the many-worlds interpretation when he was asked about it in an interview; indeed, Dawkins noted (correctly) that the traditional collapse postulate pretty much has to be incorrect. The many-worlds interpretation isn’t just the formally simplest explanation that fits the facts; it also sounds weird and is not yet a tribal belief of the educated crowd; so whether someone makes fun of MWI is indeed a good test of whether they understand Occam’s Razor or are just mocking everything that’s not a tribal belief.

Of course you may not trust me about any of that. And so my purpose today is not to propose a new litmus test to replace atheism.

But I do propose that before you give anyone credit for being a smart, rational skeptic, you ask them to defend some non-mainstream belief. And no, atheism doesn’t count as non-mainstream anymore, no matter what the polls show. It has to be something that most of their social circle doesn’t believe, or something that most of their social circle does believe which they think is wrong. Dawkins endorsing many-worlds still counts for now, although its usefulness as an indicator is fading fast… but the point is not to endorse many-worlds, but to see them take some sort of positive stance on where the frontiers of knowledge should change.

Don’t get me wrong, there’s a whole crazy world out there, and when Richard Dawkins starts whaling on astrology in “The Enemies of Reason” documentary, he is doing good and necessary work. But it’s dangerous to let people pick up too much credit just for slamming astrology and homeopathy and UFOs and God. What if they become famous skeptics by picking off the cheap targets, and then use that prestige and credibility to go after nanotechnology? Who will dare to consider cryonics now that it’s been featured on an episode of Penn and Teller’s “Bullshit”? Under the current system you can gain high prestige in educated circles just by targeting beliefs like astrology that are widely believed to be uneducated; but then the same guns can be turned on new ideas like the many-worlds interpretation, even though it’s being actively debated by physicists. And that’s why I suggest, not any particular litmus test, but just that you ought to have to stick your neck out and say something a little less usual—say where you are not skeptical (and most of your tribemates are) or where you are skeptical (and most of the people in your tribe are not).

I am minded to pay attention to Robyn Dawes as a skillful rationalist, not because Dawes has slammed easy targets like astrology, but because he also took the lead in assembling and popularizing the evidence that nearly all schools of psychotherapy lack experimental support, and that superstitions such as Rorschach ink-blot interpretation persist in the face of literally hundreds of experiments that have tried and failed to find any evidence for them. It’s not that psychotherapy seemed like a difficult target after Dawes got through with it, but that, at the time he attacked it, people in educated circles still thought of it as something that educated people believed in. It’s not quite as useful today, but back when Richard Feynman published “Surely You’re Joking, Mr. Feynman!” you could pick up evidence that he was actually thinking from the fact that he disrespected psychotherapists as well as psychics.

I’ll conclude with some simple, if not entirely trustworthy, indicators that the skeptic is just filling in a cheap and largely automatic mockery template:

  • The “skeptic” opens by remarking on the crazy true believers and wishful thinkers who believe in X, where there seem to be a surprising number of physicists making up the population of those wacky cult victims who believe in X. (The physicist-test is not an infallible indicator of rightness or even non-stupidity, but it’s a filter that rapidly picks up on, say, strong AI, molecular nanotechnology, cryonics, the many-worlds interpretation, and so on.) Bonus point losses if the “skeptic” remarks on how easily physicists are seduced by sci-fi ideas. The reason why this is a particularly negative indicator is that when someone is in a mode of automatically arguing against everything that seems weird and isn’t a belief of their tribe—of rejecting weird beliefs as a matter of naked perceptual recognition of weirdness—then they tend to perceptually fill in the blank by assuming that anything weird is believed by wacky cult victims (i.e., people Not Of Our Tribe). And they don’t backtrack, or wonder otherwise, even if they find out that the “cult” seems to exhibit a surprising number of people who go around talking about rationality and/or members with PhDs in physics. Roughly, they have an automatic template for mocking weird beliefs, and if this requires them to just swap in physicists for astrologers as gullible morons, that’s what they’ll do. Of course physicists can be gullible morons too, but you should be establishing that as a surprising conclusion, not using it as an opening premise!

  • The “skeptic” offers up items of “evidence” against X which are not much less expected in the case that X is true than in the case that X is false; in other words, they fail to grasp the elementary Bayesian notion of evidence. I don’t believe that UFOs are alien visitors, but my skepticism has nothing to do with all the crazy people who believe in UFOs—the existence of wacky cults is not much less expected in the case that aliens do exist, than in the case that they do not. (I am skeptical of UFOs, not because I fear affiliating myself with the low-prestige people who believe in UFOs, but because I don’t believe aliens would (a) travel across interstellar distances AND (b) hide all signs of their presence AND THEN (c) fly gigantic non-nanotechnological aircraft over our military bases with their exterior lights on.)

  • The demand for unobtainable evidence is a special case of the above, and of course a very common mode of skepticism gone wrong. Artificial Intelligence and molecular nanotechnology both involve beliefs in the future feasibility of technologies that we can’t build right now, but (arguendo) seem to be strongly permitted by current scientific belief, i.e., the non-ineffability of the brain, or the basic physical calculations which seem to show that simple nanotechnological machines should work. To discard all the arguments from cognitive science and rely on the knockdown argument “no reliable reporter has ever seen an AI!” is blindly filling in the template from haunted houses.

  • The “skeptic” tries to scare you away from the belief in their very first opening remarks: for example, pointing out how UFO cults beat and starve their victims (when this can just as easily happen if aliens are visiting the Earth). The negative consequences of a false belief may be real, legitimate truths to be communicated; but only after you establish by other means that the belief is factually false—otherwise it’s the logical fallacy of appeal to consequences.

  • They mock first and counterargue later or not at all. I do believe there’s a place for mockery in the war on dumb ideas, but first you write the crushing factual counterargument, then you conclude with the mockery.

I’ll conclude the conclusion by observing that poor skepticism can just as easily exist in a case where a belief is wrong as when a belief is right, so pointing out these flaws in someone’s skepticism can hardly serve to establish a positive belief about where the frontiers of knowledge should move.