Is That Your True Rejection?

It happens every now and then that someone encounters some of my transhumanist-side beliefs—as opposed to my ideas having to do with human rationality—strange, exotic-sounding ideas like superintelligence and Friendly AI. And the one rejects them.

If the one is called upon to explain the rejection, not uncommonly the one says, “Why should I believe anything Yudkowsky says? He doesn’t have a PhD!”

And occasionally someone else, hearing, says, “Oh, you should get a PhD, so that people will listen to you.” Or this advice may even be offered by the same one who expressed disbelief, saying, “Come back when you have a PhD.”

Now, there are good and bad reasons to get a PhD. This is one of the bad ones.

There are many reasons why someone might actually have an initial adverse reaction to transhumanist theses. Most are matters of pattern recognition, rather than verbal thought: the thesis calls to mind an associated category like “strange weird idea” or “science fiction” or “end-of-the-world cult” or “overenthusiastic youth.”1 Immediately, at the speed of perception, the idea is rejected.

If someone afterward says, “Why not?” this launches a search for justification, but the search won’t necessarily hit on the true reason. By “true reason,” I don’t mean the best reason that could be offered. Rather, I mean whichever causes were decisive as a matter of historical fact, at the very first moment the rejection occurred.

Instead, the search for justification hits on the justifying-sounding fact, “This speaker does not have a PhD.” But I also don’t have a PhD when I talk about human rationality, so why is the same objection not raised there?

More to the point, if I had a PhD, people would not treat this as a decisive factor indicating that they ought to believe everything I say. Rather, the same initial rejection would occur, for the same reasons; and the search for justification, afterward, would terminate at a different stopping point.

They would say, “Why should I believe you? You’re just some guy with a PhD! There are lots of those. Come back when you’re well-known in your field and tenured at a major university.”

But do people actually believe arbitrary professors at Harvard who say weird things? Of course not.

If you’re saying things that sound wrong to a novice, as opposed to just rattling off magical-sounding technobabble about leptical quark braids in N + 2 dimensions; and if the hearer is a stranger, unfamiliar with you personally and unfamiliar with the subject matter of your field; then I suspect that the point at which the average person will actually start to grant credence overriding their initial impression, purely because of academic credentials, is somewhere around the Nobel Laureate level. If that. Roughly, you need whatever level of academic credential qualifies as “beyond the mundane.”

This is more or less what happened to Eric Drexler, as far as I can tell. He presented his vision of nanotechnology, and people said, “Where are the technical details?” or “Come back when you have a PhD!” And Eric Drexler spent six years writing up technical details and got his PhD under Marvin Minsky for doing it. And Nanosystems is a great book. But did the same people who said, “Come back when you have a PhD,” actually change their minds at all about molecular nanotechnology? Not so far as I ever heard.

This might be an important thing for young businesses and new-minted consultants to keep in mind—that what your failed prospects tell you is the reason for rejection may not make the real difference; and you should ponder that carefully before spending huge efforts. If the venture capitalist says, “If only your sales were growing a little faster!” or if the potential customer says, “It seems good, but you don’t have feature X,” that may not be the true rejection. Fixing it may, or may not, change anything.

And it would also be something to keep in mind during disagreements. Robin Hanson and I share a belief that two rationalists should not agree to disagree: they should not have common knowledge of epistemic disagreement unless something is very wrong.2

I suspect that, in general, if two rationalists set out to resolve a disagreement that persisted past the first exchange, they should expect to find that the true sources of the disagreement are either hard to communicate, or hard to expose. E.g.:

  • Uncommon, but well-supported, scientific knowledge or math;

  • Long inferential distances;

  • Hard-to-verbalize intuitions, perhaps stemming from specific visualizations;

  • Zeitgeists inherited from a profession (that may have good reason for it);

  • Patterns perceptually recognized from experience;

  • Sheer habits of thought;

  • Emotional commitments to believing in a particular outcome;

  • Fear that a past mistake could be disproved;

  • Deep self-deception for the sake of pride or other personal benefits.

If the matter were one in which all the true rejections could be easily laid on the table, the disagreement would probably be so straightforward to resolve that it would never have lasted past the first meeting.

“Is this my true rejection?” is something that both disagreers should surely be asking themselves, to make things easier on the other person. However, attempts to directly, publicly psychoanalyze the other may cause the conversation to degenerate very fast, from what I’ve seen.

Still—“Is that your true rejection?” should be fair game for disagreers to humbly ask, if there’s any productive way to pursue that sub-issue. Maybe the rule could be that you can openly ask, “Is that simple straightforward-sounding reason your true rejection, or does it come from intuition-X or professional-zeitgeist-Y?” while the more embarrassing possibilities lower on the table are left to the Other’s conscience, as their own responsibility to handle.

1. See “Science as Attire” in Map and Territory.

2. See Hal Finney, “Agreeing to Agree,” Overcoming Bias (blog), 2006, http://www.overcomingbias.com/2006/12/agreeing_to_agr.html.