Existential Risk and Public Relations

[Added 02/24/14: Some time after writing this post, I discovered that it was based on a somewhat unrepresentative picture of SIAI. I still think that the concerns therein were legitimate, but they had less relative significance than I had thought at the time. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don’t apply to MIRI as presently constituted.]

A common trope on Less Wrong is the idea that governments and the academic establishment have neglected to consider, study, and work against existential risk on account of their shortsightedness. This idea is undoubtedly true in large measure. In my opinion and in the opinion of many Less Wrong posters, it would be very desirable to get more people thinking seriously about existential risk. The question then arises: is it possible to get more people thinking seriously about existential risk? A first approximation to an answer is “yes, by talking about it.” But this answer requires substantial qualification: if the speaker or the speaker’s claims have low credibility in the eyes of the audience, the speaker will be almost entirely unsuccessful in persuading that audience to think seriously about existential risk. Worse, a speaker who has low credibility in the eyes of an audience member actively decreases that audience member’s receptiveness to thinking about existential risk. Rather perversely, then, speakers who have low credibility in the eyes of a sufficiently large fraction of their audience systematically raise existential risk by decreasing people’s inclination to think about the subject. This is true whether or not the speakers’ claims are valid.

As Yvain has discussed in his excellent article titled The Trouble with “Good”:

To make an outrageous metaphor: our brains run a system rather like Less Wrong’s karma. You’re allergic to cats, so you down-vote “cats” a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote “Palestinians” a few points. Richard Dawkins just said something especially witty, so you up-vote “atheism”. High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.

When Person X makes a claim which an audience member finds uncredible, the audience member’s brain (semiconsciously) makes a mental note of the form “Boo for Person X’s claims!” If the audience member also knows that Person X is an advocate of existential risk reduction, the audience member’s brain may (semiconsciously) make a mental note of the type “Boo for existential risk reduction!”

The negative reaction to Person X’s claims is especially strong if the audience member perceives Person X’s claims as arising from a (possibly subconscious) attempt on Person X’s part to attract attention and gain higher status, or even simply to feel as though he or she has high status. As Yvain says in his excellent article titled That other kind of status:

But many, maybe most human actions are counterproductive at moving up the status ladder. 9-11 Conspiracy Theories are a case in point. They’re a quick and easy way to have most of society think you’re stupid and crazy. So is serious interest in the paranormal or any extremist political or religious belief. So why do these stay popular?

[...]

a person trying to estimate zir social status must balance two conflicting goals. First, ze must try to get as accurate an assessment of status as possible in order to plan a social life and predict others’ reactions. Second, ze must construct a narrative that allows them to present zir social status as as high as possible, in order to reap the benefits of appearing high status.

[...]

In this model, people aren’t just seeking status, they’re (also? instead?) seeking a state of affairs that allows them to believe they have status. Genuinely having high status lets them assign themselves high status, but so do lots of other things. Being a 9-11 Truther works for exactly the reason mentioned in the original quote: they’ve figured out a deep and important secret that the rest of the world is too complacent to realize.

I’m presently a graduate student in pure mathematics. During graduate school I’ve met many smart people who I wish would take existential risk more seriously. Most such people who have heard of Eliezer do not find his claims credible. My understanding is that this is because Eliezer has made some claims which they perceive as falling under the above rubric, and the strength of their negative reaction to these has tarnished their mental image of all of Eliezer’s claims. Since Eliezer supports existential risk reduction, I believe that this has made them less inclined to think about existential risk than they were before they heard of Eliezer.

There is also a social effect which compounds this problem. Even people who are not directly influenced by the issue just described become less likely to think seriously about existential risk, on account of their desire to avoid being perceived as associated with claims that others find uncredible.

I’m very disappointed that Eliezer has made statements such as:

If I got hit by a meteorite now, what would happen is that Michael Vassar would take over sort of taking responsibility for seeing the planet through to safety...Marcello Herreshoff would be the one tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don’t know of any other person who could do that...

which are easily construed as claims that his work has higher expected value to humanity than the work of virtually all humans in existence. Even if such claims are true, people do not have the information that they need to verify that they are true, and so virtually everybody who could be helping to assuage existential risk finds such claims uncredible. Many such people have an especially negative reaction to such claims because they can be viewed as arising from a tendency toward status grubbing, and humans are very strongly wired to be suspicious of those whom they suspect of vying for inappropriately high status. I believe that such people who come into contact with statements of Eliezer’s like the one I have quoted above are statistically less likely to work to reduce existential risk than they were before coming into contact with such statements. I therefore believe that by making such claims, Eliezer has increased existential risk.

I would go further than that and say that I presently believe that donating to SIAI has negative expected impact on existential risk reduction, because SIAI staff are making uncredible claims which are poisoning the existential risk reduction meme. This is a matter on which reasonable people can disagree. In a recent comment, Carl Shulman expressed the view that though SIAI has had some negative impact on the existential risk reduction meme, the net impact of SIAI on the existential risk meme is positive. In any case, there’s definitely room for improvement on this point.

Last July I made a comment raising this issue and Vladimir_Nesov suggested that I contact SIAI. Since then I have corresponded with Michael Vassar about this matter. My understanding of Michael Vassar’s position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer’s are too irrational for it to be worthwhile for them to be thinking about existential risk. I may have misunderstood Michael’s position and encourage him to make a public statement clarifying it. If I have correctly understood his position, I do not find it credible.

I believe that if Carl Shulman is right, then donating to SIAI has positive expected impact on existential risk reduction. But even if this is the case, I believe that a higher expected value strategy is to withhold donations from SIAI and inform SIAI that you will fund them if and only if they require their staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible. I suggest that those who share my concerns adopt the latter policy until their concerns have been resolved.

Before I close, I should emphasize that my post should not be construed as an attack on Eliezer. I view Eliezer as an admirable person and don’t think that he would ever knowingly do something that raises existential risk. Roko’s Asperger’s Poll suggests a strong possibility that the Less Wrong community exhibits an unusually high abundance of the traits associated with Asperger’s Syndrome. It would not be at all surprising if the founders of Less Wrong have a similarly unusual abundance of these traits. I believe that, more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.