Existential Risk and Public Relations

[Added 02/24/14: Some time after writing this post, I discovered that it was based on a somewhat unrepresentative picture of SIAI. I still think that the concerns therein were legitimate, but they had less relative significance than I had thought at the time. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

A common trope on Less Wrong is the idea that governments and the academic establishment have neglected to consider, study, and work against existential risk on account of their shortsightedness. This is undoubtedly true in large measure. In my opinion, and in the opinion of many Less Wrong posters, it would be very desirable to get more people thinking seriously about existential risk. The question then arises: is this possible? A first-approximation answer is "yes, by talking about it." But this answer requires substantial qualification: if a speaker, or the speaker's claims, have low credibility in the eyes of the audience, the speaker will be almost entirely unsuccessful in persuading the audience to think seriously about existential risk. Worse, such a speaker actively decreases audience members' receptiveness to thinking about the topic. Rather perversely, then, speakers who have low credibility in the eyes of a sufficiently large fraction of their audience systematically raise existential risk by decreasing people's inclination to think about it. This is true whether or not the speakers' claims are valid.

As Yvain has discussed in his excellent article titled The Trouble with "Good":

To make an outrageous metaphor: our brains run a system rather like Less Wrong’s karma. You’re allergic to cats, so you down-vote “cats” a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote “Palestinians” a few points. Richard Dawkins just said something especially witty, so you up-vote “atheism”. High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.

When Person X makes a claim which an audience member finds uncredible, the audience member’s brain (semiconsciously) makes a mental note of the form “Boo for Person X’s claims!” If the audience member also knows that Person X is an advocate of existential risk reduction, the audience member’s brain may (semiconsciously) make a mental note of the type “Boo for existential risk reduction!”

The negative reaction to Person X’s claims is especially strong if the audience member perceives Person X’s claims as arising from a (possibly subconscious) attempt on Person X’s part to attract attention and gain higher status, or even simply to feel as though he or she has high status. As Yvain says in his excellent article titled That other kind of status:

But many, maybe most human actions are counterproductive at moving up the status ladder. 9-11 Conspiracy Theories are a case in point. They’re a quick and easy way to have most of society think you’re stupid and crazy. So is serious interest in the paranormal or any extremist political or religious belief. So why do these stay popular?

[...]

a person trying to estimate zir social status must balance two conflicting goals. First, ze must try to get as accurate an assessment of status as possible in order to plan a social life and predict others’ reactions. Second, ze must construct a narrative that allows them to present zir social status as as high as possible, in order to reap the benefits of appearing high status.

[...]

In this model, people aren’t just seeking status, they’re (also? instead?) seeking a state of affairs that allows them to believe they have status. Genuinely having high status lets them assign themselves high status, but so do lots of other things. Being a 9-11 Truther works for exactly the reason mentioned in the original quote: they’ve figured out a deep and important secret that the rest of the world is too complacent to realize.

I'm presently a graduate student in pure mathematics. During graduate school I've met many smart people who I wish would take existential risk more seriously. Most such people who have heard of Eliezer do not find his claims credible. My understanding is that this is because Eliezer has made some claims which they perceive as falling under the above rubric, and the strength of their negative reaction to these has tarnished their mental image of all of Eliezer's claims. Since Eliezer supports existential risk reduction, I believe this has made them less inclined to think about existential risk than they were before they heard of Eliezer.

There is also a social effect which compounds this problem: even people who are not directly influenced by it become less likely to think seriously about existential risk, on account of their desire to avoid being perceived as associated with claims that others find uncredible.

I’m very disappointed that Eliezer has made statements such as:

If I got hit by a meteorite now, what would happen is that Michael Vassar would take over sort of taking responsibility for seeing the planet through to safety...Marcello Herreshoff would be the one tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don’t know of any other person who could do that...

which are easily construed as claims that his work has higher expected value to humanity than the work of virtually all other humans in existence. Even if such claims are true, people do not have the information they need to verify them, and so virtually everybody who could be helping to reduce existential risk finds such claims uncredible. Many such people have an especially negative reaction because the claims can be viewed as arising from a tendency toward status grubbing, and humans are very strongly wired to be suspicious of those whom they suspect of vying for inappropriately high status. I believe that people who come into contact with statements of Eliezer's like the one quoted above are statistically less likely to work to reduce existential risk than they were before encountering such statements. I therefore believe that by making such claims, Eliezer has increased existential risk.

I would go further than that and say that I presently believe that donating to SIAI has negative expected impact on existential risk reduction, because SIAI staff are making uncredible claims which are poisoning the existential risk reduction meme. This is a matter on which reasonable people can disagree. In a recent comment, Carl Shulman expressed the view that though SIAI has had some negative impact on the existential risk reduction meme, its net impact on the meme is positive. In any case, there's definitely room for improvement on this point.

Last July I made a comment raising this issue, and Vladimir_Nesov suggested that I contact SIAI. Since then I have corresponded with Michael Vassar about the matter. My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk by remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk at all. I may have misunderstood Michael's position, and I encourage him to make a public statement clarifying it. If I have understood it correctly, I do not find it credible.

I believe that if Carl Shulman is right, then donating to SIAI has positive expected impact on existential risk reduction. But even if this is the case, I believe a higher expected value strategy is to withhold donations from SIAI and to inform SIAI that you will fund them if and only if they require their staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible. I suggest that those who share my concerns adopt the latter policy until those concerns have been resolved.

Before I close, I should emphasize that this post should not be construed as an attack on Eliezer. I view Eliezer as an admirable person and don't think that he would ever knowingly do something that raises existential risk. Roko's Aspergers Poll suggests a strong possibility that the Less Wrong community exhibits an unusually high abundance of the traits associated with Asperger's Syndrome. It would not be at all surprising if the founders of Less Wrong exhibit a similarly unusual abundance of these traits. I believe that, more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.