Reflections on a Personal Public Relations Failure: A Lesson in Communication

Related To: Are Your Enemies Innately Evil?, Talking Snakes: A Cautionary Tale, How to Not Lose an Argument

Eliezer’s excellent article Are Your Enemies Innately Evil? points to the fact that when two people have a strong disagreement, it’s often the case that each person sincerely believes that he or she is on the right side. Yvain’s excellent article Talking Snakes: A Cautionary Tale highlights the fact that to each such person, without knowledge of the larger context that the other person’s beliefs fit into, the other person’s beliefs can appear absurd. This phenomenon occurs frequently enough that it’s important for each participant in an argument to make a strong effort to understand where the other person is coming from and to frame one’s own ideas with the other person’s perspective in mind.

Last month I made a sequence of posts [1], [2], [3], [4] raising concerns about the fruitfulness of SIAI’s approach to reducing existential risk. My concerns were sincere and I made my sequence of postings in good faith. All the same, there’s a sense in which my sequence of postings was a failure. In the first of these posts I argued that the SIAI staff should place greater emphasis on public relations. Ironically, in my subsequent postings I myself should have placed greater emphasis on public relations. I made mistakes which damaged my credibility and barred me from serious consideration by some of those whom I hoped to influence.

In the present posting I catalog these mistakes and describe the related lessons that I’ve learned about communication.

Mistake #1: Starting during the Singularity Summit

I started my string of posts during the Singularity Summit. This was interpreted by some to be underhanded and overly aggressive. In fact, the timing of my string of posts was influenced more by the appearance of XiXiDu’s Should I Believe What the SIAI claims? than by anything else, but it’s understandable that some SIAI supporters would construe the timing of my posts as premeditated and hostile in nature. Moreover, the timing of my posts did not give the SIAI staff a fair chance to respond in real time. I should have avoided posting during a period when I knew that the SIAI staff would be occupied, waiting until a week after the Singularity Summit to begin my sequence of posts.

Mistake #2: Failing to balance criticism with praise

As Robin Hanson says in Against Disclaimers:

If you say anything nice (or critical) about anything associated with a group or person you are presumed to support (or oppose) them overall.

I don’t agree with Hanson that people are wrong to presume this—I think that statistically speaking, the above presumption is correct.

For this reason, it’s important to balance criticism of a group which one does not oppose with praise. I think that a number of things that SIAI staff have done have had favorable expected impacts on existential risk, even if I think other things they have done have negative expected impact. By failing to make this point salient, I misled Airedale and others into believing that I have an agenda against SIAI.

Mistake #3: Letting my emotions get the better of me

My first pair of postings attracted considerable criticism, most of which appeared to me to be ungrounded. I unreasonably assumed that these criticisms were made in bad faith, failing to take to heart the message of Talking Snakes: A Cautionary Tale that one’s positions can appear absurd to those who have access to a different set of contextual data from one’s own. As Gandhi said:

...what appears to be truth to the one may appear to be error to the other.

We’re wired to generalize from one example and erroneously assume that others have access to the same context that we do. As such, it’s natural for us to assume that when others strongly disagree with us it’s because they’re unreasonable people. While this is understandable, it’s conducive to emotional agitation, which when left unchecked typically leads to further misunderstanding.

I should have waited until I had returned to emotional equilibrium before continuing my string of postings beyond the first two. Because I did not wait until returning to emotional equilibrium, my final pair of postings was less effectiveness-oriented than it should have been and more about satisfying my immediate need for self-expression. I wholeheartedly agree with a relevant quote by Eliezer from Circular Altruism:

This isn’t about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feelings of comfort or discomfort with a plan.

Mistake #4: Getting personal with insufficient justification

As Eliezer has said in Politics is the Mind-Killer, it’s best to avoid touching on emotionally charged topics when possible. One LW poster who’s really great at this and whom I look to as a role model in this regard is Yvain.

In my posting on The Importance of Self-Doubt I leveled personal criticisms which many LW commentators felt uncomfortable with [1], [2], [3], [4]. It was wrong for me to make such personal criticisms without having thoroughly explored alternative avenues for accomplishing my goals. At least initially, I could have spoken in more general terms as prase did in a comment on my post—this may have sufficed to accomplish my goals without the need to discuss the sensitive subject matter that I did.

Mistake #5: Failing to share my posts with an SIAI supporter before posting

It’s best to share one’s proposed writings with a member of a given group before offering public criticisms of the activities of that group’s members. This gives him or her an opportunity to respond and provide context of which one may be unaware. After I made my sequence of postings, I had an extensive dialogue with SIAI Visiting Fellow Carl Shulman. In the course of this dialogue I realized that I had crucial misconceptions about some of SIAI’s activities. I had been unaware of some of the activities which SIAI staff have been engaging in, activities which I judge to have significant positive expected value. I had also misinterpreted some of SIAI’s policies in ways that made them look worse than they now appear to me to be.

Sharing my posts with Carl before posting would have given me the opportunity to offer a more evenhanded account of SIAI’s activities and would have given me the feedback needed to avoid being misinterpreted.

Mistake #6: Expressing apparently absurd views before contextualizing them

In a comment on one of my postings, I expressed very low confidence in the success of Eliezer’s project. In line with Talking Snakes: A Cautionary Tale, I imagine that a staunch atheist would perceive a fundamentalist Christian’s probability estimate of the truth of Christianity to be absurd, and that on the flip side a fundamentalist Christian would perceive a staunch atheist’s probability estimate of the truth of Christianity to be absurd. In the absence of further context, the beliefs of somebody coming from a very different worldview inevitably seem absurd, independently of whether or not they’re well grounded.

There are two problems with beginning a conversation on a topic by expressing positions wildly different from those of one’s conversation partners. One is that this tends to damage one’s own credibility in one’s conversation partners’ eyes. The other is that doing so often carries an implicit suggestion that one’s conversation partners are very irrational. As Robin Hanson says in Disagreement is Disrespect:

...while disagreement isn’t hate, it is disrespect. When you knowingly disagree with someone you are judging them to be less rational than you, at least on that topic.

Extreme disagreement can come across as extreme disrespect. In line with what Yvain says in How to Not Lose an Argument, expressing extreme disagreement usually has the effect of putting one’s conversation partners on the defensive and is detrimental to their ability to Leave a Line of Retreat.

In a comment on my Existential Risk and Public Relations posting, Vladimir_Nesov said:

The level of certainty is not up for grabs. You are as confident as you happen to be, this can’t be changed. You can change the appearance, but not your actual level of confidence. And changing the apparent level of confidence is equivalent to lying.

I disagree with Vladimir_Nesov that changing one’s apparent level of confidence is equivalent to lying. There are many possible orders in which one can state one’s beliefs about the world. At least initially, presenting the factors that lead to one’s conclusion before stating the conclusion itself projects a lower level of confidence than stating the conclusion first and giving the supporting factors afterward. Altering one’s order of presentation in this fashion is not equivalent to lying, and moreover is actually conducive to rational discourse.

As Hugh Ristik said in response to Reason is not the only means of overcoming bias,

The goal of using these forms of influence and rhetoric is not to switch the person you are debating from mindlessly disagreeing with you to mindlessly agreeing with you.

[..]

One of the best ways to change the minds of people who disagree with you is to cultivate an intellectual friendship with them, where you demonstrate a willingness to consider their ideas and update your positions, if they in return demonstrate the willingness to do the same for you. Such a relationship rests on both reciprocity and liking. Not only do you make it easier for them to back down and agree with you, but you make it easier for yourself to back down and agree with them.

When you have set up a context for the discussion where one person backing down isn’t framed as admitting defeat, then it’s a lot easier to do. You can back down and state agreement with them as a way to signal open-mindedness and the willingness to compromise, in order to encourage those qualities also in your debate partner. Over time, both people’s positions will shift towards each other, though not necessarily symmetrically.

Even though this sort of discourse is full of influence, bias, and signaling, it actually promotes rational discussion between many people better than trying to act like Spock and expecting people you are debating to do the same.

I should have preceded my expression of very low confidence in the success of Eliezer’s project with a careful and systematic discussion of the factors that led me to my conclusion.

Aside from my failure to give proper background for my conclusion, I also failed to be sufficiently precise in stating my conclusion. One LW poster interpreted my reference to “Eliezer’s Friendly AI project” to be “the totality of Eliezer’s efforts to lead to the creation of a Friendly AI.” This is not the interpretation that I intended—in particular I was not including Eliezer’s networking and advocacy efforts (which may be positive and highly significant) under the umbrella of “Eliezer’s Friendly AI project.” By “Eliezer’s Friendly AI project” I meant “Eliezer’s attempt to unilaterally build a Friendly AI that will go FOOM in collaboration with a group of a dozen or fewer people.” I should have made a sharper claim to avoid the appearance of overconfidence.

Mistake #7: Failing to give sufficient context for my remarks on transparency and accountability

After I made my Transparency and Accountability posting, Yvain commented:

The bulk of this is about a vague impression that SIAI isn’t transparent and accountable. You gave one concrete example of something they could improve: having a list of their mistakes on their website. This isn’t a bad idea, but AFAIK GiveWell is about the only charity that currently does this, so it doesn’t seem like a specific failure on SIAI’s part not to include this. So why the feeling that they’re not transparent and accountable?

In my own mind it was clear what I meant by transparency and accountability, but my perspective is sufficiently exotic that it’s understandable that readers like Yvain would find my remarks puzzling or even incoherent. One aspect of the situation is that I share GiveWell’s skeptical Bayesian prior. In A conflict of Bayesian priors?, Holden Karnofsky says:

When you have no information one way or the other about a charity’s effectiveness, what should you assume by default?

Our default assumption, or prior, is that a charity—at least in one of the areas we’ve studied most, U.S. equality of opportunity or international aid—is falling far short of what it promises donors, and very likely failing to accomplish much of anything (or even doing harm). This doesn’t mean we think all charities are failing—just that, in the absence of strong evidence of impact, this is the appropriate starting-point assumption.

I share GiveWell’s skeptical prior when it comes to the areas that GiveWell has studied most, and feel that it’s justified to an even greater extent when applied to the cause of existential risk reduction, for the reason given by prase:

The problem is, if the cause is put so far in the future and based so much on speculations, there is no fixed point to look at when countering one’s own biases, and the risk of a gross overestimation of one’s agenda becomes huge.

Because my own attitude toward the viability of philanthropic endeavors in general is so different from that of many LW posters, my suggestion that SIAI is insufficiently transparent and accountable struck many of them as unfairly singling out SIAI. Statements originating from a skeptical Bayesian prior toward philanthropy are easily misinterpreted in this fashion. As Holden says:

This question might be at the core of our disagreements with many

[...]

Many others seem to have the opposite prior: they assume that a charity is doing great things unless it is proven not to be. These people are shocked that we hand out “0 out of 3 stars” for charities just because so little information is available about them; they feel the burden of proof is on us to show that a charity is not accomplishing good.

I should have been more explicit about my Bayesian prior before suggesting that SIAI should be more transparent and accountable. This would have made it clearer that I was not singling SIAI out. In the body of my original post I did attempt to allude to my skeptical Bayesian prior when I said:

I agree … that in evaluating charities which are not transparent and accountable, we should assume the worst.

but this statement was itself prone to misinterpretation. In particular, some LW posters interpreted it literally when I had intended “assume the worst” to be a shorthand figure of speech for “assume that things are considerably worse than they superficially appear to be.” Eliezer responded by saying:

Assuming that much of the worst isn’t rational

I totally agree with Eliezer that literally assuming the worst is not rational. I thought that my intended meaning would be clear (because the literal meaning is obviously false), but in light of contextual cues that made it appear as though I had an agenda against SIAI, my shorthand was prone to misinterpretation. I should have been precise about my prior assumption concerning charities that are not transparent and accountable, saying: “my prior assumption is that funding a given charity which is not transparent and accountable has slight positive expected value which is dwarfed by the positive expected value of funding the best transparent and accountable charities.”

As Eliezer suggested, I also should have made it clearer what I consider to be an appropriate level of transparency and accountability for an existential risk reduction charity. After I read Yvain’s comment referenced above, I made an attempt to explain what I had in mind by transparency and accountability in a pair of responses to him [1], [2], but I should have done this in the body of my main post. Moreover, I should have preempted his remark:

Anti-TB charities can measure how much less TB there is per dollar invested; SIAI can’t measure what percentage safer the world is, since the world-saving is still in basic research phase. You can’t measure the value of the Manhattan Project in “cities destroyed per year” while it’s still going on.

by citing Holden’s tentative list of questions for existential risk reduction charities.

Mistake #8: Mentioning developing world aid charities in juxtaposition with existential risk reduction

In the original version of my Transparency and Accountability posting I said:

I believe that at present GiveWell’s top ranked charities VillageReach and StopTB are better choices than SIAI, even for donors like utilitymonster who take astronomical waste seriously and believe in the ideas expressed in the cluster of blog posts linked under Shut Up and multiply.

In fact, I meant precisely what I said and no more, but as Hanson says in Against Disclaimers, people presume that:

If you say you prefer option A to option B, you also prefer A to any option C.

Because I did not add a disclaimer, Airedale understood me to be advocating in favor of VillageReach and StopTB over all other available options. Those who know me well know that over the past six months I’ve been in the process of grappling with the question of which forms of philanthropy are most effective from a utilitarian perspective and that I’ve been searching for a good donation opportunity which is more connected with the long-term future of humanity than VillageReach’s mission is. But it was unreasonable for me to assume that my readers would know where I was coming from.

In a comment on the first of my sequence of postings, orthonormal said:

whpearson mentioned this already, but if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute.

From the point of view of the typical LW poster, it would have been natural for me to address orthonormal’s remark in my brief discussion of the relative merits of charities for those who take astronomical waste seriously, but I did not do so. This led some [1], [2], [3] to question my seriousness of purpose and further contributed to the appearance that I had an agenda against SIAI. Shortly after I made my post, Carl Shulman commented:

The invocation of VillageReach in addressing those aggregative utilitarians concerned about astronomical waste here seems baffling to me.

After reading over his comment and others and thinking about them, I edited my post to avoid the appearance of favoring developing world aid over existential risk reduction, but the damage had already been done. Based on the original text of my posting and my track record of donating exclusively to VillageReach, many LW posters have persistently understood me to have an agenda in favor of developing world aid and against existential risk reduction charities.

The original phrasing of my post made sense from my own point of view. I believe supporting GiveWell’s recommended charities has high expected value because I believe that doing so strengthens a culture of effective philanthropy and that in the long run this will meaningfully lower existential risk. But my thinking here is highly non-obvious and it was unreasonable for me to expect that it would be evident to readers. It’s easy to forget that others can’t read our minds. I damaged my credibility by mentioning developing world aid charities in juxtaposition with existential risk reduction without offering careful explanation for why I was doing so.

My reference to developing world aid charities was also not effectiveness-oriented. As far as I know, most SIAI donors are not considering donating to developing world aid charities. As described under the heading “Mistake #3” above, I slipped up and let my desire for personal expression take precedence over actually getting things done. As I described in Missed Opportunities For Doing Well By Doing Good I personally had a great experience with discovering GiveWell and giving to VillageReach. Instead of carefully taking the time to get to know my audience, I simple-mindedly generalized from one example and erroneously assumed that my readers would be coming from a perspective similar to my own.


Conclusion:

My recent experience has given me heightened respect for the careful writing style of LW posters like Yvain and Carl Shulman. Writing in this style requires hard work and the ability to delay gratification, but the effort can be well worth it in the end. When one is writing for an audience that one doesn’t know very well, there’s a substantial risk of being misinterpreted because one’s readers do not have enough context to understand what one is driving at. This risk can be mitigated by taking the time to provide detailed background for one’s readers and by taking great care to avoid making claims (whether explicit or implicit) that are too strong. In principle one can always qualify one’s remarks later on, but it’s important to remember that, as komponisto said,

First impressions really do matter

and so it’s preferable to avoid being misunderstood the first time around. On the flip side, it’s important to remember that one may be misguided by one’s own first impressions. There are LW posters whom I initially misunderstood to have a hostile agenda against me but whom I now understand to be acting in good faith.

This was my first experience writing about a controversial subject in public, and it has been a substantive learning experience for me. I would like to thank the Less Wrong community for giving me this opportunity. I’m especially grateful to posters CarlShulman, Airedale, steven0461, Jordan, Komponisto, Yvain, orthonormal, Unknowns, Wei_Dai, Will_Newsome, Mitchell_Porter, rhollerith_dot_com, Eneasz, Jasen and PeerInfinity for their willingness to engage with me and help me understand why some of what I said and did was subject to misinterpretation. I look forward to incorporating the lessons that I’ve learned into my future communication practices.