Some reasons to not say “Doomer”

“Doomer” has become a common term for people with pessimistic views about outcomes from AI. I claim this is not a helpful term on net, and will generally cause people to think less clearly.

Reification of identity + making things tribal

I finally realized today why politics and religion yield such uniquely useless discussions...

I think what religion and politics have in common is that they become part of people’s identity, and people can never have a fruitful argument about something that’s part of their identity. By definition they’re partisan...

More generally, you can have a fruitful discussion about a topic only if it doesn’t engage the identities of any of the participants. What makes politics and religion such minefields is that they engage so many people’s identities. But you could in principle have a useful conversation about them with some people. And there are other topics that might seem harmless, like the relative merits of Ford and Chevy pickup trucks, that you couldn’t safely talk about with others.

The most intriguing thing about this theory, if it’s right, is that it explains not merely which kinds of discussions to avoid, but how to have better ideas. If people can’t think clearly about anything that has become part of their identity, then all other things being equal, the best plan is to let as few things into your identity as possible.

- Paul Graham in Keep Your Identity Small[1]

I think a big risk of the “Doomer” label is that it moves something from “given the arguments and evidence I have, I believe X” to an essential, deeper property (or commitment) of a person. It reifies it as an identity. Correspondingly, you then get people who are “not-Doomers”, and more recently I’ve heard the term “Foomer” too.

Because people’s beliefs are quite correlated with those of the people immediately around them, the more pessimistic and less pessimistic about AI tend to form clusters, making it easy to point at those clusters and turn things tribal and political.

I’m in this tribe, they’re in that tribe. My tribe is good, that tribe is bad. I think this makes us stupider in various ways:

  • We now start talking about people rather than object-level beliefs/evidence/reason.

    • It’s much easier to dismiss people than arguments.

  • We put up social barriers to changing our minds. People don’t like to be the odd one out among their friends, and if your friends identify as being/not being Doomers (and perhaps have negative opinions about the other group), there will be psychological resistance to updating.

    • I think this is already the case with P(Doom): there’s social pressure to conform. I regret to say that I’ve reacted with surprise when someone expressed a P(Doom) different from what I expected, in a way that exerted social pressure. I’m trying to do less of that, as I think the evidence/reason is such that reasonable people can reasonably disagree a lot.

“Doomer” is an externally applied label and is often used pejoratively

Judging from the Wikipedia page for Doomer, it’s possible the term was first used without any mean-spirited connotation. That said, I think it’s currently very reminiscent of “Boomer”, a term that’s definitely negatively valenced in the memespace[2] these days:

“OK boomer” or “okay boomer” is a catchphrase and internet meme that has been used by Gen-X, Millennials and Gen Z to dismiss or mock attitudes typically associated with baby boomers – people born in the two decades following World War II. – Wikipedia

Not exactly surprising, but on Twitter you’ll see a lot of this usage.


Also, my sense is that people who meet the criteria for being Doomers rarely describe themselves as such; far more often, the label is applied by others from the outside. Though this could be because when you’re hanging out among others with the same beliefs, you rarely need a label to point that out.

In general though, I think one should be cautious about applying a label that people didn’t choose for themselves and mostly haven’t adopted. In many other domains, that would be deeply frowned upon as pretty hostile.

Making people feel at all dismissed/ridiculed is also not going to foster healthy discourse.

To be clear! I’m not going to claim that everyone using the term means it in a negative way. Especially on Twitter where people are trying to be concise, I see the case for using a term shorter than “person with high P(Doom) who is worried about AI”. I’m not sure what would be better if you need a concise term, but “AI pessimist” feels more plainly descriptive to me.

Still, I think it’s better to avoid a term that some people use pejoratively even if you don’t mean it that way.

Reappropriation?

Sometimes a group reappropriates a label and it’s just fine. It’s possible that people started calling Rationalists “Rats” with a negative connotation (and possible that some did it just to save syllables). That term isn’t universally used by the community, but I don’t think it carries much negative valence currently.

The same could happen with “Doomer”: even if it came from people looking for a slur, it could get neutralized and adopted as a convenient short label.

However, I think this would still be reification of identities, which, as above, I don’t think helps people think clearly. I’m relatively more okay with it for the Rationalist identity: “being a Rationalist” really is a clear group membership that doesn’t change lightly, whereas one’s predictions about AI ideally should change lightly with the evidence. Given the current state of evidence around AI, I think more lightness and less identity is warranted.

“Doomer” is ambiguous

When I started writing this post, I thought Doomer meant someone with a P(Doom) above 80% or the like.

When I polled a few people at a recent party, it became clear that they interpreted it moderately differently. Various definitions offered for “AI Doomer”:

  • Someone whose P(Doom) is around 80% or higher

  • Somebody who is generally worried about AI and thinks it’s a big problem worth worrying about, even if their P(Doom) is as low as 5%

  • Someone in the particular MIRI cluster of researchers; e.g., Eliezer Yudkowsky, Nate Soares, and Evan Hubinger are Doomers.

Explicit definitions or “AI pessimist”/“AI concerned” are better alternatives

My preference with this post is more to surface reasons (explain) than to make a call to action (persuade). Before hearing feedback on this, I’m not confident enough to say “hey everyone, we should all do Y instead of X”, but I do think these are good reasons against using the term “Doomer”.

I think that being descriptive is good where possible, e.g., “people who assign 80+% to Doom” or “people who think AI is worth worrying about”, depending on what you actually mean in context. These are longer phrases, but that might be a feature, not a bug: a longer phrase is a tax on talking about people, nudging us toward talking about ideas, arguments, and evidence instead.

If you must use a shorter phrase, I think a more neutral descriptive term is better: perhaps “AI pessimist” for someone who thinks outcomes are quite likely to be bad, and “AI concerned” for someone who thinks they could be bad enough to worry about.

I invite pushback though. There could be considerations and second-order effects I’m missing here.

  1. ^

    Also see classic LessWrong posts:

    - Use Your Identity Carefully
    - Strategic choice of identity

    and others collected in the Identity tag

  2. ^

    In fact, some have defined a whole family of negative-connotation “-oomer” labels. See https://knowyourmeme.com/memes/oomer-wojaks
