The Toxoplasma of AGI Doom and Capabilities?

[Epistemic Status: I’m confident that the individual facts I lay out support the main claim, but I’m not fully confident it’s enough evidence to make a true or useful framework for understanding the world.]

I’m going to give seven pieces of evidence to support this claim[1]:

AI Doomerism helps accelerate AI capabilities, and AI capabilities in turn proliferate the AI Doomerism meme.

If these dynamics exist, they’d be not unlike the Toxoplasma of Rage. Here’s my evidence:

  1. Sam Altman claims Eliezer “has IMO done more to accelerate AGI than anyone else”.

  2. Technical talent who hear about AI doom might conclude that capabilities work is technically sweet, or a race, or inevitable, and decide to work on it for those reasons (doomer → capabilities transmission).

  3. Funders and executives who hear about AI doom might conclude that capabilities work is a huge opportunity, or disruptive, or inevitable, and decide to fund it for those reasons (doomer → capabilities transmission).

  4. Capabilities advances amplify the memetic relevance of doomerism (capabilities → doomer transmission).

  5. AI Doomerism says we should closely follow capabilities updates, discuss them, etc.

  6. Capabilities and doomerism gain and lose social status together—Eliezer Yudkowsky has been writing about doom for a long time, but got a Time article and TED talk only after significant capabilities advances.

  7. Memes generally benefit from conflict, and doomerism and capabilities can serve as adversaries for this purpose.

I’ve been trying to treat “AI doomerism” here as a separate meme from “AI safety”: respectively, something like “p(doom) is very large” versus “we need to invest heavily in AI safety work”, though the two are obviously related and often co-occur. One could no doubt make a similar case for AI safety and capabilities supporting each other, but I think the evidence listed above applies mostly to AI doom claims (if one uses Eliezer as a synecdoche for AI doomerism, which I think is reasonable).

I hope this post highlights something that is both true and useful. Please keep in mind that the truth values of “AI doom is in a toxoplasma relationship with AI capabilities” and “AI doom is right” are independent.

  1. ^

    This post was inspired by one striking line in Jan_Kulveit’s helpful Talking publicly about AI risk:

    - the AGI doom memeplex has, to some extent, a symbiotic relationship with the race toward AGI memeplex