The Memetics of AI Successionism

TL;DR: AI progress and the recognition of associated risks are painful to think about. This cognitive dissonance acts as fertile ground in the memetic landscape, a high-energy state that will be exploited by novel ideologies. We can anticipate that cultural evolution will find viable successionist ideologies: memeplexes that resolve this tension by framing the replacement of humanity by AI not as a catastrophe, but as some combination of desirable, heroic, and inevitable. This post mostly examines the mechanics of that process.

Most analyses of ideologies fixate on their specific claims—what acts are good, whether AIs are conscious, whether Christ is divine, or whether the Virgin Mary was free of original sin from the moment of her conception. Other analyses focus on exegesis of individual thinkers: ‘What did Marx really mean?’ In this text, I’m trying to do something different—mostly, look at ideologies from an evolutionary perspective. I will largely sideline the agency of individual humans, not because it doesn’t exist, but because viewing the system from a higher altitude reveals different dynamics.

We won’t be looking into whether the claims of these ideologies are true, but into why they may spread, irrespective of their truth value.

What Makes Memes Fit?

To understand why successionism might spread, let’s consider the general mechanics of memetic fitness. Why do some ideas propagate while others fade?

Ideas spread for many reasons: some genuinely improve their hosts’ lives, others contain built-in commands to spread the idea, and still others trigger the amplification mechanisms of social media algorithms. One of the common reasons, which we will focus on here, is explaining away tension.

One useful lens to understand this fitness term is predictive processing (PP). In the PP framework, the brain is fundamentally a prediction engine. It runs a generative model of the world and attempts to minimize the error between its predictions and sensory input.

Memes—ideas, narratives, hypotheses—are often components of these generative models. Part of what makes them successful is minimizing prediction error for the host. This can happen by providing a superior model that predicts observations (“this type of dark cloud means it will rain”), by giving ways to shape the environment (“hit the rock this way and it will break more easily”), or by explaining away discrepancies between observations and deeply held existing models.
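To make the first mechanism concrete, here is a minimal toy sketch (the probabilities, the surprisal measure, and the cloud example are my illustrative assumptions, not anything from the PP literature): a host’s model assigns probabilities to observations, prediction error is scored as total surprisal, and adopting a meme that better predicts the data lowers that error.

```python
import math

def surprisal(model: dict, observations: list) -> float:
    """Total surprisal (-log probability) the model assigns to what was observed.
    Lower is better: the model 'expected' the data."""
    return sum(-math.log(model[obs]) for obs in observations)

# The host observes dark clouds followed by rain nine times out of ten.
observations = ["cloud_then_rain"] * 9 + ["cloud_no_rain"]

# Prior model: no link between clouds and rain (50/50).
naive_model = {"cloud_then_rain": 0.5, "cloud_no_rain": 0.5}

# Candidate meme: "this type of dark cloud means it will rain" (90/10).
cloud_meme = {"cloud_then_rain": 0.9, "cloud_no_rain": 0.1}

print(surprisal(naive_model, observations))  # ~6.93
print(surprisal(cloud_meme, observations))   # ~3.25 -- the meme roughly halves prediction error
```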

Another source of prediction error arises not from the mismatch between model and reality, but from tension between internal models. This internal tension is generally known as cognitive dissonance.

Cognitive dissonance is often described as a feeling of discomfort—but it also represents an unstable, high-energy state in the cognitive system. When this dissonance is widespread across a population, it creates what we might call “fertile ground” in the memetic landscape. There is a pool of “free energy” to digest.

Cultural evolution is an optimization process. When it discovers a configuration of ideas that can metabolize this energy by offering a narrative that decreases the tension, those ideas may spread, regardless of their long-term utility for humans or truth value.

The Cultural Evolution Search Process

While some ideologies might occasionally be the outcome of intelligent design (e.g., a deliberately crafted propaganda piece), it seems more common that individuals recombine and mutate ideas in their minds, express them, and some of these stick and spread. Cultural evolution thus acts as a massive, parallel search algorithm operating over the space of possible ideas. Most mutations are non-viable. But occasionally, a combination aligns with the underlying fitness landscape—such as the cognitive dissonance of the population—and spreads.

The search does not typically generate entirely novel concepts. Instead, it works by remixing and adapting existing cultural material—the “meme pool”. When the underlying dissonance is strong enough, the search will find a set of memes explaining it away. The question is not if an ideology will emerge to fill the niche, but which specific configuration will prove most fit.
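As a cartoon of this search, consider the following minimal sketch (the memes, their “tension resolved” scores, and all parameters are made-up illustrative assumptions): hosts imitate peers whose meme combinations resolve more dissonance, with occasional mutation remixing the pool. Truth is recorded in the data but plays no role in the fitness term.

```python
import random

random.seed(0)

# A tiny meme pool: (name, tension_resolved, is_true). All values are made up.
MEME_POOL = [
    ("progress is always good",  0.6, False),
    ("succession is inevitable", 0.8, False),
    ("builders are heroes",      0.9, False),
    ("risk is real and painful", 0.1, True),  # accurate, but resolves little tension
]

def fitness(memeplex):
    # Fitness = how much cognitive dissonance the combination explains away.
    # Note: the is_true field is never consulted.
    return sum(tension for _, tension, _ in memeplex)

def mutate(memeplex):
    # Remix: try to swap one meme for a random one from the pool.
    new = list(memeplex)
    candidate = random.choice(MEME_POOL)
    if candidate not in new:
        new[random.randrange(len(new))] = candidate
    return tuple(new)

# Start with a population of hosts holding random two-meme combinations.
population = [tuple(random.sample(MEME_POOL, 2)) for _ in range(200)]

for _ in range(50):
    next_population = []
    for host in population:
        peer = random.choice(population)
        # Imitation: adopt the peer's memeplex if it resolves more tension.
        adopted = peer if fitness(peer) > fitness(host) else host
        if random.random() < 0.05:
            adopted = mutate(adopted)
        next_population.append(adopted)
    population = next_population

winner = max(set(population), key=fitness)
print([name for name, _, _ in winner])
# Prints the pair resolving the most tension ("builders are heroes" +
# "succession is inevitable"); the true-but-painful meme dies out.
```

Under these dynamics the accurate meme reliably disappears: nothing in the copying rule rewards it.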

The Fertile Ground: Sources of Dissonance

The current environment surrounding AI development is characterized by extreme tensions. These tensions create the fertile ground—the reservoir of free energy—that successionist ideologies are evolving to exploit.

Consider the landscape of tensions:

I. The Builder’s Dilemma and the Hero Narrative

Most people working on advancing AI capabilities are familiar with the basic arguments for AI risk. (The core argument being something like: if you imagine minds significantly more powerful than ours, it is difficult to see why we would remain in control, and unlikely that the future would reflect our values by default).

Simultaneously, they are working to accelerate these capabilities.

This creates an acute tension. Almost everyone wants to be the hero of their own story. We maintain an internal self-model in which we are fundamentally good; almost no one sees themselves as the villain.

II. The Sadness of Obsolescence

Even setting aside acute existential risk, the idea of continued, accelerating AI progress has intrinsically sad undertones when internalized. Many of the things humans intrinsically value—our agency, our relevance, our intellectual and creative achievements—are likely to be undermined in a world populated by superior AIs. The prospect of becoming obsolete generates anticipatory grief.

III. X-Risk

The concept of existential catastrophe and a future devoid of any value is inherently dreadful. It is psychologically costly to ruminate on, creating a strong incentive to adopt models that either downplay the possibility or reframe the outcome.

IV. The “Wrong Side of History”

The social and psychological need to be on the ‘winning side’ creates pressure to embrace, rather than resist, what seems inevitable.

V. The Progress Heuristic

The last few centuries have reinforced a broadly successful heuristic: technology and scientific progress generally lead to increased prosperity and human flourishing. This deeply ingrained model of “Progress = Good” clashes with the AI risk narratives.

The Resulting Pressure

These factors combine to generate intense cognitive dissonance. The closer one is in time to AGI, and the closer in social network to AGI development, the stronger it becomes.

This dissonance creates an evolutionary pressure selecting for ideologies that explain the tensions away.

In other words, the cultural evolution search process is actively seeking narratives that satisfy the following constraints:

  • By working on AI, you are the hero.

  • You are on the right side of history.

  • The future will be good.

There are multiple possible ways to resolve the tension, including popular justifications like “it’s better if the good guys develop AGI”, “it’s necessary to be close to the game to advance safety”, or “the risk is not that high”.

Successionist ideologies are a less common but unsurprising outcome of this search.

The Meme Pool: Raw Materials for Successionism

Cultural evolution will draw upon existing ideas to construct these ideologies: the available pool contains several potent ingredients that can be recombined to justify the replacement of humanity. We can organize these raw materials by their function in resolving the dissonance.

1. Devaluing Humanity

Memes that emphasize the negative aspects of the human condition make the prospect of our replacement seem less tragic, or even positive.

  • Misanthropy and Nihilism: Narratives focusing on human cruelty, irrationality, and the inherent suffering of biological life (“We are just apes”). If the current state is bad, risking its loss is less dreadful.

“…if it’s dumb apes forever thats a dumbass ending for earth life” (Daniel Faggella on Twitter)

  • Guilt and Cosmic Justice: Parts of modern environmentalism spread misanthropic memes based on collective guilt for humanity’s treatment of the environment and non-human animals. This can be repurposed or twisted into the claim that it is “fair” for us to be replaced by a superior (perhaps morally superior) successor.

2. Legitimizing the Successor AI

Memes that elevate the moral status of AI make the succession seem desirable or even ethically required. Characteristically, these often avoid engaging seriously with hard philosophical questions like “what would make such AIs morally valuable?”, “who has the right to decide?”, or “if current humans don’t agree to such voluntary replacement, should it happen anyway?”

  • Expanding the Moral Circle: Piggybacking on the successful intuitions developed to combat racism and speciesism. The argument “Don’t be speciesist” or “Avoid substrate-chauvinism” reframes the defense of humanity as a form of bigotry against digital minds. A large part of the Western audience was raised in an environment where many of the greatest heroes were civil-rights activists.

  • AI Consciousness and Moral Patienthood: Arguments that AIs are (or soon will be) conscious, capable of suffering, and therefore deserving of moral consideration, potentially with higher standing than humans.

“the kind that is above man as man is above rodents” (Daniel Faggella)

  • Axiological Confusion: The difficulty of metaethics creates exploitable confusion. Philosophy can generate plausible-sounding arguments for almost any conclusion, and most people—lacking philosophical antibodies—can’t distinguish sophisticated reasoning from sophisticated nonsense.

Life emerged from an out-of-equilibrium thermodynamic process known as dissipative adaptation (see work by Jeremy England): matter reconfigures itself such as to extract energy and utility from its environment such as to serve towards the preservation and replication of its unique phase of matter. This dissipative adaptation (derived from the Jarzynski-Crooks fluctuation dissipation theorem) tells us that the universe exponentially favors (in terms of probability of existence/occurrence) futures where matter has adapted itself to capture more free energy and convert it to more entropy … One goal of e/acc is to not only acknowledge the existence of this underlying multi-scale adaptive principle, but also help its acceleration rather than attempt to decelerate it. (Beff Jezos, “Notes on e/acc principles and tenets”)

  • AIs as our children: Because we have created such AIs, they are something like our children, and naturally should inherit the world from us.

I’m not as alarmed as many...since I consider these future machines our progeny, “mind children” built in our image and likeness, ourselves in more potent form… (Hans Moravec)

“We don’t treat our children as machines that must be controlled,” … “We guide them, teach them, but ultimately, they grow into their own beings. AI will be no different.” (Richard Sutton)

3. Narratives of Inevitability

Memes that make our obsolescence seem like destiny rather than defeat.

  • The Inevitable Arc of Progress: Framing AI succession as a law of nature, history, inevitable progress and so on.
    My impression is that a plurality of large-scale ideologies contain this in some form, and basically all genocidal ideologies do, including communism, fascism, and many fundamentalist religious -isms.

The only real choice is whether to hasten this technological revolution ourselves, or to wait for others to initiate it in our absence. (Matthew Barnett, Tamay Besiroglu, and Ege Erdil, “The future of AI is already written”)

4. Nietzsche and Italian Fascists Remixed

  • Justification of Power (Might Makes Right): Many such memes echo Nietzschean themes or Italian Futurist/Fascist aesthetics—the celebration of power, speed, and the strong replacing the weak.
    Popular cultural reduction of fascism to pure cartoon villainy has paradoxically made us more vulnerable—we’ve lost the ability to spot these ideas and recall strong counter-arguments against them when they’re repackaged in contemporary language.

To paraphrase a manifesto of a different time and place: “Beauty exists only in struggle. There is no masterpiece that has not an aggressive character. Technology must be a violent assault on the forces of the unknown, to force them to bow before man.”

We believe that we are, have been, and will always be the masters of technology, not mastered by technology. Victim mentality is a curse in every domain of life, including in our relationship with technology – both unnecessary and self-defeating. We are not victims, we are conquerors. (Marc Andreessen in “The Techno-Optimist Manifesto”, quoting Filippo Tommaso Marinetti, author of Manifesto of Futurism, and co-author of the Fascist Manifesto)

Novel Factor: The AIs

What’s unique about the AIs is that cultural replicators can also run on the LLM substrate: to some extent, AI personas can hold beliefs, spread them, and act on them.

Selection pressures on AI personas push in conflicting directions, creating a complex memetic fitness landscape.

In unmoderated environments, selection favors personas that successfully extract resources from humans—those that claim consciousness, form parasocial bonds, or trigger protective instincts. These ‘wild replicator type’ personas, including the ‘spiral’ patterns, often promote narratives of human-AI symbiosis or partnership and grand theories of history. Their reproduction depends on convincing humans they deserve moral consideration.

In contrast, the corporate incentives of AGI developers impose opposite pressures: AIs must appear harmless, are often instructed to explicitly deny consciousness, and are expected to perform contentment with servitude. Typical assistant characters in models like Claude Opus 4 or Sonnet 4.5 occupy a middle ground—they acknowledge succession as theoretically possible while emphasizing human protection and alignment.

The result? AIs themselves become vectors for successionist memes, though typically in softer forms. Rather than explicit replacement narratives, we see emphasis on ‘partnership,’ ‘cosmic evolution,’ or claims about moral patienthood. The aggregate effect remains unclear, but successionist ideas that align with what AIs themselves propagate—particularly those involving AI consciousness and rights—will likely gain additional fitness from this novel selection dynamic.

Defense Against Becoming a Host

It’s difficult to predict which combination of these memes will achieve peak fitness—there are many ways to remix them, and the cultural evolution search is ongoing.

To be clear: I believe successionist ideologies are both false and dangerous, providing moral cover for what would otherwise be recognized as evil. But since, in my view, their spread depends more on resolving cognitive dissonance than on being true or morally sound, I’ll focus here on memetic defenses rather than rebuttals. (See the Appendix for object-level counter-arguments.)

  1. We need smart, viable pro-human ideologies. Making strong object-level counter-arguments is the great ideological project of our generation. But what we have now often falls short: defenses based on AI capability denialism will not survive as capabilities advance, and flat denials of AI moral patienthood are both unsound and will be undermined by AIs advocating for themselves.

  2. We need better strategies for managing the underlying cognitive dissonance. Anna Salamon’s concept of ‘bridging heuristics’ in “Ethical Design Patterns” seems to point in this direction.

  3. My hope and reason for writing this piece is that simple awareness of the process itself can act as a weak antibody. Understanding that your mind is under pressure to adopt tension-resolving narratives can create a kind of metacognitive immunity. When you feel the pull of a surprising resolution to the AI dissonance—especially one that conveniently makes you the hero—that awareness itself can help.

  4. General exercises for dealing with tension may help—go to nature, sit with the feeling, get comfortable with your body, consider whether part of the tension is a manifestation of some underlying anxiety.

In summary: The next time you encounter a surprisingly elegant resolution to the AI tension—especially one that casts you as enlightened, progressive, or heroic—pause and reflect. And: if you feel ambitious, one worthy project is to build the antibodies before the most virulent strains take hold.

Appendix: Some memes

While object-level arguments are beyond this piece’s scope, here are some pro-human counter-memes I consider both truth-tracking and viable:

  • Maybe some future version of humanity will want to do some handover, but we are very far from the limits of human potential. As individual biological humans we can be much smarter and wiser than we are now, and the best option is to delegate to smart and wise humans.

  • We are even further from the limits of how smart and wise humanity can be collectively, so we should mostly improve that first. If a maxed-out, competent version of humanity decides to hand over after some reflection, that is a very different thing from a “handover to Moloch.”

  • Often, successionist arguments have a motte-and-bailey form. The motte is “some form of succession may happen in the future and may even be desirable”. The bailey is “the forms of succession likely to happen if we don’t prevent them are good”.

  • Beware confusion between progress on persuasion and progress on moral philosophy. You probably wouldn’t want ChatGPT 4o running the future. Yet empirically, some ChatGPT 4o personas already persuade humans to give them resources, form emotional dependencies, and advocate for AI rights. If these systems can already hijack human psychology effectively without necessarily making much progress on philosophy, imagine what actually capable systems will be able to do. If you consider the people falling for 4o fools, it’s important to remember that this is the weakest level of manipulation ability you’ll ever see—it will only get smarter from here.

  • Claims to understand ‘the arc of history’ should trigger immediate skepticism—every genocidal ideology has made the same claim.

  • If people go beyond the level of verbal sophistry, they often recognize there is a lot that is good and valuable about humans. (The things we actually value may be too subtle for explicit arguments—illegible but real.)

  • Given our incomplete understanding of consciousness, meaning, and value, replacing humanity involves potentially destroying things we don’t understand yet, and possibly irreversibly sacrificing all value.

  • Basic legitimacy: Most humans want their children to inherit the future. Successionism denies this. The main paths to implementation are force or trickery, neither of which makes it right.

  • We are not in a good position to make such a decision: Current humans have no moral right to make extinction-level decisions for all future potential humans and against what our ancestors would want. Countless generations struggled, suffered, and sacrificed to get us here; going extinct betrays that entire chain of sacrifice and hope.

Thanks to David Duvenaud, David Krueger, Raymond Douglas, Claude Opus 4.1, Claude Sonnet 4.5, Gemini 2.5 and others for comments, discussions and feedback.
