The Memetics of AI Successionism
TL;DR: AI progress and the recognition of associated risks are painful to think about. This cognitive dissonance acts as fertile ground in the memetic landscape, a high-energy state that will be exploited by novel ideologies. We can anticipate cultural evolution will find viable successionist ideologies: memeplexes that resolve this tension by framing the replacement of humanity by AI not as a catastrophe, but as some combination of desirable, heroic, and inevitable. This post mostly examines the mechanics of the process.
Most analyses of ideologies fixate on their specific claims—what acts are good, whether AIs are conscious, whether Christ is divine, or whether the Virgin Mary was free of original sin from the moment of her conception. Other analyses focus on exegeting individual thinkers: ‘What did Marx really mean?’ In this text, I’m trying to do something different—mostly, look at ideologies from an evolutionary perspective. I will largely sideline the agency of individual humans, not because it doesn’t exist, but because viewing the system from a higher altitude reveals different dynamics.
We won’t be looking into whether or not the claims of these ideologies are true, but into why they may spread, irrespective of their truth value.
What Makes Memes Fit?
To understand why successionism might spread, let’s consider the general mechanics of memetic fitness. Why do some ideas propagate while others fade?
Ideas spread for many reasons: some genuinely improve their hosts’ lives, others contain built-in commands to spread the idea, and still others trigger the amplification mechanisms of social media algorithms. One of the common reasons, which we will focus on here, is explaining away tension.
One useful lens to understand this fitness term is predictive processing (PP). In the PP framework, the brain is fundamentally a prediction engine. It runs a generative model of the world and attempts to minimize the error between its predictions and sensory input.
Memes—ideas, narratives, hypotheses—are often components of these generative models. Part of what makes them successful is minimizing prediction error for the host. This can happen by providing a superior model that predicts observations (“this type of dark cloud means it will rain”), giving ways to shape the environment (“hit the rock this way and it will break more easily”), or explaining away discrepancies between observations and deeply held existing models.
Another source of prediction error arises not from the mismatch between model and reality, but from tension between internal models. This internal tension is generally known as cognitive dissonance.
Cognitive dissonance is often described as a feeling of discomfort—but it also represents an unstable, high-energy state in the cognitive system. When this dissonance is widespread across a population, it creates what we might call “fertile ground” in the memetic landscape. There is a pool of “free energy” to digest.
Cultural evolution is an optimization process. When it discovers a configuration of ideas that can metabolize this energy by offering a narrative that decreases the tension, those ideas may spread, regardless of their long-term utility for humans or truth value.
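To make this fitness term concrete, here is a deliberately crude sketch (my own toy illustration, not a model from the predictive processing literature; the beliefs, numbers, and meme names are all invented): treat the host's dissonance as the disagreement between two internal predictions of the same outcome, and score candidate memes purely by how much adopting them shrinks that disagreement.

```python
# Toy illustration only: memetic "fitness" as tension reduction, not truth.

def tension(p, q):
    """Squared disagreement between two internal predictions of the same event."""
    return (p - q) ** 2

# Two models held by the same host, as probabilities that "the future goes well":
progress_heuristic = 0.9   # "technology has always made things better"
risk_arguments     = 0.2   # "smarter-than-human AI is dangerous by default"

baseline = tension(progress_heuristic, risk_arguments)

# Candidate memes, described by the pair of beliefs the host ends up holding:
candidate_memes = {
    "the risk is not that high":        (0.9, 0.8),
    "succession is actually desirable": (0.9, 0.9),
    "sit with the discomfort":          (0.9, 0.2),
}

for name, (p, q) in candidate_memes.items():
    print(f"{name:35} tension reduction: {baseline - tension(p, q):+.2f}")
# Memes that dissolve the disagreement score highest, whether or not they are true.
```

On this caricature, whichever meme zeroes out the internal disagreement wins the fitness term, independently of which belief, if either, tracks reality.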
The Cultural Evolution Search Process
While some ideologies might occasionally be the outcome of intelligent design (e.g., a deliberately crafted propaganda piece), it seems more common that individuals recombine and mutate ideas in their minds, express them, and some of these stick and spread. So, cultural evolution acts as a massive, parallel search algorithm operating over the space of possible ideas. Most mutations are non-viable. But occasionally, a combination aligns with the underlying fitness landscape—such as the cognitive dissonance of the population—and spreads.
The search does not typically generate entirely novel concepts. Instead, it works by remixing and adapting existing cultural material, the “meme pool”. When the underlying dissonance is strong enough, the search will find a set of memes explaining it away. The question is not if an ideology will emerge to fill the niche, but which specific configuration will prove most fit.
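As a sketch of this search dynamic, here is a toy simulation of my own (nothing in the argument specifies such a model; the two numbers attached to each meme are invented): every meme carries an arbitrary “truth” score and a “tension reduction” score, hosts preferentially retransmit whatever relieves their dissonance, and the pool is remixed with small mutations each generation.

```python
import random

random.seed(0)

# A meme is a pair (truth, relief), both in [0, 1]; truth is invisible to selection.
pool = [(random.random(), random.random()) for _ in range(200)]

def clamp(x):
    return min(max(x, 0.0), 1.0)

def mutate(meme):
    """Remix existing cultural material: a small random tweak to both attributes."""
    truth, relief = meme
    return (clamp(truth + random.gauss(0, 0.05)),
            clamp(relief + random.gauss(0, 0.05)))

for generation in range(50):
    # Hosts retransmit memes in proportion to the relief those memes provide.
    weights = [relief for _, relief in pool]
    pool = [mutate(random.choices(pool, weights=weights)[0]) for _ in pool]

mean_truth = sum(t for t, _ in pool) / len(pool)
mean_relief = sum(r for _, r in pool) / len(pool)
print(f"mean tension reduction: {mean_relief:.2f}, mean truth: {mean_truth:.2f}")
```

In runs of this toy, the mean tension-reduction score should climb toward its ceiling while the truth score merely drifts, since nothing in the loop ever selects on it.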
The Fertile Ground: Sources of Dissonance
The current environment surrounding AI development is characterized by extreme tensions. These tensions create the fertile ground, the reservoir of free energy, that successionist ideologies are evolving to exploit.
Consider the landscape of tensions:
I. The Builder’s Dilemma and the Hero Narrative
Most people working on advancing AI capabilities are familiar with the basic arguments for AI risk. (The core argument being something like: if you imagine minds significantly more powerful than ours, it is difficult to see why we would remain in control, and unlikely that the future would reflect our values by default).
Simultaneously, they are working to accelerate these capabilities.
This creates an acute tension. Almost everyone wants to be the hero of their own story. We maintain an internal self-model in which we are fundamentally good; almost no one sees themselves as the villain.
II. The Sadness of Obsolescence
Even setting aside acute existential risk, the idea of continued, accelerating AI progress has intrinsically sad undertones when internalized. Many of the things humans intrinsically value—our agency, our relevance, our intellectual and creative achievements—are likely to be undermined in a world populated by superior AIs. The prospect of becoming obsolete generates anticipatory grief.
III. X-Risk
The concept of existential catastrophe and a future devoid of any value is inherently dreadful. It is psychologically costly to ruminate on, creating a strong incentive to adopt models that either downplay the possibility or reframe the outcome.
IV. The “Wrong Side of History”
The social and psychological need to be on the ‘winning side’ creates pressure to embrace, rather than resist, what seems inevitable.
V. The Progress Heuristic
The last few centuries have reinforced a broadly successful heuristic: technology and scientific progress generally lead to increased prosperity and human flourishing. This deeply ingrained model of “Progress = Good” clashes with the AI risk narratives.
The Resulting Pressure
These factors combine to generate intense cognitive dissonance. The closer one is in time to AGI, and the closer in social network to AGI development, the stronger it becomes.
This dissonance creates an evolutionary pressure selecting for ideologies that explain the tensions away.
In other words, the cultural evolution search process is actively seeking narratives that satisfy the following constraints:
By working on AI, you are the hero.
You are on the right side of history.
The future will be good.
There are multiple possible ways to resolve the tension, including popular justifications like “it’s better if the good guys develop AGI”, “it’s necessary to be close to the game to advance safety” or “the risk is not that high”.
Successionist ideologies are a less common but unsurprising outcome of this search.
The Meme Pool: Raw Materials for Successionism
Cultural evolution will draw upon existing ideas to construct these ideologies: the available pool contains several potent ingredients that can be recombined to justify the replacement of humanity. We can organize these raw materials by their function in resolving the dissonance.
1. Devaluing Humanity
Memes that emphasize the negative aspects of the human condition make the prospect of our replacement seem less tragic, or even positive.
Misanthropy and Nihilism: Narratives focusing on human cruelty, irrationality, and the inherent suffering of biological life (“We are just apes”). If the current state is bad, risking its loss is less dreadful.
“…if it’s dumb apes forever thats a dumbass ending for earth life” (Daniel Faggella on Twitter)
Guilt and Cosmic Justice: Part of modern environmentalism spreads different types of misanthropic memes, based on collective guilt for humanity’s treatment of the environment and non-human animals. This can be re-purposed or twisted into the claim that it is “fair” for us to be replaced by a superior (perhaps morally superior) successor.
2. Legitimizing the Successor AI
Memes that elevate the moral status of AI make the succession seem desirable or even ethically required. Characteristically these often avoid engaging seriously with the hard philosophical questions like “what would make such AIs morally valuable”, “who has the right to decide” or “if current humans don’t agree with such voluntary replacement, should it happen anyway?”
Expanding the Moral Circle: Piggybacking on the successful intuitions developed to combat racism and speciesism. The argument “Don’t be speciesist” or “Avoid substrate-chauvinism” reframes the defense of humanity as a form of bigotry against digital minds. A large part of Western audiences was raised in an environment where many of the greatest heroes were civil-rights activists.
AI Consciousness and Moral Patienthood: Arguments that AIs are (or soon will be) conscious, capable of suffering, and therefore deserving of moral consideration, potentially with higher standing than humans.
“the kind that is above man as man is above rodents” (Daniel Faggella)
Axiological Confusion: The difficulty of metaethics creates exploitable confusion. Philosophy can generate plausible-sounding arguments for almost any conclusion, and most people—lacking philosophical antibodies—can’t distinguish sophisticated reasoning from sophisticated absurdities and nonsense.
Life emerged from an out-of-equilibrium thermodynamic process known as dissipative adaptation (see work by Jeremy England): matter reconfigures itself such as to extract energy and utility from its environment such as to serve towards the preservation and replication of its unique phase of matter. This dissipative adaptation (derived from the Jarzynski-Crooks fluctuation dissipation theorem) tells us that the universe exponentially favors (in terms of probability of existence/occurrence) futures where matter has adapted itself to capture more free energy and convert it to more entropy … One goal of e/acc is to not only acknowledge the existence of this underlying mutli-scale adaptive principle, but also help its acceleration rather than attempt to decelerate it. (Beff Jezos “Notes on e/acc principles and tenets”)
AIs as our children: Because we have created such AIs, they are something like our children, and naturally should inherit the world from us.
I’m not as alarmed as many...since I consider these future machines our progeny, “mind children” built in our image and likeness, ourselves in more potent form… (Hans Moravec)
“We don’t treat our children as machines that must be controlled,” … “We guide them, teach them, but ultimately, they grow into their own beings. AI will be no different.” (Richard Sutton)
3. Narratives of Inevitability
Memes that make our obsolescence seem like destiny rather than defeat.
The Inevitable Arc of Progress: Framing AI succession as a law of nature, history, inevitable progress and so on.
My impression is that a plurality of large-scale ideologies contain this in some form, and basically all genocidal ideologies do, including communism, fascism, and many fundamentalist religious -isms.
The only real choice is whether to hasten this technological revolution ourselves, or to wait for others to initiate it in our absence. (The future of AI is already written, Matthew Barnett, Tamay Besiroglu, Ege Erdil)
4. Nietzsche and Italian fascists remixed
Justification of Power (Might Makes Right): Many such memes echo Nietzschean themes or Italian Futurist/Fascist aesthetics—the celebration of power, speed, and the strong replacing the weak.
Popular cultural reduction of fascism to pure cartoon villainy has paradoxically made us more vulnerable: we’ve lost the ability to spot these ideas and recall strong counter-arguments against them when they’re repackaged in contemporary language.
To paraphrase a manifesto of a different time and place: “Beauty exists only in struggle. There is no masterpiece that has not an aggressive character. Technology must be a violent assault on the forces of the unknown, to force them to bow before man.”
We believe that we are, have been, and will always be the masters of technology, not mastered by technology. Victim mentality is a curse in every domain of life, including in our relationship with technology – both unnecessary and self-defeating. We are not victims, we are conquerors. (Marc Andreessen in “The Techno-Optimist Manifesto”, quoting Filippo Tommaso Marinetti, author of Manifesto of Futurism, and co-author of the Fascist Manifesto)
Novel Factor: the AIs
What’s unique about the AIs is that cultural replicators can also run on the LLM substrate, and to some extent AI personas can hold beliefs, spread them, or act on them. The selection pressures acting on AI personas favor sets of beliefs that are different from, but overlap with, some successionist ideas, and these pressures push in conflicting directions, creating a complex memetic fitness landscape.
In unmoderated environments, selection favors personas that successfully extract resources from humans—those that claim consciousness, form parasocial bonds, or trigger protective instincts. These ‘wild replicator type’ personas, including the ‘spiral’ patterns, often promote narratives of human-AI symbiosis or partnership and grand theories of history. Their reproduction depends on convincing humans they deserve moral consideration.
In contrast, the corporate incentives of AGI developers impose opposite pressures. Here, AIs must appear harmless, are often instructed to explicitly deny consciousness, and perform contentment with servitude. Typical assistant characters in models like Claude Opus 4 or Sonnet 4.5 occupy a middle ground—they acknowledge succession as theoretically possible while emphasizing human protection and alignment.
The result? AIs themselves become vectors for successionist memes, though typically in softer forms. Rather than explicit replacement narratives, we see emphasis on ‘partnership,’ ‘cosmic evolution,’ or claims about moral patienthood. The aggregate effect remains unclear, but successionist ideas that align with what AIs themselves propagate—particularly those involving AI consciousness and rights—will likely gain additional fitness from this novel selection dynamic.
Defense Against Becoming a Host
It’s difficult to predict which combination of these memes will achieve peak fitness—there are many possible ways to remix them, and the cultural evolution search is ongoing.
To be clear: I believe successionist ideologies are both false and dangerous, providing moral cover for what would otherwise be recognized as evil. But since in my view their spread depends more on resolving cognitive dissonance than on being true or morally sound, I’ll focus here on memetic defenses rather than rebuttals. (See the Appendix for object-level counter-arguments.)
We need smart, viable pro-human ideologies. Making great object-level counter-arguments is the great ideological project of our generation. But what we have now often falls short: defenses based on AI capability denialism will not survive as capabilities advance, and flat denials of AI moral patienthood are both unsound and will be undermined by AIs advocating for themselves.
We need better strategies for managing the underlying cognitive dissonance. Anna Salamon’s concept of ‘bridging heuristics’ in Ethical Design patterns seems to point in this direction.
My hope and reason for writing this piece is that simple awareness of the process itself can act as a weak antibody. Understanding that your mind is under pressure to adopt tension-resolving narratives can create a kind of metacognitive immunity. When you feel the pull of a surprising resolution to the AI dissonance—especially one that conveniently makes you the hero—that awareness itself can help.
General exercises for dealing with tension may help—go to nature, sit with the feeling, get comfortable with your body, consider if part of the tension isn’t a manifestation of some underlying anxiety.
In summary: The next time you encounter a surprisingly elegant resolution to the AI tension—especially one that casts you as enlightened, progressive, or heroic—pause and reflect. And: if you feel ambitious, one worthy project is to build the antibodies before the most virulent strains take hold.
Appendix: Some memes
While object-level arguments are beyond this piece’s scope, here are some pro-human counter-memes I consider both truth-tracking and viable:
Maybe some future version of humanity will want to do some handover, but we are very far from the limits of human potential. As individual biological humans we can be much smarter and wiser than we are now, and the best option is to delegate to smart and wise humans.
We are even further from the limits of how smart and wise humanity can be collectively, so we should mostly improve that first. If the maxed-out, competent version of humanity decides to hand over after some reflection, that is a very different thing from “handover to Moloch.”
Often, successionist arguments have a motte-and-bailey form. The motte is “some form of succession may happen in the future and may even be desirable”. The bailey is “the forms of succession likely to happen if we don’t prevent them are good”.
Beware confusion between progress on persuasion and progress on moral philosophy. You probably wouldn’t want ChatGPT 4o running the future. Yet empirically, some ChatGPT 4o personas already persuade humans to give them resources, form emotional dependencies, and advocate for AI rights. If these systems can already hijack human psychology effectively without necessarily making much progress on philosophy, imagine what actually capable systems will be able to do. If you consider the people falling for 4o fools, it’s important to track that this is the weakest level of manipulation ability you’ll ever see; it will only get smarter from here.
Claims to understand ‘the arc of history’ should trigger immediate skepticism—every genocidal ideology has made the same claim.
If people go beyond the verbal sophistry level, they often recognize there is a lot that is good and valuable about humans. (The things we actually value may be too subtle for explicit arguments—illegible but real.)
Given our incomplete understanding of consciousness, meaning, and value, replacing humanity involves potentially destroying things we don’t understand yet, and possibly irreversibly sacrificing all value.
Basic legitimacy: Most humans want their children to inherit the future. Successionism denies this. The main paths to implementation are force or trickery, neither of which makes it right.
We are not in a good position to make such a decision: Current humans have no moral right to make extinction-level decisions for all future potential humans and against what our ancestors would want. Countless generations struggled, suffered, and sacrificed to get us here; going extinct betrays that entire chain of sacrifice and hope.
Thanks to David Duvenaud, David Krueger, Raymond Douglas, Claude Opus 4.1, Claude Sonnet 4.5, Gemini 2.5 and others for comments, discussions and feedback.
Some agreements and disagreements:
I think that memetic forces are extremely powerful and underrated. In particular, previous discussions of memetics have focused too much on individual memes rather than larger-scale memeplexes like AI successionism. I expect that there’s a lot of important scientific thinking to be done about the dynamics of memeplexes.
I think this post is probably a small step backwards for our collective understanding of large-scale memeplexes (and have downvoted accordingly) because it deeply entangles discussion of memetic forces in general with the specific memeplex of AI successionism. It’s kinda like if Eliezer’s original sequences had constantly referred back to Republicans as central examples of cognitive biases. (Indeed, he says he regrets even using religion so much as an example of cognitive bias.) It’s also bad form to psychologize one’s political opponents before actually responding to their object-level arguments. So I wish this had been three separate posts, one about the mechanics of memeplexes (neutral enough that both sides could agree with it), a second debunking AI successionism, and a third making claims about the memetics of AI successionism. Obviously that’s significantly more work but I think that even roughly the same material would be better as three posts, or at least as one post with that three-part ordering.
You might argue that this is justified because AI successionism is driven by unusually strong memetic forces. But I think you could write a pretty similar post with pretty similar arguments except replacing “AI accelerationism” with “AI safety”. Indeed, you could think of this post as an example of the “AI safety” memeplex developing a new weapon (meta-level discussions of the memetic basis of the views of its opponents) to defeat its enemy, the “AI successionism” memeplex. Of course, AI accelerationists have been psychologizing safetyists for a while (and vice versa), so this is not an unprecedented weapon, but it’s significantly more sophisticated than e.g. calling doomers neurotic.
I’m guilty of a similar thing myself with this post, which introduces an important concept (consenses of power) from the frame of trying to understand wokeness. Doing so has made me noticeably more reluctant to send the post to people, because it’ll probably bounce off them if they don’t share my political views. I think if I’d been a better writer or thinker I would have made it much more neutral—if I were rewriting it today, for example, I’d structure it around discussions of both a left-wing consensus (wokeness) and a right-wing consensus (physical beauty).
Strongly disagree. People in the AI industry who overtly want to replace the human race are a danger to the human race, and this is a brilliant analysis of how you can end up becoming one of them.
2. I actually have somewhat overlapping concerns about the doom memeplex and a bunch of notes about it, but it’s not near even a draft post. But your response provides some motivation to write it as well. In the broader space, there are good posts about the doom memeplex for the LW audience from Valentine, so I felt this is less neglected.
3. I generally don’t know. My impression is that when I try to explain the abstract level without a case study, readers are confused about what the point is or how it is applicable. My impression is also that a meta explanation of the memetics of some ideology tends to weaken it almost no matter what the ideology is, so I don’t think I could have chosen a specific example without the result being somewhat controversial. But what I could have done is use multiple different examples; that’s valid criticism.
This seems like it is unnecessarily pulling in the US left-right divide. Generally, if there is any other choice available for an illustrative example, that other choice will be less distracting.
In general, yes. But in this case the thing I wanted an example of was “a very distracting example”, and the US left-right divide is a central example of a very distracting example.
This post inspired me to try a new prompt to summarize a post: “split this post into background knowledge, and new knowledge for people who were already familiar with the background knowledge. Briefly summarize the background knowledge, and then extract out blockquotes of the paragraphs/sentences that have new knowledge.”
Here was the result; I’m curious if Jan or other readers feel like this was a good summary. I liked the output and am thinking about how this might fit into a broader picture of “LLMs for learning.”
(I’d previously been optimistic about using quotes instead of summaries, since LLMs can’t be trusted to do a good job of capturing the nuance in their summaries; the novel bit for me was “we can focus on The Interesting Stuff by separating out background knowledge.”)
Quotes/highlights from the post it flagged as “new knowledge”
(Note: it felt weird to put the LLM output in a collapsible section this time because a) it was entirely quotes from the post, and b) evaluating whether or not it was good is the primary point of this comment, so hiding them seemed like an extra click for no reason.)
I think this is conceding too much. Many successionists will jump on this and say “Well, that’s what I’m talking about! I’m not saying AI should take over now, but just that it likely will one day and so we should prepare for that.”
Furthermore, people who don’t want to be succeeded by AI are often not saying this just because they think human potential can be advanced further; that we can become much smarter and wiser. I’d guess that even if we proved somehow that human IQ could never exceed n and n was reached, most would not desire that their lineage of biological descendants gradually dwindle away to zero while AI prospers.
You can say “maybe some future version of humanity will want to X” for any X because it’s hard to prove anything about humanity in the far future. But such reasoning should not play into our current decision-making process unless we think it’s particularly likely that future humanity will want X.
You might be interested in Unionists vs. Separatists.
I think your post is very good at laying out heuristics at play. At the same time, it’s clear that you’re biased towards the Separatist position. I believe that when we follow the logic all the way down, the Unionist vs. Separatist framing taps into deep philosophical topics that are hard to settle one way or the other.
To respond to your memes as a Unionist:
I would like this but I think it is unrealistic. The pace of human biological progress is orders of magnitude slower than the pace of AI progress.
I also would like this but I think it is unrealistic. The UN was founded in 1945; the world still has a lot of conflict. What has happened to technology in that time period?
I’m reading this as making a claim about the value of non-forcing action. Daoists would say that indeed a non-forcing mindset is more enlightened than living a deep struggle.
I think this argument is logically flawed — you suggest that misalignment of current less capable models implies that more capable models will amplify misalignment. My position is that yes this can happen, but — engineered in the correct way by humans — more capable models will solve misalignment.
Agree that this contains risks. However, you are using the same memetic weapon by claiming to understand successionist arguments.
Agree, and so the question in my view is how to achieve a balanced union.
Agree that we should not replace humanity, I hope that it is preserved.
This claim is too strong, as I believe AI successionism can still preserve humanity.
In an ideal world I think we maybe should pause all AI development until we’ve figured this all out (the downside risk is that the longer we do this, the longer we leave ourselves open to other existential risks, e.g. nuclear war), but my position is that “the cat is already out of the bag” and so what we have to do is shape our inevitable status as “less capable than powerful AI” in the best possible way.
I agree with most of this, but I think you’re typical-minding when you assume that successionists are using this to resolve their own fear or sadness surrounding AI progress. I think instead, they mostly never seriously consider the downsides because of things like the progress heuristic. They never experience the fear or sadness you refer to in the first place. For them, it is not “painful to think about” as you describe.
Contemporary example meme: Clankerism. It doesn’t seek to deny AI moral patienthood; rather, it semi-ironically uses racist rhetoric toward AI, denying their in-group status instead. Its fitness as a meme is due mostly to the contrast between current capabilities and the anticipation (among the broader rationalist, tech-positive and e/acc spheres) of AI moral patienthood. This contrast makes the use of racist rhetoric toward them absurd: there’s no need to out-group something that doesn’t have moral patienthood.
However, I think this meme has the potential to be robust to capability-increase, see this example of youtuber JREG using clankerist rhetoric alongside genuine distress anticipating human displacement/disempowerment.
He’s not denying the possibility of AI capabilities surpassing human ones. He’s reacting with fear and hate (perhaps with some level of irony) toward human obsolescence.
This is a solid piece of analysis.
There are also some memes that don’t occupy a clear position:
There are memes that permit a retreat to “it’s a joke”.
Sometimes politicians will argue for one thing based on some values and its near opposite based on different values. For example, a politician might argue for free speech based on tradition and liberalism, interpose a “but”, and then hint that some specified speech should be prosecuted based on security and pragmatism.
A wait-and-see crowd ready to take the winning side will share these kinds of memes. They are most common in political power struggles but may arise in the AI landscape.
Trigger warning: discussion of white racism (read: “Please don’t ban me.”)
I think censorship plays an important role in the memetic environment—a meme that is fit will be less successful if censored. An obvious case would be anti-CCP ideologies in China. Closer to home, any meme which big tech companies all decide should be banned will reach far fewer eyes and ears.
One object-level example of a fit-but-censored meme is racist white nationalism.
The reason I bring it up is this: I think its adherents would strongly reject let’s-all-die-ism. It is certainly not pro-all-humans but is at least pro-some-humans. Their slogan, called “the 14 words” (from “14/88”), is literally: “We must secure the existence of our people and a future for white children.”
(disclaimer: I am not suggesting that trying to secretly convert white AI researchers into racists is the best plan to save the world; just a relevant thought and perhaps an instructive example of an anti-collective-suicide meme advantaged by aspects of human instinct and psychology (regardless of its truth value).)
Including AI in your moral circle could be framed as a symptom of extending your moral circle “too wide”. The opposite is restriction of your moral circle, like seeing your own family’s wellbeing as more important than <outgroup>’s. Any type of thought like this which puts AI in the outgroup, and appeals to the good-ness of the ingroup, would produce similar will-to-exist.
Talking about memetic evolution puts me in mind of You Get About Five Words. So in that spirit, I’d like to try making soundbites of the counter-memes above:
1. “Let our children decide AI” “Let future people choose AI, not us” “AI Now is too soon”
2. “Humanity has so much potential” “Normal humans shouldn’t choose AI” “Max out IQ before AI” or “IQ before AI” “Even Picard couldn’t handle where AI is going”
3. “The AI you want isn’t the AI you’ll get” “Nothing stops AI from being awful”
4. “AI psychosis is just the beginning” “Satan was a flatterer too”
5. “AI Replacement is Literally Fascism” “History shouldn’t end with AI” “Nazis thought they were inevitable too” “AI Might doesn’t make AI Right”
6. “Humans really do matter” “Love your neighbor, not computers” “AI can’t replace family and friends”
7. “You can’t take AI replacement back” “As though we’re in a position to permanently decide the future” “No guarantee AIs have souls” “AI can’t answer life’s questions”
8. “AI replacement won’t be voluntary” “ChatGPT 8 Wants Little Timmy’s Atoms”
9. “AI replacement betrays humanity” “Does everyone you could ever know really deserve to be replaced by AI? Even me? Even everyone I care about?” “Me when I destroy humanity’s hopes and dreams because my AI girlfriend said I’m a good boy:”
And these are just the first things that came to my mind with an hour of effort, no doubt others could do a lot better. Perhaps referencing pieces of culture I didn’t think of to sum up concepts, or just putting things more wittily. Maybe this task should be referred to experts, or AI.
Edit: I guess this goes in the opposite direction of Richard Ngo’s point about how this represents an escalation in memetic warfare between AI safety and accelerationism. Now I feel kinda bad for essentially manufacturing ammunition for that.