Doomers Should Try Much Harder to Get Famous
For the purpose of this post, “getting famous” means “building a large, general (primarily online) audience of people who agree with/support you”.
If you believe that slowing down or pausing AI development is a good idea, and that this should be official policy, you’re going to need a large number of people (if not a majority) to also agree that slowing down or pausing AI is a good idea. On this issue, being correct is insufficient; you need to be correct in public, in front of as many people as possible.
In a world where ~70% of people think that AI doom is sci-fi, and that the future will be business as usual but with better healthcare, solar power, and the iPhone 37, getting anything close to an international treaty where nations agree to clip the wings of their economies is a pipe dream.
To get pauses, treaties, and sane policy, you need to convince as many people as possible that there is a significant risk of AI killing everyone. To do that, you need to get famous.
Getting famous is feasible
Becoming (internet) famous is a relatively predictable process in 2025. You simply need to make videos that a large number of people:
Click on
Watch for long periods of time
This, while difficult, is a much more tractable problem than solving AI alignment in a five-to-fifteen-year timespan in a culture of catastrophic race dynamics.
Why aren’t AI doomers trying to get famous?
For many good candidates, fame is anathema. Fame involves an audience of people who are largely clueless, because few people have a clue. Therefore, for the sake of creating content, it involves simplifying concepts that don’t simplify neatly.
Fame is ugly, anxiety-inducing, and requires loosening stringent intellectual standards.
If you actually have a high P(doom), it may be wise to suck it up and do it anyway.
The current strategy is obviously bad and is not working well
The prevailing strategy of:
Making blog posts on esoteric forums to an audience of people who already largely agree
Having a few reputable representatives go on (mostly niche) podcasts
Writing to senators imploring them to take AI risk seriously
causes AI safety/notkilleveryoneism to be an obscure, tightly-knit community rather than something that lends itself to viral memetic growth.
If you don’t change direction, you are likely to end up where you’re going.
Maybe you should just try harder to get famous and then worry about the specifics later
Rob Miles is a great example of someone who has tried popularising AI safety in a digestible, virality-accessible form. He’s amassed around 160,000 subscribers on YouTube, and makes content that almost exclusively focuses on alignment. This is a clear signal that this subject has immense potential for widespread awareness and popularity. Looking at his channel, he’s made around 2 videos a year for the past 5 years. How many more people would have been exposed to these ideas if he made 2 videos a month during this period? It’s plausible it would be millions.
It’s possible he has been working on higher-ROI projects during this time, but the ROI would need to be extremely high indeed to justify the opportunity cost of a million-plus people in the United States never waking up to the problem of AI risk.
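A hedged back-of-envelope makes the scale concrete (the per-video view count is my illustrative assumption, not a figure from his channel): two videos a month for five years is 120 videos, versus the roughly 10 he actually made. If each video averaged on the order of 100,000 views, that is (120 − 10) × 100,000 = 11,000,000 additional views. Even if unique viewers are only a fraction of raw views, “millions” is a conservative reading.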
With widespread recognition, a large following, and millions of people brought to awareness, you can leverage your way to much greater influence than you can by making lengthy blog posts preaching to 250 members of the choir.
Reputation matters, but not without reach
The advantage of notkilleveryoneism over accelerationism is the intellectual and reputational calibre of its advocates. Eliezer Yudkowsky, Geoffrey Hinton, Paul Christiano, Ilya Sutskever, and others with a high P(doom) and solid credentials are in a position, with a moderate amount of effort, to grow a general audience in the millions over a one-to-five-year span, and would have the advantage of being right and properly credentialed.
Reputation is a multiplier on the influence conferred by reach: it’s not sufficient to be reputable and right; you need to be reputable, comprehensible, and visible.
No more bungling general-audience podcasts
Eliezer had a shot on Lex Fridman, and he botched it horribly. This was a priceless opportunity to win millions of people over, and it was wasted. There are only so many Rogans and Fridmans in the content ecosystem. Do not bring the cause into disrepute by lecturing obscurely to general audiences.
Distill the most important concepts into digestible one-liners or die.
The rough strategy for anyone with the necessary reputation
Make fortnightly (or more frequent) YouTube videos that distill important concepts in AI safety, analyse developments in AI, make mini documentaries, etc.
Write these videos for a 100 IQ audience, not a 140 IQ audience.
Write, present, film, and edit these videos engagingly rather than present them in a dry lecture format.
Clip out / independently edit segments of these videos and post them as shorts/reels on as many short-form content platforms as possible.
Do everything possible to get featured on podcasts like “Diary of a CEO”, Joe Rogan, Lex Fridman, Tom Bilyeu, and show up with prepared, digestible rhetoric intended to persuade general audiences.
You can either sacrifice clarity for purity, or purity for impact. Choose wrong, and no one will hear your warning until the lights go out.
I’ve come to believe that the most (neglected x tractable x important) thing to do right now is Spread The Word. Ordinary folks have quite a bit more power, vigor, and wisdom than they’re (implicitly) given credit for around here. (“Agency” is the trendy way to put it, I think.) Estimates of “$0.01 per view on an AI risk explainer video” (https://manifund.org/projects/scaling-ai-safety-awareness-via-content-creators) or “$0.10 per click with a 5-6% share rate” (https://manifund.org//projects/testing-and-spreading-messages-to-reduce-ai-x-risk?tab=comments#1d592d6c-09f7-47bc-8d18-5675fa76556e) seem like darn good deals to me. (Adjust of course for extreme uncertainty and book-talking.)
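Taking those estimates at face value (a rough sketch; the grant size below is my hypothetical, not a figure from either project): at $0.01 per view, $50,000 buys 50,000 / 0.01 = 5,000,000 views, and at $0.10 per click with a ~5% share rate, each share costs about 0.10 / 0.05 = $2. Adjust for uncertainty as you like; the per-person cost of awareness still looks remarkably cheap.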
I intend to fund these efforts, and I would be even more interested in funding an effort run by someone with serious intentions and expertise in Going Viral (or whatever it is getting TikTok-famous consists of nowadays). If AI notkilleveryoneism had 1% of the reach of abortion/climate/Holy Land content, we’d live in a much better world.
I think algorithm-optimization (along with its father, Advertising) is a dark art. But darn if I don’t want some dark artists on our side.
Edit: I neglected to mention
https://manifund.org/projects/effective-ai-awareness-improving-evidence-based-ai-risk-communication
and https://manifund.org/projects/creating-making-god-an-accessible-feature-documentary-on-risks-from-agi
I don’t agree that targeting 100 IQ individuals is an effective strategy for slowing down AI development, because 100 IQ people generally don’t decide policy. Public opinion tends to matter very little in politics, especially in areas like AI policy that have little relation to everyday life.
Convincing a few dozen influential people in tech, politics, and media is likely to have a vastly larger impact than winning over hundreds of millions of ordinary people. This blog post might help outline why: https://www.cremieux.xyz/p/the-cultural-power-of-high-skilled?utm_source=publication-search
That has been the default strategy for many years and it failed dramatically.
All the “convinced influential people in tech” started making their own AI start-ups while coming up with galaxy-brained rationalizations for why everything will be okay with their idea in particular. We tried to be nice to them in order not to lose our influence with them. It turned out we didn’t have any. While we carefully and respectfully showed the problems with their reasoning, they likewise respectfully nodded their heads and continued to burn the AI timelines. Who could’ve thought that people with a real chance to become incredibly rich and important, at the cost of dooming human civilization a bit later, would take this awesome opportunity?
Meanwhile, surprisingly enough, it turned out that regular “100 IQ individuals” with no prospect of becoming absurdly rich and powerful actually do not want an apocalypse! Too bad that we have already stained our reputation quite a bit by appearing as bootlickers to the tech billionaires for all these years, but better late than never.
There is a lesson about naivety/cynicism and personal bias here: it is much more pleasant to persuade influential elites than the common masses. The former feels like respectable intellectual activity, while the latter feels like (gah!) politics, something crazy activists would do. It’s good that we’ve managed to learn this, diversifying our activity and trying to appeal to common people more. It would have been even better to win initially instead of making this kind of fascinating mistake, but sadly we are not that good at rationality yet.
At least in democracies, convincing the people of something is an effective way to get politicians to pay attention to it—their job depends on getting these people to vote for them.
Notably in the UK, David Cameron gave the people a vote on whether to leave the EU because this was an idea that was gaining popularity. He did this despite not himself believing in the idea.
Naturally, plenty of legislation also gets passed without most people noticing, and in this respect we are better off convincing lawmakers. But I think that if we are able to convince a significant portion of the public, we will by extension convince a substantial number of lawmakers through their interaction with the public.
I have not read through the whole of the blog post that you linked, but I disagreed with the “two important facts” used as a premise (1. people’s opinions are mostly genetic, and 2. most people’s opinions are completely random unless they’re smart), and therefore did not trust any conclusions that might follow from them.
Equally, I get the impression that, given the scale of the challenge, even if we were to concede that convincing the public is less important than convincing politicians, we will most likely need to do both to have a reasonable shot at passing anything that looks like good legislation.