Low effort
Like, share, and subscribe to my content, or to people publishing similar content on AI extinction risk. You can share it with your friends, people in media or politics, people working at AI labs or in x-risk, anyone really.
High effort
Organise a protest in your city around AI extinction risk.
Start a social media channel to persuade people at scale about AI extinction risk. Even one video is better than zero, as it motivates other people to also come forward.
Most impactful
If you have a large social media following or high-status credentials (UK, US citizens only): Run for election with an AI pause as part of your agenda.
(Maybe) Consider supporting UBI as part of your agenda, since one of the largest groups of single-issue voters in the US is concerned only with losing their own job/income/equity. Example: Andrew Yang (signed the FLI pause letter).
Invent a new ideology or religion that can unite humanity around a common position on superintelligent AI, human genetic engineering, and whole brain emulation.
IMO superintelligent AI and human genetic engineering are both potentially less than 5 years away, unless people take political action to prevent it. Whole brain emulation is seeing slow and steady progress, so maybe it is 30 years away.
If you have >$100k in funds: Sponsor bounties for potential whistleblowers at top AI labs and their supporting govts.
If you have >$10M in funds: Sponsor cyberattacks / social engineering from foreign soil against top AI labs and their supporting govts, and publish leaked info publicly.
At minimum, publish info relevant to AI risk, such as the values, decisions, and capabilities of key decision-makers.
At maximum, publish all data that Big Tech has collected on everyone to the public, thereby destroying the privacy of every person on Earth with no exceptions. I am supportive of this, but I’m aware it is a radical stance. Even if you don’t agree with me, please at least publish AI-risk-related info.
Sell or lease the latest AI capabilities (code, model weights) to other top AI labs worldwide if the profitability of your operation is a significant concern.
I’m trying to figure out a better incentive mechanism than donations, but until then, donations will help.
Support me
Donate to me
Looking for people funding “outside game” strategies for fixing AI extinction risk (like mass protests, social media channels, whistleblowers), not “inside game” strategies (like alignment research at top AI labs, or lobbying US policymakers on behalf of top AI labs). Examples: Pierre Omidyar funding The Intercept, Brian Acton funding Signal, etc.
Work with me
Provide me feedback or do fact-checking for the whistleblower guide. I’m especially interested in people with expertise in US or international law.
Correct me if I misread, but if I understand correctly, these are incredibly bad ideas which would backfire spectacularly, aren’t they?
This comment has almost zero information. Do you actually want a discussion, or is this a “boo, go away” comment?
sponsoring cyberattacks will lead to blowback that more than defeats the purpose.
starting a religion will lead to blowback that more than defeats the purpose.
if you’re at the level where you think these are great ideas to suggest, then you need to be at the level where it’s obvious to you why both are dead ends. it’s the same reason in both cases: there are enormous groups you’re attacking by either approach. generally speaking, groups don’t like being attacked, and will attack you back, which usually undoes the benefit of attacking.
in particular, starting a war over AI seems likely to simply lead to AI being pushed into military use much more rapidly.
you said
it might be possible that there are aliens out there somewhere with no privacy between any of them, who live in a utopia of some sort. also possibly some aliens somewhere else with no privacy who live in the ultimate dystopia. here on earth, there are a lot of hominids who really don’t like some other hominids. sudden loss of privacy would likely result in mass death of some sort, when some sort of war gets started as a response; if not, then loss of freedom is still plausible. sudden loss of privacy would be catastrophic and would cause so much chaos and damage that it’s effectively impossible to have even a vague sense of whether it’d end up stabilizing on a good world after the easily predictable hell it would create at the beginning. there are certainly many bad things people do that it would make public, but if everyone knows the terrible stuff at once, is that even negative for people who do bad things? and what about how malicious people would use it to attack pacifist targets?
I agree I will be attacking many groups by continuing on my current plan.
“which usually undoes the benefit of attacking”—reason?
I have already assumed that, unless the state of affairs changes significantly, by 2026 or 2027 AI capabilities will be the number one focus of the entire US defence establishment. You can read the first bullet point in this red-teaming document.
My actions don’t accelerate it much because it seems to be happening regardless of what I do.
I agree that multiple governments may fall and war may be triggered.
I agree that having very confident predictions is hard. I will at some point publish more on why I think it’ll be net good. I do think it’ll be net good, with a decent level of confidence.
Umm... yes? I generally think people as a whole will coordinate to prevent many of the bad things from happening. One major problem today is that whenever people coordinate to get something done, the leader and the org end up abusing that trust. Earning trust will become easier in such a world.
Worst case, if they fail to prevent bad things from now on, at least there is much more clarity about the state of affairs, so that someone else can fix it later.
You need to look at the offence-defence balance, in terms of who has power in this world. I generally think most people can coordinate to prevent a few people from attacking a few other targets.
Anyone wanna set up prediction markets on any of the claims here? I wanna be able to claim some fake internet points if I later get proven right.
(Waiting for the downvotes! And the impending rate limit!)
Happy to get in the first one!
I think rousing the hoi polloi would be counterproductive. They’re a force you cannot align. Like the sorcerer’s apprentice animating a broomstick to help carry water, once called up, you cannot tell them to stop.
I agree the public is not going to take orders from you, on what they should think or do. You need to use persuasion like everyone else. I agree people will also do lots of counterproductive and unproductive things in response to info about AI risk.
I think we haven’t found the crux yet.