Support the movement against extinction risk due to AI

Link post

The document below may not reflect the latest version. Click the link above for the latest version.

  • Low effort

    • Like, share, and subscribe to my content, or to content from people publishing on similar AI extinction risk topics. You can share with your friends, people in media or politics, people working at AI labs or in x-risk, anyone really.

  • High effort

    • Organise a protest in your city around AI extinction risk.

    • Start a social media channel to persuade people at scale about AI extinction risk. Even one video is better than zero, as it motivates others to come forward too.

  • Most impactful

    • If you have a large social media following or high-status credentials (UK or US citizens only): run for election with an AI pause as part of your agenda.

      • (Maybe) Consider supporting UBI as part of your agenda, as one of the largest groups of single-issue voters in the US is concerned only with losing their own job/​income/​equity. Example: Andrew Yang (who signed the FLI pause letter).

    • Invent a new ideology or religion that can unite humanity around a common position on superintelligent AI, human genetic engineering, and whole brain emulation.

      • IMO superintelligent AI and human genetic engineering are both potentially less than 5 years away, unless people take political action to prevent this. Whole brain emulation is seeing slow and steady progress, so it is maybe 30 years away.

    • If you have >$100k in funds: sponsor bounties for potential whistleblowers at top AI labs and their supporting governments.

    • If you have >$10M in funds: sponsor cyberattacks /​ social engineering from foreign soil against top AI labs and their supporting governments, and publish the leaked info publicly.

      • At minimum, publish info relevant to AI risk, such as the values, decisions, and capabilities of key decision-makers.

      • At maximum, publish all data that Big Tech has collected on everyone, thereby destroying the privacy of every person on Earth with no exceptions. I am supportive of this, but I’m aware it is a radical stance. Even if you don’t agree with me, please at least publish AI-risk-related info.

      • If the profitability of your operation is a significant concern, sell or lease the latest AI capabilities (code, model weights) to other top AI labs worldwide.

      • I’m trying to figure out a better incentive mechanism than donations, but until then, donations help.

Support me

  • Donate to me

    • I’m looking for people who fund “outside game” strategies for fixing AI extinction risk (like mass protest, social media channels, and whistleblowers), not “inside game” strategies (like alignment research at top AI labs, or lobbying US policymakers on behalf of top AI labs). Examples: Pierre Omidyar funding The Intercept, Brian Acton funding Signal, etc.

  • Work with me

    • Provide feedback or fact-checking for my whistleblower guide. I’m especially interested in people with expertise in US or international law.