Things that aren’t on your list but maybe should be:
Understand the personal (e.g. depression, broken connections), local-social (e.g. peer pressure), and global-societal (e.g. whatever metaphysical wars people think they’re in) forces that are pushing and will push people to work on dangerous stuff. Since the arguments for taking AI risk very seriously are pretty solid, there’s maybe some reason other than logic that people aren’t, from their own inside selfish view, worried. One can say “well, it’s peer pressure and monetary / status incentives”, but that’s vague, doesn’t say how to change it, and doesn’t explain why those incentives point this way and not that. (Well, except the money one.) With that understanding, more avenues might become apparent. (In this vein, making the broader culture more healthy is good.)
Push a distinction between Software 2.0 and AGI research. Things that might actually make money using AI should be separated out from AGI, so that investors can learn to distinguish them. (Maybe infeasible, but worth a try.) E.g. AlphaFold and Tesla self-driving are very much, IIUC, Software 2.0 and not AGI (in contrast to, e.g., efforts to throw an RL agent into a huge range of tasks and crank up the compute).
Make genuine friends with AGI capabilities researchers; be truly trustworthy, besides all the AI X-risk stuff. Then they might want to listen to the reasons you’re worried.
Become extremely rich (ethically!) and then buy the research organizations, and then, you know, you’ll have their attention.
Seduce AGI capabilities researchers and then pillow talk about waves of pleasure nanobots and how unsexy it is to kill everyone.
Make memes that are sufficiently dank to be popular with normie AGI capabilities researchers, and that explain the arguments.
Disclaim “AI ethics” stuff that’s more about “you can’t have fun with this image generator” and less about “this might kill everyone”. Or at least distinguish them as two totally different things. It’s bad to conflate all anti-AI stuff together, because then, from the perspective of capabilities researchers, it all looks like Luddism.
I have this idea for a cryptocurrency exchange where we mint a token and use some of the profits to fund EA, does that sound like a good idea in the current economy?