I am a volunteer organizer with PauseAI and PauseAI US, a pro forecaster, and some other things that are currently much less important.
The risk of human extinction from artificial intelligence is a near-term threat. Time is short, p(doom) is high, and anyone can take simple, practical actions right now to help prevent the worst outcomes: contact your political representatives.
On tractability:
I am in Washington DC today, and tomorrow I will speak with the offices of both of my senators, along with three others from Arizona, to educate them on the issue and demand that they call for a global agreement to ban the creation of superintelligent AI. More than 50 others are doing the same for their own states on the same day.
My representative Greg Stanton already (quietly, for now) supports an ASI ban, primarily due to my personal efforts to educate him on the topic. My state-level representative Stacey Travers introduced an AI safety transparency bill this session at my request, which I helped shape. My state-level senator Mitzi Epstein became visibly concerned about AI risk when I met with her about it. I am three for three on positive impact, with a range of effect sizes.
I am not an AI safety researcher, I have no ML degree, and political lobbying is approximately the furthest possible thing from what I thought I could ever succeed at.
Tractability is a question of the will to act, not of whether we have a galaxy-brained map of the complex system that is politics. Research on complex systems relies heavily on empiricism. Most successful big political asks are seen as impossible until they suddenly become the obvious consensus. If you want to know whether an AI moratorium is feasible, lobbying your elected leaders is the requisite field work.