I am a volunteer organizer with PauseAI and PauseAI US, a pro forecaster, and some other things that are currently much less important.
The risk of human extinction from artificial intelligence is a near-term threat. Time is short, p(doom) is high, and anyone can take simple, practical actions right now to help prevent the worst outcomes. Contact your political representatives.
The mechanism isn’t [public disapproval] → [AI moratorium]. That sounds hopeless to me, too. What I’m actually relatively optimistic about is [public awareness] → [educating policymakers] + [signaling to policymakers that they are allowed to do something about it] → [common knowledge of the desire to act] → [AI moratorium]. This feels very doable to me after getting back from DC, where I spoke with my congressional offices.
The key here is that most politicians don’t want to take these risks either. Very few people are actually on board with the whole “create a superintelligence we don’t know how to control or make care about us” thing. If enough politicians and policymakers learn that the “create superintelligence” project necessarily carries a risk of human extinction, one would hope they would act out of their own fear and desire for self-preservation. But because of how politics works (or doesn’t), especially in the US, it takes a lot to overcome the strong social fear of speaking out. Politicians need to know that their constituents have their backs, rather than supporting the status quo or holding very different priorities.
So public disapproval isn’t a lever with which to move the whole world. It’s just a firm tap to tip the balance toward action. The whole mechanism is being built simultaneously, in parallel, and the education and advocacy work gets easier the more the public engages with the issue. The bigger the tap, the smaller the gap.