I am a volunteer organizer with PauseAI and PauseAI US, a pro forecaster, and some other things that are currently much less important.
The risk of human extinction from artificial intelligence is a near-term threat. Time is short, p(doom) is high, and anyone can take simple, practical actions right now to help prevent the worst outcomes.
Three counterpoints:
Opportunity cost is driven to zero when AI systems and their infrastructure are near-endlessly replicable. There is no need to choose between A and B when you can simply double the population and do both.
Humans will be slow and error-prone compared to AI systems. Why would they ever be trusted to do anything that matters? Letting them try would just create more work for the AIs than if the AIs did it themselves. And why would humans be paid at all for things that don't matter? That's just charity.
Because of AI’s extreme efficiency advantage, at some point the opportunity cost of not converting human land and goods and bodies into additional AI infrastructure will exceed the direct cost of doing so.