I am a volunteer organizer with PauseAI and PauseAI US, a top forecaster, and many other things that are currently much less important.
The risk of human extinction from artificial intelligence is a near-term threat. Time is short, p(doom) is high, and anyone can take simple, practical actions right now to help prevent the worst outcomes.
One can point to competitive pressures and the qualitatively new prospect of global takeover, but the most straightforward answer to why humanity is charging full speed ahead is that the leaders of the top AI labs are ideologically committed to building artificial superintelligence (ASI). They are utopians and power-seekers. They don’t want only 80% of a utopia any more than a venture capitalist wants only 80% of a billion dollars.