The most common anti-safety arguments I see in the wild, not steel-manned but also not straw-manned:
There’s no evidence of a malign superintelligence existing currently, therefore the risk can be dismissed without evidence
We’re faking being worried because if we truly were, we would use violence
Yudkowsky is calling for violence
Claiming that something as serious as the end of the world could happen might influence people to commit violence, therefore warning about the end of the world is bad
Doomers can’t provide the exact steps a superintelligence would take to eliminate humanity
When the time comes, we’ll just figure it out
There were other new technologies that people warned would cause bad outcomes, and those outcomes never materialized
We didn’t know whether nuclear experimentation would end the world, but we went ahead with it anyway and the world didn’t end (omitting that careful effort was first put into ensuring this risk was minuscule)
My personal favorite: AI doom would happen in the future, and anything happening in the future is unfalsifiable, therefore it is not a scientific claim and should not be taken seriously.
I have been shocked by the lack of effort put into social technology for lengthening timelines. As I see it, one of the only chances we have is increasing the number of people (specifically normies, as that is the group with true scale) who understand and care about the risk arguments, yet almost nobody seems to be working on this. Am I missing something?