I can strongly confirm that few of the people worried about AI killing everyone, or of the EAs who are so worried, favor a pause in AI development at this time, supported the pause letter, or took other similar actions.
An especially small percentage (but not zero!) would favor any kind of unilateral pause, whether by Anthropic alone or by the West, without the rest of the world doing the same.
>Holly Elmore (PauseAI): It’s kinda sweet that PauseAI is so well-represented on twitter that a lot of people think it *is* the EA position. Sadly, it isn’t.
>
>The EAs want Anthropic to win the race. If they wanted Anthropic paused, Anthropic would kick those ones out and keep going but it would be a blow.
I tried to get at this issue with polls on EA Forum and LW. For EAs, 26% want to stop or pause AI globally, 13% want to pause it even if only done unilaterally. I would not call this an especially small percentage.
My summary for EAs was: “13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral and 5% want to accelerate AI in a safe US lab. So if I had to summarize the median respondent, it would be strong regulation for AI or pause if a particular event/threshold is met. There appears to be more evidence for the claim that EA wants AI to be paused/stopped than for the claim that EA wants AI to be accelerated.”
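That median-respondent reading is just an ordering exercise over the poll percentages. Here is a minimal Python sketch of it; the ordering of positions from most to least restrictive is my assumption, not something the poll itself specified:

```python
# Ordering the EA poll responses from most to least restrictive
# (this ordering is an assumption for illustration, not from the poll)
# and walking the cumulative total to find where the 50th percentile lands.
categories = [
    ("never build AGI", 13),
    ("pause AI now in some form", 26),
    ("pause if event/threshold", 21),
    ("other regulation", 31),
    ("neutral", 5),
    ("accelerate in a safe US lab", 5),
]

cumulative = 0
for name, pct in categories:
    cumulative += pct
    if cumulative >= 50:  # the median (50th-percentile) respondent falls in this bucket
        print(f"median respondent: {name} (cumulative {cumulative}%)")
        break
```

Running through the numbers: 13% cumulative after "never", 39% after "pause now", 60% after "pause if event/threshold," so the median respondent lands in the conditional-pause bucket, matching the summary above.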
My summary for LW was: “the strongest support is for pausing AI now if done globally, but there’s also strong support for making AI progress slow, pausing if disaster, and pausing if greatly accelerated progress. There is only moderate support for shutting AI down for decades, and near zero support for pausing if high unemployment, pausing unilaterally, and banning AI agents. There is strong opposition to never building AGI. Of course there could be large selection bias (with only ~30 people voting), but it does appear that the extreme critics saying rationalists want to accelerate AI in order to live forever are incorrect, and also the other extreme critics saying rationalists don’t want any AGI are incorrect. Overall, rationalists seem to prefer a global pause either now or soon.”