Poll on De/Accelerating AI

Recently, people on both ends of the de/accelerating AI spectrum have been claiming that rationalists sit at the opposite end. So I think it would be helpful to run a poll to get a better idea of where rationalists actually stand. Since there is no built-in poll function, I'm putting the positions in comments. Please vote only with agree/disagree votes, not karma votes. This should preserve the order of the questions, which reduces cognitive load, since they are approximately ordered from more accelerating to more decelerating within the two categories of big picture and individual actions.

Update: the order is not being maintained by the default "magic" sorting despite people only agree/disagree voting, but the poll seems to be functioning adequately. You can sort by "oldest" to get the intended ordering of the questions. I think people are agreeing with positions they think are good, even if not sufficient.

Update 2: Now that it’s been 3 days and voting has slowed, I thought I would summarize some of the interesting results.
Big picture: the strongest support is for pausing AI now if done globally, but there is also strong support for making AI progress slow, pausing in the event of a disaster, and pausing if progress greatly accelerates. There is only moderate support for shutting AI down for decades, and near-zero support for pausing in response to high unemployment, pausing unilaterally, or banning AI agents. There is strong opposition to never building AGI. There could of course be large selection bias (only ~30 people voted), but it does appear that critics at one extreme, who say rationalists want to accelerate AI in order to live forever, and critics at the other, who say rationalists don't want any AGI at all, are both incorrect. Overall, rationalists seem to prefer a global pause either now or soon.
Individual actions: the positions with relatively strong agreement are that it's okay to pay for AI but not to invest in it, and that it's okay to be a safety employee at both safer and less safe labs.