Poll on De/Accelerating AI
Recently, people on both ends of the de/accelerating AI spectrum have been claiming that rationalists are on the opposite end. So I think it would be helpful to have a poll to get a better idea of where rationalists stand. Since there is no built-in poll function, I’m putting the positions in comments. Please use agree/disagree voting only. This should preserve the order of the questions, which reduces cognitive load, as they are approximately ordered from more accelerating to more decelerating within the two categories of big picture and individual actions.
Update: the order is not being maintained by the default “magic” sorting despite people only agree/disagree voting, but it seems to be functioning adequately. You can sort by “oldest” to get the intended ordering of the questions. I think people are agreeing with things they think are good, even if not sufficient.
Update 2: Now that it’s been 3 days and voting has slowed, I thought I would summarize some of the interesting results.
Big picture: the strongest support is for pausing AI now if done globally, but there is also strong support for making AI progress very slow, pausing after a major disaster, and pausing if AI greatly accelerates AI progress. There is only moderate support for shutting AI down for decades, and near-zero support for pausing in the event of high unemployment, pausing unilaterally, or banning AI agents. There is strong opposition to never building AGI. Of course there could be large selection bias (with only ~30 people voting), but it does appear that the critics on one extreme who say rationalists want to accelerate AI in order to live forever are incorrect, as are the critics on the other extreme who say rationalists don’t want any AGI. Overall, rationalists seem to prefer a global pause either now or soon.
Individual actions: the positions with relatively strong agreement are that it’s okay to pay for AI but not to invest in it, and that it’s okay to be a safety employee at both safer and less safe labs.
Not ok to use AI
Ok to use free AI (but not to pay for AI, unless you offset your payment for AI)
Ok to pay for AI (but not to invest)
Ok to invest in AI companies
Ok to be a safety employee at a safer lab
Ok to be a safety employee at a less safe lab
Ok to be a capabilities employee at a safer lab for career capital/donations
Ok to be a capabilities employee at a less safe lab for career capital/donations
Ok to be a capabilities employee at a safer lab (direct impact)
A capabilities employee at a safer lab could also, say, propose an architecture whose interpretability is worse than CoT but better than neuralese, and whose capabilities per compute fall between CoT and neuralese.
Ok to be a capabilities employee at a less safe lab (direct impact)
Never build AGI (Stop AI)
Alas, this is the proposal that requires coordination with China as well...
Shut AI down for decades until something changes radically, such as genetic enhancement of intelligence
Pause AI now unilaterally (one country)
Pause AI now if it is done globally
Pause AI if there is mass unemployment (say >20%)
Make AI progress very slow (heavily regulate it)
Restrict access to AI to few people (like nuclear)
Pause AI if it causes a major disaster (e.g. like Chernobyl)
I agree that it would be easier (think of the Rogue Replication Timeline). But what if there is no publicly visible Chernobyl? While the AI-2027 forecast has Agent-3 catch Agent-4 and reveal its misalignment to OpenBrain’s employees, even the forecast’s authors doubt that Agent-4 will be caught. If Agent-4 goes uncaught, mankind races ahead without even realizing that the AI could be misaligned.
Ban AI agents
Ban a certain level of autonomous code writing
Ban training above a certain size
Pause AI if AI is greatly accelerating the progress on AI (e.g. 10x)
SB-1047 (liability, etc)
Responsible scaling policy or similar
Neutral (no regulations, no subsidy)
Accelerate AGI in safer lab in US (subsidy, no regulations)
Accelerate ASI in safer lab in US (subsidy, no regulations)
Accelerate AGI in less safe lab in US (subsidy, no regulations)
Accelerate ASI in less safe lab in US (subsidy, no regulations)
Accelerate AGI everywhere (subsidy, no regulations)
Accelerate ASI everywhere (subsidy, no regulations)
Only donating/working on pausing AI is ok
Not sure what this means? What is not okay if you agree-vote this?
This is the extreme deceleration end of the personal action spectrum (so it is not ok to use AI, pay for AI, invest in AI, work at labs, etc).
(This question is self-downvoted to keep it at the bottom.)
Develop methods for user-morality-focused alignment of the kind that open-weights AI users would still want for their decensored models