it may be useful to briefly list some of the ways that an AI pause, or efforts to bring about such a pause, could have undesirable effects (aside from simply delaying the arrival of the benefits that successful AGI could bring)
I think this section is mostly just strawmen.
The pause occurs too early. People conclude that it was pointless, and become less willing to pause later when it would have been useful.
This might’ve been true of a Pause in 2023 (in hindsight), but it seems unlikely for a Pause in 2026.
The call for a pause results in poorly designed or incomplete regulation… Compliance and box-ticking crowd out substantive work on risk reduction… Work may be driven underground, or shift towards less scrupulous actors or less cooperative states… The pause has an exemption for national security, pushing AI activities away from the civilian into the military sector… An international agreement is reached on pausing, but this creates a prisoner’s dilemma in which some parties cheat
These are all just saying that a Pause might be unsuccessful. In that case we will have failed (by not actually getting a Pause), but that doesn’t mean that accelerating isn’t strictly worse in terms of risk.
A pause is implemented, leading to economic recession and general pessimism and lowered hopes for the future. People see the world more as a zero-sum battle for a limited set of resources, increasing conflict and tribalism.
This is just a non sequitur. Our whole global economy is very far from being dependent on (proto-)AGI. How did we ever manage to have economic prosperity before AI?
A pause prolongs the period during which the world is exposed to dangers from applications of already developed levels of AI (and to risks independent of AI), which more advanced AI could have helped mitigate.
Why is more advanced AI more likely to help than just cause more risk? This is just blind faith, when no one has the faintest idea how to solve superalignment or ASI control.
To enforce a pause, a strong control apparatus is created. The future shifts in a more totalitarian direction
Controlling nuclear material, chemical and biological weapons (and hell, even conventional weapons in most countries) hasn’t given us global totalitarianism. Right now, controlling large data centers isn’t any more difficult than that.
When the pause is eventually lifted, there is a massive compute and/or algorithm overhang that leads to explosive advances in AI that are riskier than if AI had advanced at a steadier pace throughout.
This, again, is a failed Pause. A successful Pause would prevent (and taboo) further compute and algorithmic progress. And any real Pause would not be lifted until there is a consensus on safety.
The world will also not have had the opportunity to learn from and adapt to living with weaker AI systems.
We are already doing that now. Those systems are not going away under a Pause on further development.
Attitudes towards AI become polarized to such an extent as to make constructive dialogue difficult and destroy the ability of institutions to pass nuanced adaptive safety policy.
Nick, you really aren’t helping with this paper.
The push for a pause galvanizes supporters of AI to push back. Leading AI firms and AI authorities close ranks to downplay risk, marginalizing AI safety researchers and policy experts concerned with AI risk, reducing their resourcing and influence.
This is already happening. The blame can’t be placed on the Pausers.
A pause, initially sold as a brief moratorium to allow social adjustments and safety work to catch up, calcifies into a de facto permaban that prevents the immense promise of superintelligence from ever being realized—or is indefinitely extended without ever being formally made permanent.
Any realistic global Pause is never going to be of a fixed, pre-determined length. It will be lifted when there is global consensus on lifting it (either through consensus on safety, or consensus on taking the risk).