This is going to be a nerdier article than usual. It’s a response to Anton Leicht’s blog post “Press Play To Continue”. I disagree with much of it and think it’s not very well argued.
Going section-by-section, Leicht’s claims are:
AI is going pretty damn well all things considered. The argument has two parts:
It implicitly models pausing AI as resampling from a fixed distribution of possible AI development timelines.
It argues the current situation is better than average, so resampling is a bad idea.
Specifically the arguments that things are going well are:
Minimal compute overhang
Multipolarity
Liberal democracies control the supply chain
Pausing AI isn’t and won’t be popular among centrist politicians, only among the more radical wings of both parties, and this makes it unlikely to get passed.
A “second best” pause is more likely, and would be worse than nothing.
It would be unilateral.
It would not cater to x-risk concerns, and would thus lack critical pieces such as export controls or limits on internal deployment.
Proposals to pause AI don’t expand the Overton window in a helpful way, because:
A radical flank is only effective if there is a moderate faction to benefit from it.
Again, a “second best” pause would be bad.
If you still think we need a pause, you should instead support Leicht’s plan, which is a 3-part progression of:
(a) Transparency and similar “low-hanging fruit”
(b) 3rd party auditing
(c) Unspecified “surgical policy interventions” to address whatever is left after doing (a) and (b)
And that’s basically it… The article is largely a collection of claims and facts, rather than arguments. For instance, the contents of the introduction (which I didn’t cover above) are basically:
Bernie Sanders and AOC are proposing a datacenter moratorium. FACT.
“Advocates for such a move are Luddites, of course”—name-calling, never followed up on.
“A pause of some kind is something that some policymakers are asking for. Worse, it’s something a government could enact.”—Leicht is essentially conceding the main point the article claims to argue against. This only makes sense if pausing is destined to turn out badly, as he later argues.
“Even if you are principally and perhaps exclusively concerned with reducing catastrophic risks, you should oppose the notion of a pause.”—Leicht never properly engages with the worldview of “ardent safetyists” until the final section, where he offers policy solutions that are clearly inadequate by their standards. The whole reason people are proposing a pause is that they consider such proposals inadequate.
I’ll now respond to claims and sections in detail; my responses are in bold.
Response to the arguments
AI is going pretty damn well all things considered. The argument has two parts:
It implicitly models pausing AI as resampling from a fixed distribution of possible AI development timelines.
**The point of a pause is to buy time to improve the situation, not to resample. Modeling it as resampling is silly.**
It argues the current situation is better than average, so resampling is a bad idea.
**The arguments listed below are an incomplete set of considerations. For instance, we might also note that alignment isn’t solved, and that the US government seems to be extremely accelerationist and not particularly concerned with how AI could systematically disempower most Americans.**
Specifically the arguments that things are going well are:
Minimal compute overhang
**The compute overhang doesn’t really matter that much. The accelerations in progress that will come from taking humans out of the loop dwarf compute overhang considerations.**
Multipolarity
**I’m not convinced this is a good thing, as it drives race dynamics.**
Liberal democracies control the supply chain
**This implicitly takes the view that “who wins the race” is very important. I and most other “ardent safetyists” reject this view because we think nobody will “win the race” in a relevant sense: the race probably ends in extinction.**
Pausing AI isn’t and won’t be popular among centrist politicians, only among the more radical wings of both parties, and this makes it unlikely to get passed.
**This basically assumes that the political climate doesn’t change that much, in the context of a rapidly changing political climate… AI is growing in salience faster than any other issue, according to Blue Rose’s research.**
**It says centrist policy is more likely to “succeed”, i.e. become law. But the relevant measure of success is whether a policy actually solves the problem.**
**Finally, it assumes that policymakers will not, themselves, be highly concerned with addressing AI x-risk. But the goal is to have widespread acknowledgment of, and concern about, the risk, including among policymakers.**
A “second best” pause would be worse than nothing.
It would be unilateral.
**I expect the US national security establishment to strongly oppose a unilateral pause, and the president to either veto it, or work to negotiate an international pause instead.**
It would not cater to x-risk concerns, and would thus lack critical pieces such as export controls or limits on internal deployment.
**Leicht seems to be imagining a situation where “ardent safetyism” has enough power to popularize the idea of a pause, but not to influence how it is implemented. But regardless, some “ardent safetyists” would be happy to take the slow-down from a datacenter moratorium, even if it lacks other pieces.**
Proposals to pause AI don’t expand the Overton window in a helpful way, because:
A radical flank is only effective if there is a moderate faction to benefit from it.
**Leicht seems to assume that arguing for an international pause only creates support for specific, watered-down versions of that policy (e.g. a domestic pause), rather than awareness of the risks of AI and support for policies that address those risks generally. In particular, if AI x-risk concerns have power, I think a second-best pause is likely to look like compute governance via hardware-enabled mechanisms.**
Again, a “second best” pause would be bad.
**(Covered above)**
If you still think we need a pause, you should instead support Leicht’s plan, which is a 3-part progression of:
(a) Transparency and similar “low-hanging fruit”
(b) 3rd party auditing
(c) Unspecified “surgical policy interventions” to address whatever is left after doing (a) and (b)
**As I mentioned, these are clearly inadequate by my lights. Isn’t one of the main purposes of transparency and auditing to find out whether an AI system is too dangerous? And what then? Advocates for policies short of a global pause never seem to address the question: what happens if you later become convinced (as I already am) that a global pause is the only way of reducing the risk to an acceptable level? The “maybe pause later” position seems incoherent. There is not and has never been time to fuss around with these weak-sauce policy asks; we’ve already wasted three years.**
General reflections
Leicht’s arguments make sense given certain background views. But these are not the views of the people he purports to be arguing against! There is a frustrating move, made by Leicht as well as others such as Dean Ball and the AI Snake Oil duo, that says “I’m not dismissing the possibility of AI causing human extinction” but then goes on to treat it as a rounding error in the policy discussion. If you’re doing this, you’re not really engaging with the conversation others are having that takes the risk seriously, and you shouldn’t pretend otherwise.
Leicht also seems to be imagining that the political climate doesn’t change much, and in particular, that AI x-risk doesn’t become a popular concern. Advocates for a pause are betting that these things (or similar enough things) happen. So far they seem to be on the winning side. Leicht’s concern is basically that AI x-risk concerns will have precisely enough power to mess things up, but not enough to have their demands met. This seems like a niche, galaxy-brained concern, and I struggle to imagine worlds that look like this. I think Leicht made similar mistakes in his previous blog post “Don’t Build An AI Safety Movement”, which seemed to assume that “AI safety” could instigate a movement, but would lose control over it if it did. In reality, an anti-AI movement is happening whether he likes it or not, and the winning move is to get involved and try to steer it in productive directions.
Doesn’t the current data centre moratorium bill already have a clause about maintaining/enacting export controls?
Edit: Yes, see this summary from Sanders’ official website
(posting this on Substack + LW): Thanks for taking the time to respond! I’ll be somewhat selective in my response to avoid getting too deep into hashing out background views, but I’ve picked out three responses that seemed relevant, background disagreements notwithstanding.
> The point of a pause is to buy time to improve the situation, not to resample. Modeling it as resampling is silly.
I think this is fair, and my sense of why pausing is a (biased) resampling is not well-developed in the piece. I have a few reasons for believing this. First, my understanding is that while it might be practically possible to sustain current-level AI performance on the existing compute infrastructure, it seems a lot less likely that the current financial and corporate infrastructure would survive something as chilling to the capex/build-out-driven market as a pause. I’d expect AI developers to take a substantial hit, some compute to get repurposed, and, of course, as per your design, development within this specific paradigm to get a lot slower. But I don’t think that stops the overall pursuit of advanced AI: motivated actors know this is in principle feasible now, they’ll want the revenue or strategic capability, and so I’m confident lots of them will look at other ways to achieve advanced systems (maybe even ones that by design avoid the eyes of a pause movement).
My sense is that this puts us back to the square we were on before the current paradigm emerged: ‘AGI in a cave’ becomes more likely again. All the contingent elements I note no longer apply to that new push towards advanced AI: America doesn’t get AGI-pilled first, government-first development is no longer out of the question, and the post-pause design spec is necessarily less dependent on a complex multinational compute supply chain.
> I expect the US national security establishment to strongly oppose a unilateral pause, and the president to either veto it, or work to negotiate an international pause instead.
That seems to me like an incredibly specific prediction about the behaviour of the natsec establishment: it assumes this establishment is (a) powerful enough to do that; but (b) either not powerful or not interested enough to stop the pause outright; but (c) still powerful enough to delay the actual commencement of a politically desirable pause by months or perhaps years while negotiations for a comprehensive, verifiable, MIRI-style pause agreement are going on. I think all of these are very unreliable predictions, especially under this administration, and they leave only a very narrow band of political success scenarios open to you: ones where this movement is strong enough to smash its way through all the roadblocks you think otherwise exist for good safety policy and drive this ask home with high urgency, but then not powerful enough to keep the natsec apparatus from delaying it while they start negotiating with other countries. (The same argument applies to the President, but he’s arguably more responsive to political demands in a way that’s highly correlated with legislators, so I’m not sure I see a scenario where you convince 60 senators but not the President to do this; hence I’m not sure the veto story applies.)
This is really important, because this is the objection that engages with what I think are the most important arguments of the piece, i.e. that the coalition logic will make a unilateral pause exceedingly attractive. This is also my response to your ‘second-best’ version of the pause with verifiable export controls: that’s also quite hard to do, not at all necessary to satisfy prosaic anti-AI sentiment, and takes a long while to set up technically and politically. Betting on outside influence to fix that problem seems very risky to me, especially since, once you’re at the point where you realise there actually is no natsec intervention to make the pause international, it’ll be too late to stop it.
> Leicht seems to be imagining a situation where “ardent safetyism” has enough power to popularize the idea of a pause, but not to influence how it is implemented. But regardless, some “ardent safetyists” would be happy to take the slow-down from a datacenter moratorium, even if it lacks other pieces. (...) So far they seem to be on the winning side. Leicht’s concern is basically that AI x-risk concerns will have precisely enough power to mess things up, but not enough to have their demands met. (...) In reality, an anti-AI movement is happening whether he likes it or not, and the winning move is to get involved and try to steer it in productive directions.
I strongly agree with the first half of that last sentence in that I think this movement will happen either way. I just disagree about attaching x-risk concerns to that movement (because I believe, as argued in the two pieces you’ve linked, that this attachment doesn’t do much good: the realistic amount of steering you’ll be able to do doesn’t get you anywhere close to the policies that would substantively help). I know you disagree with that, but on that view, I think your characterisation of ‘precisely enough power to mess things up’ makes more sense: what I’m worried you’re messing up is not the anti-AI movement in general, but the x-risk position. It would make sense to me that in a lot of worlds, x-risk advocates are powerful enough to make their own position worse, but not powerful enough to precisely change broader movements. Depending on how precise the necessary change is (I think quite precise!), I also think it’s very plausible that you’d be powerful enough to play a minor but significant role in popularising this movement, but not to change it to adopt very costly policy objectives (like the international part of the pause).