Contra Leicht on AI Pauses


This is going to be a nerdier article than usual. It’s a response to Anton Leicht’s blog post “Press Play To Continue”. I disagree with much of it and think it’s not very well argued.

Going section-by-section, Leicht’s claims are:

  1. AI is going pretty damn well all things considered. The argument has three parts:

    1. It implicitly models pausing AI as resampling from a fixed distribution of possible AI development timelines.

    2. It argues the current situation is better than average, so resampling is a bad idea.

    3. Specifically the arguments that things are going well are:

      1. Minimal compute overhang

      2. Multipolarity

      3. Liberal democracies control the supply chain

  2. Pausing AI isn’t and won’t be popular among centrist politicians, only among the more radical wings of both parties, and this makes it unlikely to get passed.

  3. A “second best” pause is more likely, and would be worse than nothing.

    1. It would be unilateral.

    2. It would not cater to x-risk concerns, and would thus lack critical pieces such as export controls or limits on internal deployment.

  4. Proposals to pause AI don’t expand the Overton window in a helpful way, because:

    1. A radical flank is only effective if there is a moderate faction to benefit from it.

    2. Again, a “second best” pause would be bad.

  5. If you still think we need a pause, you should instead support Leicht’s plan, which is a 3-part progression of:

    1. Transparency and similar “low-hanging fruit”

    2. 3rd party auditing

    3. Unspecified “surgical policy interventions” to address whatever is left after doing (1) and (2)

And that’s basically it… The article is largely a collection of claims and facts, rather than arguments. For instance, the contents of the introduction (which I didn’t cover above) are basically:

  • Bernie Sanders and AOC are proposing a datacenter moratorium. FACT.

  • “Advocates for such a move are Luddites, of course”—name calling, never followed up on.

  • “A pause of some kind is something that some policymakers are asking for. Worse, it’s something a government could enact.”—Leicht is essentially conceding the main point the article claims to argue against. This framing makes sense only if a pause is destined to turn out badly, as he later argues.

  • “Even if you are principally and perhaps exclusively concerned with reducing catastrophic risks, you should oppose the notion of a pause.”—Leicht never properly engages with the worldview of “ardent safetyists” until the final section, where he offers policy solutions that are clearly inadequate by their standards. The whole reason people are proposing a pause is because they consider such proposals inadequate.

I’ll now respond to claims and sections in detail; my responses are in bold.

Response to the arguments

  1. AI is going pretty damn well all things considered. The argument has three parts:

    1. It implicitly models pausing AI as resampling from a fixed distribution of possible AI development timelines.

      1. The point of a pause is to buy time to improve the situation, not to resample. Modeling it as resampling is silly.

    2. It argues the current situation is better than average, so resampling is a bad idea.

      1. The arguments listed below are an incomplete list of considerations. For instance, we might also note that alignment isn’t solved, and that the US government seems to be extremely accelerationist and not particularly concerned with how AI could systematically disempower most Americans.

    3. Specifically the arguments that things are going well are:

      1. Minimal compute overhang

        1. The compute overhang doesn’t really matter that much. The accelerations in progress that will come from taking humans out of the loop dwarf compute overhang considerations.

      2. Multipolarity

        1. I’m not convinced this is a good thing, as it drives race dynamics.

      3. Liberal democracies control the supply chain

        1. This implicitly takes the view that “who wins the race” is very important. I and most other “ardent safetyists” reject this view because we think nobody will “win the race” in a relevant sense: the race probably ends in extinction.

  2. Pausing AI isn’t and won’t be popular among centrist politicians, only among the more radical wings of both parties, and this makes it unlikely to get passed.

    1. This basically assumes that the political climate won’t change much, in the context of a rapidly changing political climate. AI is growing in salience faster than any other issue, according to Blue Rose’s research.

    2. It says centrist policy is more likely to “succeed”, i.e. to become law. But the relevant measure of success is whether a policy actually solves the problem.

    3. Finally, it assumes that policymakers will not, themselves, be highly concerned with addressing AI x-risk. But the goal is to have widespread acknowledgment of, and concern about, the risk, including among policymakers.

  3. A “second best” pause would be worse than nothing.

    1. It would be unilateral

      1. I expect the US national security establishment to strongly oppose a unilateral pause, and the president to either veto it, or work to negotiate an international pause instead.

    2. It would not cater to x-risk concerns, and would thus lack critical pieces such as export controls or limits on internal deployment.

      1. Leicht seems to be imagining a situation where “ardent safetyism” has enough power to popularize the idea of a pause, but not enough to influence how it is implemented. But regardless, some “ardent safetyists” would be happy to take the slow-down from a datacenter moratorium, even if it lacks other pieces.

  4. Proposals to pause AI don’t expand the Overton window in a helpful way, because:

    1. A radical flank is only effective if there is a moderate faction to benefit from it

      1. Leicht seems to assume that arguing for an international pause only creates support for specific, watered-down versions of that policy (e.g. a domestic pause), rather than general awareness of the risks of AI and support for policies that address those risks. In particular, if AI x-risk concerns have power, I think a second-best pause is likely to look like compute governance via hardware-enabled mechanisms.

    2. Again, a “second best” pause would be bad

      1. (Covered above)

  5. If you still think we need a pause, you should instead support Leicht’s plan, which is a 3-part progression of:

    1. Transparency and similar “low-hanging fruit”

    2. 3rd party auditing

    3. Unspecified “surgical policy interventions” to address whatever is left after doing (1) and (2)

As I mentioned, these are clearly inadequate by my lights. Isn’t one of the main purposes of transparency and auditing to find out whether an AI system is too dangerous? And what then? Advocates for policies short of a global pause never seem to address the question: “what happens if you later become convinced (as I already am) that a global pause is the only way to reduce the risk to an acceptable level?” The “maybe pause later” position seems incoherent. There is not and has never been time to fuss around with these weak-sauce policy asks; we’ve already wasted 3 years.

General reflections

Leicht’s arguments make sense given certain background views. But these are not the views of the people he purports to be arguing against! There is a frustrating move, made by Leicht as well as others such as Dean Ball and the AI Snake Oil duo, that says “I’m not dismissing the possibility of AI causing human extinction”, but then goes on to treat it as a rounding error in the policy discussion. If you’re doing this, you’re not really engaging with the conversation others are having that takes the risk seriously, and you shouldn’t pretend otherwise.

Leicht also seems to be imagining that the political climate won’t change much, and in particular, that AI x-risk won’t become a popular concern. Advocates for a pause are betting that these things (or similar enough things) happen. So far they seem to be on the winning side. Leicht’s concern is basically that AI x-risk concerns will have precisely enough power to mess things up, but not enough to have their demands met. This seems like a somewhat niche, galaxy-brained concern, and I struggle to imagine worlds that look like this. I think Leicht made similar mistakes in his previous blog post “Don’t Build An AI Safety Movement”, which seemed to assume that “AI safety” could instigate a movement, but would lose control over it if it did. In reality, an anti-AI movement is happening whether he likes it or not, and the winning move is to get involved and try to steer it in productive directions.