(posting this on Substack + LW): Thanks for taking the time to respond! I'll be somewhat selective in my response to avoid getting too deep into hashing out background views, but I've picked out three of your responses that seemed relevant notwithstanding our background disagreements.
> The point of a pause is to buy time to improve the situation, not to resample. Modeling it as resampling is silly.
I think this is fair, and my case for why pausing is a (biased) resampling is not well-developed in the piece. I have a few reasons for believing this. First, my understanding is that while it might be practically possible to maintain current-level AI performance on the existing compute infrastructure, it seems much less likely that the current financial and corporate infrastructure would survive something as chilling to the capex/build-out-driven market as a pause. I’d expect AI developers to take a substantial hit, some compute to get repurposed, and of course, as per your design, development within this specific paradigm to get a lot slower. But I don’t think that stops the overall pursuit of advanced AI: actors know this is in principle feasible now, they’ll want the revenue or strategic capability, and so I’m confident lots of motivated actors will look for other ways to achieve advanced systems (maybe even ones that by design avoid the eyes of a pause movement).
My sense is that this puts us back to the square we were on before the current paradigm emerged: ‘AGI in a cave’ becomes more likely again. All the contingent elements I note no longer apply to that new push towards advanced AI: America doesn’t get AGI-pilled first, government-first development is no longer out of the question, and the post-pause design spec is necessarily less dependent on a complex multinational compute supply chain.
>I expect the US national security establishment to strongly oppose a unilateral pause, and the president to either veto it, or work to negotiate an international pause instead.
That seems to me like an incredibly specific prediction about the behaviour of the natsec establishment: it (a) assumes this establishment is powerful enough to do that; but (b) either not powerful or not interested enough to stop the pause outright; but (c) still powerful enough to delay the actual commencement of a politically desirable pause by months or perhaps years while negotiations for a comprehensive, verifiable, MIRI-style pause agreement are going on. I think all of these are very unreliable predictions, especially under this administration, and they leave only a very narrow band of political success scenarios open to you: the movement must be strong enough to smash through all the roadblocks you think otherwise exist for good safety policy and drive this ask home with high urgency, but not powerful enough to keep the natsec apparatus from delaying it while it starts negotiating with other countries. (The same argument applies to the President, but since he’s arguably responsive to political demands in a way that’s highly correlated with legislators, I don’t see a scenario where you convince 60 senators but not the President, so I’m not sure the veto story applies.)
This is really important, because this is the objection that engages with what I think are the most important arguments of the piece, i.e. that the coalition logic will make a unilateral pause exceedingly attractive. This is also my response to your ‘second-best’ version of the pause with verifiable export controls. That’s also quite hard to do, not at all necessary to satisfy prosaic anti-AI sentiment, and takes a long while to set up technically and politically. Betting on outside influence to fix that problem seems very risky to me, especially since, once you’re at the point where you realise there actually is no natsec intervention to make the pause international, it’ll be too late to stop it.
>Leicht seems to be imagining a situation where “ardent safetyism” has enough power to popularize the idea of a pause, but not to influence how it is implemented. But regardless, some “ardent safetyists” would be happy to take the slow-down from a datacenter moratorium, even if lacks other pieces. (...) So far they seem to be on the winning side. Leicht’s concern is basically that AI x-risk concerns will have precisely enough power to mess things up, not enough to have their demands met. (...) In reality, an anti-AI movement is happening whether he likes it or not, and the winning move is to get involved to try and steer it in productive directions.
I strongly agree with the first half of that last sentence, in that I think this movement will happen either way. I just disagree about attaching x-risk concerns to that movement (because I believe, as argued in the two pieces you’ve linked, that this attachment doesn’t do much good: the realistic amount of steering you’ll be able to do doesn’t get you anywhere close to the policies that would substantively help). I know you disagree with that, but on that view, I think your characterisation of ‘precisely enough power to mess things up’ makes more sense: what I’m worried you’re messing up is not the anti-AI movement in general, but the x-risk position. It would make sense to me that in a lot of worlds, x-risk advocates are powerful enough to make their own position worse, but not powerful enough to precisely change broader movements. Depending on how precise the necessary change is (I think quite precise!), it also seems very plausible that you’d be powerful enough to play a minor but significant role in popularising this movement, but not to change it to adopt very costly policy objectives (like the international part of the pause).