I think this is very fair! In a world where (i) AGI → ASI is super fast; (ii) the military diffusion of ASI is exceptionally quick; and (iii) the marginal cost of scaling offensive capability is extremely low, then any sense of a limited/total war distinction does indeed break down, and ASI will be the defining factor of military capability much, much sooner than we’d expect.
I’m instinctively sceptical of (iii), though, at least for the couple of years after the advent of ASI (the critical juncture for this strategy). In that window, I think the modal outcome still looks like ASIs engaging in routine cyberoperations all the time, autonomously handling aerial warfare, and being fundamental to military operations/planning. But it remains really costly to wage a total war aimed at completely crippling a state such as China: the need to engineer vast numbers of drones/UAVs, the extremely costly development of a superweapon, the cost of securing every datacentre, and so on. Within the period where we have to reckon with the effects of ASI, my guess is that the modal war—even with China—is still more a function of commitment than of military advantage (which makes AGI realist rhetoric a risk amplifier).
I wouldn’t say I’m hugely confident here, though, and I definitely don’t feel well calibrated on how likely this world is where the rapid diffusion of ASI also means very low marginal costs of scaling offensive capabilities. Though in this world, frankly, I don’t think we avoid war at all unless there happen to be strong norms and sentiments against this kind of deployment. I grant that the “maximise our ability to deploy ASI offensively” approach makes sense if the goal is “we must win the eventual war with China”, built on relatively high credence that we’re in this rapid-diffusion, low-marginal-cost world. But given the uncertainty about whether we’re in this world; the potentially catastrophic consequences of war; and the fact that maintaining a competitive advantage isn’t mutually exclusive with equally strong norm-forming against war—the AGI realist rhetoric still makes me uneasy.
But I at least share the sense that no other proposed approach seems great. I’m just conscious that not enough people in the relevant circles seem to be thinking about other approaches at all, because they’ve already bought into a frame I think will only worsen the chances of catastrophe.
I’d say I agree with just about all of that, and I’m glad to see it laid out so clearly!
I just also wouldn’t be hugely surprised if it turns out that something like designing and building remote-controllable, self-replicating, globally deployable nanotech (as one example) is in some sense fundamentally “easy” for even an early ASI/modestly superhuman AGI. Say that’s the case: we build a few for the ASI and distribute them across the world in a matter of weeks, and they do what controlled self-replicating nanobots do. Then after a few months the ASI already has an off switch or sleep-mode button buried in everyone’s brain. My guess is that then none of those hard steps of a war with China come into play.
To be clear, I don’t think this story is likely. But in a broad sense, I am generally of the opinion that most people greatly overestimate how much new data we need to answer new questions or create (some kinds of) new things, and underestimate what can be done with clever use of existing data, even among humans, let alone as we approach the limits of cleverness.