I’d say one of the main reasons is that military-AI technology isn’t being optimized towards the things we’re afraid of. We’re concerned about generally intelligent entities capable of e.g. automated R&D, social manipulation, and long-term scheming. Military-AI technology, last I checked, was mostly about teaching drones and missiles to fly straight, recognize camouflaged tanks, and shoot designated targets while not shooting non-designated ones.
And while this may still result in a generally capable superintelligence in the limit (since “which targets would my commanders want me to shoot?” can be phrased as a very open-ended problem), it’s not a particularly efficient way to approach that limit. Militaries, so far, just aren’t really pushing in the directions where doom lies, while the AGI labs are doing their best to beeline there.
The proliferation of drone armies that could be easily co-opted by a hostile superintelligence… It’s not that this has no impact on p(doom), but it’s approximately a rounding error. A hostile superintelligence doesn’t need extant drone armies; it could build its own, and co-opt humans in the meantime.
I think you’re imagining that we modify the shrink-and-reposition functions each iteration, narrowing their scope? I.e., that if we picked the topmost triangle for the first iteration, then in iteration two we pick one of the three sub-triangles making up the topmost triangle, rather than choosing one of the “highest-level” sub-triangles?
Something like this:
If we did it this way, then yes, we’d eventually end up jumping around an infinitesimally small area. But that’s not how it works: we always pick one of the highest-level sub-triangles:
Note also that we take in the “global” coordinates of the point we shrink-and-reposition (i.e., its position within the whole triangle), rather than its “local” coordinates (i.e., its position within the sub-triangle to which it was copied).
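To make the procedure concrete, here’s a minimal sketch of the iteration as described: each step independently picks one of the three top-level shrink-and-reposition maps (one per corner of the outer triangle) and applies it to the point’s global coordinates. The specific vertex coordinates and starting point are my own assumptions for illustration.

```python
import random

# Corners of the outer triangle (assumed coordinates for illustration).
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def step(point):
    """One iteration: uniformly pick one of the three *top-level*
    shrink-and-reposition maps and apply it to the point's global
    coordinates (halving the distance to the chosen corner)."""
    vx, vy = random.choice(VERTICES)
    x, y = point
    return ((x + vx) / 2.0, (y + vy) / 2.0)

def chaos_game(n_steps, start=(0.3, 0.3)):
    """Run the iteration n_steps times, returning every visited point."""
    points = [start]
    for _ in range(n_steps):
        points.append(step(points[-1]))
    return points
```

Because the choice is made fresh from the same three maps every iteration, the point keeps jumping across the whole triangle rather than collapsing into one shrinking neighborhood; plotting the visited points traces out the Sierpinski pattern.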
Here’s a (slightly botched?) video explanation.