I don’t think this is the foremost existential risk because I think it’s reasonable to believe that:
if aligned AGIs (or anything able to destroy the world) exist, the most powerful ones will still be under the control of governments or large companies. Even if the model itself were leaked, anyone with a lot of resources would have an advantage by holding a lot more compute than the average Joe;
if these aligned AGIs were not so superhuman and transformative as to instantaneously destroy the world (which they shouldn’t be, or we’d already be in some kind of singularity anyway), then holding more of them with more compute should constitute a significant defensive advantage.
Yes, in general offence is easier than defence and destruction easier than creation. But what precisely would this hypothetical terrorist or nihilistic madman do to be so unstoppable that nothing else—not even all the other AGIs—stands a chance of countering it? Bioweapons can be fought, contained, countered; even more so if you have lots of artificial smarts on your side. Any attempt at FOOMing should be detectable by sharper intellects, and anyway, if AIs could FOOM that way, the bigger ones likely would have already (for good or ill). Pretty simple measures can be taken to protect things like control over nuclear weapons, and again, aligned government AGIs would be on the front line of defence against any attempts at hacking or the like, so the individual AGI would still find itself outgunned.
So yeah, I think this really falls into the “we must be really stupid and drop the ball to go extinct from this sort of mishap” category. That said, people would still be able to do a lot of damage, and I don’t like what that would do to our society as a whole. Instead of school shooters we’d have the occasional guy who managed to turn a city into nanites before being stopped, or some such insanity. You’d soon have everyone asking for total AGI surveillance to stop that sort of thing, and goodbye freedom and privacy. But I wouldn’t expect extinction from it.