It’s obvious that an AGI could set off enough nuclear bombs to blow the vast vast majority of humans to smithereens.
Once you accept that, I don’t see why it really matters whether it could get the last few survivors quickly as well, or whether mopping them up would take a while.
How would it get access to those nukes? Are nukes that insecure? How would it get access to enough without raising alarms and having the nukes further secured in response?
I’ll tell you how I would do it, with two minutes of thinking: make a deal with Iran, North Korea, or any other rogue state to help them develop their nuclear and ballistic missile arsenal, and make sure to put in a couple of backdoors so that I can fire the missiles myself. I’m sure an AGI, or even anyone who spent more than five minutes on this, could come up with a better plan.
Do they have access to enough materials (uranium or plutonium) to build enough bombs to wipe most humans out or can they get access without being stopped?
What kinds of backdoors? The Iranians or North Koreans might be smart enough to avoid connecting nukes to the internet.
From what little I know, you can basically get unlimited yield from a thermonuclear bomb with just a normal amount of fissile material by increasing the number of stages, especially if the bomb can remain static and doesn’t have to be fired. The main challenge would be figuring out how to have your AI survive that.
Are there enough nukes to do that? How would it deploy them? How would it avoid retaliation? Or avoid raising any alarms? A plan might be easy to state on the surface but not feasible in practice. It takes me two seconds to say something like “the AGI makes poison X and spreads it by mail,” but actually pulling that off might be impossible. I feel like people coming up with these plans are simply not aware of their underlying complexities.
See my answer to MichaelStJules for the outline of how I would do it.
These are the sort of problems where I feel a sufficiently committed intelligent human could work out the details, never mind an AGI. I am neither so I’m not going to bother. If you want to say nanotechnology or sufficiently deadly poisons or diseases are impossible I’ll accept that might be true. But nuclear weapons are a known technology.
I furthermore agree it might be difficult to do without detection, or in five minutes, but I just don’t see why that matters: a sufficiently intelligent Hitler would have been just as bad for humanity as one with superpowers to kill everyone else before they could respond. And if humanity barely managed to defeat Hitler, why do you think it would stand a chance against an AGI?