See my answer to MichaelStJules for the outline of how I would do it.
These are the sort of problems where I feel a sufficiently committed, intelligent human could work out the details, never mind an AGI. I am neither, so I'm not going to bother. If you want to say nanotechnology or sufficiently deadly poisons or diseases are impossible, I'll accept that might be true. But nuclear weapons are a known technology.
I furthermore agree it might be difficult to do without detection or in 5 minutes, but I just don't see why it matters: a sufficiently intelligent Hitler would have been just as bad for humanity as one with superpowers to kill everyone else before they could respond. And if humanity was barely able to defeat Hitler, why do you think it would stand a chance against an AGI?