I argue in Section 1.6 here that an AGI with capabilities similar to those of an ambitious, intelligent, charismatic, methodical human, and with a radically nonhuman motivation system, could very plausibly kill everyone, even leaving aside self-improvement and self-replication.
(The category of “ambitious, intelligent, charismatic, methodical humans who are explicitly, patiently trying to wipe out all of humanity” is either literally empty or close to it, so we have no historical data to reassure us here. And the danger posed by such people increases each year, thanks to advancing technology, e.g. in biotech.)
On the other hand, the category of “ambitious, intelligent, charismatic, methodical humans who are explicitly, patiently trying to wipe out some subgroup of humanity” is definitely not empty, but very few of them have ever succeeded.
The fact that few have succeeded seems a much weaker reason for optimism than the fact that some have succeeded is a reason for pessimism.
I would argue that such an AGI would be more likely to strike against humanity in the short term. An intact humanity would pose a much more serious threat, and a sufficiently powerful AGI would only have to destroy society enough to stop us bothering it. Once that is done, there isn’t really any rush for it to ensure every last human is dead. In fact, the superintelligent AGI might want to keep us alive to study us.