The relevant target is not every individual human but human civilization and its ability to react. If the AI can kill a large enough number of people, it could continue its work unimpeded and kill the rest of us at its leisure. In fact, the AI could destroy civilization’s ability to respond without killing a single person, simply by destroying enough industry and infrastructure that humans could no longer engage in science, engineering, or military action. (A bit like EY’s melt-all-GPUs nanotech concept.)
That said, all of avturchin’s scenarios are either implausible IMO or would require a future with far more automation than we have today.
The relevant target is not every individual human but human civilization and its ability to react
If that’s what he meant, it would have been better to say so explicitly — for example, “these five could cause extinction, and these ten could remove our ability to react.”