It seems like a lot of those plans wouldn’t be sufficient to kill everyone, as opposed to a lot of people.
The relevant target is not every individual human but human civilization and its ability to react. If the AI can kill a large enough number of people, it can continue its work unimpeded and kill the rest of us at its leisure. In fact, the AI could destroy civilization’s ability to respond without killing a single person, simply by destroying enough industry and infrastructure that humans can no longer engage in science, engineering, or military action. (A bit like EY’s melt-all-GPUs nanotech concept.)
That said, all of avturchin’s scenarios are, IMO, either implausible or dependent on a future with a lot more automation than we have today.
The relevant target is not every individual human but human civilization and its ability to react.

If that’s what he meant, it would have been better if he’d said so explicitly: for example, “these five scenarios could cause extinction, and these ten could remove our ability to react.”
Actually, I deleted a really good plan in a comment below.
No, the plan was not a really good plan. You might be fooling yourself into believing that it was, but I bet that if you sat down for five minutes and actively looked for reasons why the plan might fail, you would find them.