My intuitive feeling is that this makes too many unscientific assumptions that it does not concretely support. The main issue I have is the dismissal of the techniques for killing off humanity. While it is true that it is by definition impossible to reason as a superhuman entity, it is still the case that a rogue AI would only have a limited set of tools with which to attempt a mass extinction. Wouldn't the first step be to cite a counterexample strategy that an AI could use?
The authors think these are not existential only because “we have concluded that a single nuclear winter is unlikely to be an extinction risk”. Hardly comforting and, as argued above, existentially unconvincing.
I fail to see exactly where this is argued above. Loss of control does not make wiping humanity off the Earth with nukes any easier, and the objection also moves the goalposts. RAND has deliberately chosen a limited scope that focuses on extinction-level techniques, and I don't think they are trying to offer any comfort regarding the horrors of nuclear war.
While I am critical of the applicability of AI research to robotics, I agree that RAND's assumption that robots need much more development before they can be used to spread pathogens is probably wrong. It is likely that drone technology has already reached such a level, and that an advanced AI could control a global swarm of them, either directly or through human actors, to spread disease in combination with some sort of supply-chain attack.
Appreciate your comment. Loss of control does make killing all humans easier, doesn't it? Once someone or something has control (sovereignty) over a population, by definition it can do whatever it wants. For example, it could demand that part of the population kill the other part, or ask a (tiny) part of the population to create weapons (possibly under a bogus pretext) and use them against the entire population. Even with low tech, it is easy to kill off a population once you have control (sovereignty), as many historical genocides have demonstrated. With high tech, it becomes trivial. Note there's no hurry: once we've lost control, that will likely remain the case, so an AI would have billions of years to carry out whatever plan it wants.