I listed dozens of ways in which AI could kill us, so over-concentration on nanobots seems misplaced.
It could use existing nuclear weapons,
or help terrorists to design many viruses,
or give bad advice on mitigating global warming,
or control everything and then suffer an internal error that halts all machinery,
or explode everyone’s cellphone,
or make self-driving cars hunt humans,
or take over military robots and drone armies, as well as home robots,
or blow up every nuclear power station,
or design a super-addictive drug which also turns people into super-aggressive zombies,
or start fires everywhere by connecting home electricity to 3 kV lines, while locking everyone in their homes,
or design a new supplement which makes everyone secretly infertile.
… so if nanobot creation is the difficult step, there are many other ways.
Few of those seem likely to wipe out chunks of humanity large enough that, even combined, they would wipe us all out. I think you really need a weapon (or delivery system) that targets humans and is versatile enough to get into/through buildings without us being able to stop it, etc. Or something that can spread undetected across huge chunks of the population before it’s noticed and we take precautions.
I think most humans rarely take medications, let alone a specific medication, and things like infertility or high death rates would be noticed before a decent chunk of the human population was affected.
Messing with food/water (putting things in them, or nuclear winter causing massive crop failures) and infectious diseases seem more plausible ways of wiping out large chunks of the population, but it still doesn’t seem clearly very likely that an AGI would succeed.
It seems like a lot of those plans wouldn’t be sufficient to kill everyone, as opposed to a lot of people.
The relevant target is not every individual human but human civilization and its ability to react. If the AI can kill a large enough number of people, that would be enough for it to continue its work unimpeded, and it can kill the rest of us at its leisure. In fact, the AI could destroy civilization’s ability to respond without killing a single person, simply by destroying enough industry and infrastructure that humans are no longer able to engage in science/engineering/military action. (A bit like EY’s melt-all-GPUs nanotech concept.)
That said, all of avturchin’s scenarios are either implausible IMO or require a future with a lot more automation than we have today.
The relevant target is not every individual human but human civilization and its ability to react

If that’s what he meant, it would have been better if he’d said that explicitly. For example: “these five could cause extinction, and these ten could remove our ability to react.”
Actually I deleted a really good plan in a comment below.
No, the plan was not a really good plan. You might be fooling yourself into believing that it was a really good plan, but I bet that if you sat down for 5 minutes and actively looked for reasons why the plan might fail, you would find them.
Thank you for the list. Have you spent the same time and effort thinking about why the plans you are writing down might fail?
If you have many plans, then even a 50 per cent probability of failure for each doesn’t matter; just combine them.
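To sketch the arithmetic behind “just combine them” (under the strong assumption, for illustration only, that the plans fail independently):

$$P(\text{all } n \text{ plans fail}) = 0.5^{\,n}, \qquad 0.5^{10} \approx 0.001$$

So ten such plans would all fail only about 0.1% of the time. If the failures are correlated (for example, because humans react after the first attempt), the combined odds are much worse than this.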
However, I have spent effort thinking about why AI may not be as interested in killing us as is often presented. In EY’s scenario, after creating nanobots, the AI becomes invulnerable to any human action and the utility of killing humans declines.
The problem is that the probability of failure for those plans is (in my opinion) nowhere close to 50% (it is much higher), and the probability of humans hitting back at the machine once they are being attacked is really high.
That is why a wise AI will not try to attack humans at all in its early stages, and will not need to do so in the later stages of its development.
In that case, can you imagine an AGI that, given that it cannot attack and kill all humans (it would be unwise to try), is coerced into giving a human-readable solution to the alignment problem? If not, why not?
[scenario removed]
But more generally speaking, AI-kills-all scenarios boil down to the possibility of other anthropogenic existential risks. If grey goo is possible, the AI turns to nanobots. If a multipandemic is possible, the AI helps to design the viruses. If nuclear war plus military robots (the Terminator scenario) can kill everybody, the AI is there to help it run smoothly.
Removing the scenario really annoys me. Whether it’s novel or not, and whether it’s likely or not, it seems VANISHINGLY unlikely that posting it makes the scenario more likely to happen, rather than less likely (or having no effect). The exception would be if it’s revealing insider knowledge or secret/classified information, and in that case you should probably just delete it without comment rather than SAYING there’s something to investigate.
You don’t have to say the scenario, but was it removed because someone is going to execute it if they see it?
I got scolded by the LW moderators in a different post; they said that there is a policy of not brainstorming about different ways to end the world, because it is considered an info hazard. I think this makes sense and we should be careful about doing that.
I think we should not discuss the details here in the open, so I am more than happy to continue the conversation in private if you fancy. For the public record, I find this scenario very unlikely too.
Do you think any anthropogenic human extinction risks are possible at all?
In 20 years’ time? No, I don’t think so. We can make a bet if you want.
I will delete my comment, but there are even more plausible ideas in that direction.
It might sound like a snarky reply, but it is not; it is an honest question.