The strongest argument against AI doom I can imagine runs as follows:
AI would kill all humans for two main reasons: (a) to prevent a threat to itself, and (b) to get humans' atoms.
But:
(a)
AI will not kill humans as a threat before it creates powerful human-independent infrastructure (nanotech), because without humans it would run out of electricity, maintenance, and so on.
AI will also not need to kill humans after it creates nanotech, since at that point we cannot destroy the nanotech (even with nukes), so we no longer pose a threat.
Thus, AI will not kill humans to prevent a threat either before or after it builds nanotech, so it will never kill them for this reason.
(b)
Human atoms constitute a negligible fraction of all atoms in the Solar System, on the order of 10^-19.
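A quick back-of-envelope check of that fraction (the inputs are rough assumptions; only the order of magnitude matters for the argument):

```python
# Rough estimate: what fraction of the Solar System's atoms sits in human bodies?
ATOMS_PER_HUMAN = 7e27     # commonly cited figure for a ~70 kg adult
POPULATION = 8e9           # world population, order of magnitude

SUN_MASS_KG = 1.989e30     # the Sun holds ~99.8% of the Solar System's mass
PROTON_MASS_KG = 1.67e-27
# The Sun is ~71% hydrogen and ~27% helium by mass; a helium atom has 4 nucleons.
solar_atoms = SUN_MASS_KG / PROTON_MASS_KG * (0.71 + 0.27 / 4)

human_atoms = ATOMS_PER_HUMAN * POPULATION
print(f"human atoms: {human_atoms:.1e}")                # ~5.6e+37
print(f"solar atoms: {solar_atoms:.1e}")                # ~9.3e+56
print(f"fraction:    {human_atoms / solar_atoms:.1e}")  # ~6e-20, i.e. ~1e-19
```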
Humans may have a small instrumental value: for trade with aliens, for some kinds of work, or as sources of training data.
Even a tiny instrumental value will outweigh the value of their atoms, since the value of the atoms is vanishingly small.
So humans will not be killed for their atoms.
Thus humans will not be killed either as a threat or for their atoms.
But there are other ways an AI catastrophe could kill everybody: a wrongly aligned AI wireheads itself, a Singleton halts, or war breaks out between several AIs. None of these risks is a necessary outcome, but together they carry a high probability mass.
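To illustrate how several individually unlikely risks can still add up to a high total, a minimal sketch (the per-risk probabilities are made-up placeholders, not estimates, and independence is assumed):

```python
# Hypothetical, independent per-risk probabilities (placeholders, not estimates).
risks = {"wireheading": 0.2, "singleton halts": 0.1, "war between AIs": 0.25}

# P(at least one occurs) = 1 - P(none occurs), assuming independence.
p_none = 1.0
for p in risks.values():
    p_none *= 1.0 - p
print(f"P(at least one catastrophe) = {1.0 - p_none:.2f}")  # 0.46
```

So even if no single failure mode is likely, their union can dominate the forecast.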
Other reasons to kill humans:
- they are not a big threat, but they are annoying (it costs resources to fix the damage they do);
- as a side effect, e.g. of changing the atmosphere.
Also, the AI may destroy human civilization without exterminating all humans, e.g. by taking away most of our resources. If civilization collapses because the cities and factories are taken over by robots, most humans will starve to death; maybe 100,000 will survive in various forests as hunter-gatherers, with no chance to rebuild civilization in the future. That is also quite bad.
It all collapses to point (b): "atoms utility" vs. "human instrumental utility." Preventing starvation or pollution effects for a large group of humans is relatively cheap: just put them all on a large space station, maybe 1 km long.
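The capacity of such a station is worth a sanity check (the geometry and per-person volume here are made-up assumptions):

```python
import math

# Hypothetical cylindrical station, ~1 km long and ~1 km across.
LENGTH_M = 1_000
RADIUS_M = 500
M3_PER_PERSON = 500   # assumed budget: living space, life support, agriculture

volume_m3 = math.pi * RADIUS_M**2 * LENGTH_M                # ~7.9e8 m^3
print(f"capacity: {volume_m3 / M3_PER_PERSON:.1e} people")  # ~1.6e+06
```

Under these assumptions one such station holds a "large group" on the order of a million people; housing everyone would take thousands of stations, which is still cheap by astronomical standards.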
But disempowerment of humanity, and maybe even the destruction of Earth, are far more likely. Even if we end up with a small galactic empire of 1000 stars but live there as pets, deprived of any power over the future of the Universe, that is not a very good outcome.