They do so because they think x-risk, which (if it occurs) involves the death of everyone
I’d prefer you not fixate on literally everyone dying, because it’s actually pretty unclear whether AI takeover would result in everyone dying. (The same applies to misuse risk: bioweapons misuse can be catastrophic without killing literally everyone.)
For discussion of whether AI takeover would lead to extinction see here, here, and here.
I wish there were a short term that clearly emphasizes “catastrophe-as-bad-as-over-a-billion-people-dying-or-humanity-losing-control-of-the-future”.
Just to add a fourth argument that your three don’t cover: the whole idea of extinction is that the ASI has access to the resources of the solar system, and probably the ocean floor and other uninhabited places, yet chooses to kill off the only natural biosphere for its atoms. That’s openly short-sighted, stupid behavior, inconsistent with a system of high intelligence.
Imagine a science fiction scenario where humans can reach nearby stars and find a million dead planets that never had life and exactly one living one. The elements available on the dead planets are the same as on the living one; there’s no unobtainium. It’s a capitalist society.
Does strip-mining the living planet for iron or deuterium or something make economic sense? No, because auctioning the planet to scientists, zoo operators, or wealthy collectors would pay more than strip-mining it.
The naturally evolved life has economic value, even if you have the technology to make it.
Note that the above argument depends on rarity. If it turned out that one in ten solar systems (or more) had a planet with life, some might be strip-mined.
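The rarity argument above can be sketched as a toy value comparison. All numbers here are hypothetical placeholders chosen only to illustrate the crossover, not claims about actual values:

```python
# Toy model: compare the one-off value of strip-mining a living planet
# against selling it intact, as a function of how common living planets are.
# MINING_VALUE and BASE_COLLECTOR_VALUE are arbitrary illustrative numbers.

MINING_VALUE = 1.0  # value of the raw materials; identical for every planet


def intact_value(living_fraction: float, base_value: float = 1e4) -> float:
    """Scarcity value of selling the planet intact.

    base_value is the hypothetical price scientists, zoo operators, or
    collectors would pay when living planets are one-in-a-million; the
    price is assumed to scale inversely with how common living planets are.
    """
    rarity = 1e-6 / living_fraction  # 1.0 at one-in-a-million
    return base_value * rarity


def should_strip_mine(living_fraction: float) -> bool:
    """Mining only makes economic sense if it beats the auction price."""
    return MINING_VALUE > intact_value(living_fraction)


# Exactly one living planet among a million dead ones: selling intact wins.
print(should_strip_mine(1e-6))  # False

# If one in ten systems has life, scarcity value collapses and mining wins.
print(should_strip_mine(0.1))  # True
```

The specific inverse-scaling assumption is just one way to encode “rarity drives the price”; any pricing curve that falls steeply as living planets become common gives the same qualitative conclusion.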
Note also that this is an argument about extinction specifically. An Eliezer-style scenario where the ASI wants to prevent other ASIs from ever existing might involve some level of military attacks by the ASI, with the machine indifferent to human casualties. In the worst case this could mean billions of deaths (say, by triggering a nuclear exchange, since causing a rogue launch of a human ICBM is a very high-leverage form of attack).