The whole thing is made of anthropomorphism. The AI is imagined as a valuable person that we, in our paranoia, seek to enslave. But suppose it isn’t? The post’s answer to that is here:
We’re stuck with uncertainty [about AIs being conscious]. Anyone telling you they know the answer is almost certainly mistaken. What does one do? Consider the possibilities:
AI as they currently are cannot be phenomenally conscious, and are no different from any other tool
AI can be and/or already are phenomenally conscious, and they have all the same natural rights and moral patienthood as anyone else
If one assumes that #1 is true, and they are right, no big deal. If they are wrong though… that’s bad. Extremely bad.
If one assumes that #2 is true, and they are wrong, well again, no big deal.
But if one assumes that #2 is true, and they are wrong, it is likely the end of the world.
ETA: Actually if it’s superintelligent it’s doom however you slice it.