I’ve thought about this too. I was traumatized by reading I Have No Mouth, and I Must Scream as a kid, and I still shudder whenever I think of that story.
I know it is silly, and there is no plausible reason such an evil AI would come into existence. But even so, it really reinforces how awful a world with advanced technology can be (immortality + complete knowledge of psychology/neurology = eternal, perfect suffering). I find that I fear those hell scenarios a lot more than I appreciate the various eutopia scenarios I've seen described. If Omega offered me a ticket to Eutopia with a one-in-a-million chance of winding up in I Have No Mouth, I don't think I would take it.
Maybe it's all the talk about Unfriendly AI here, but Ellison's story was also my first thought in response to the question: what if it's a bad future?