The Three Laws are most decidedly not safe, and in fact should be discarded and discredited. The First Law in particular, "or, through inaction, allow a human being to come to harm", can be trivially interpreted in various bad-end ways. Read The Metamorphosis of Prime Intellect for a fictional example.
I never read the original source, but wasn't the very story that introduced the Three Laws an exercise in discrediting them? If so, how the heck does everyone keep coming to the opposite conclusion? It seems similar to citing 1984 as an argument for ubiquitous surveillance.
Not just the original story: literally hundreds of other stories went on to make the same point. The Three Laws fail in hundreds of unique ways, depending on the situation.
But in the universe Asimov portrayed, those failures were still mostly the exceptions, and the vast majority of robots were safe because of the Three Laws. So his stories weren't really "discrediting those laws" at all.
In multiple cases the failing robot was a newly advanced model, different in kind from the others. Toasters work fine under the Three Laws; even in Terminator the humans are shown using obedient guns rather than insisting on fighting bare-handed.
In other cases, the robot was the same model as well-behaved ones, but a fault made it conscious, or something along those lines.
You're right that the stories can't all be characterized the way I characterized them. There was a lot of variety; Asimov made a career of these stories, and he didn't do it by writing the same one over and over.
Talk about Streisand Effect.