Apologies to Hanson, Breazeal, Yudkowsky, and SIAI for paraphrasing their complex philosophies so succinctly, but to my point: these people are essentially saying intelligent machines can be okay as long as the machines like us. Isn’t that the Three Laws of Robotics under a new name? Whether it’s slave-like obedience or child-like concern for their parents, we’re pinning our hopes on the belief that intelligent machines can be designed such that they won’t end humanity. That’s a nice dream, but I just don’t see it as a guarantee.
I don’t think anyone is presenting any guarantees at this stage.