“Killed by a friendly AI” scenario:
First, we theoretically prove that an AI respects our values, such as friendship and democracy. Then we release it.
The AI gradually becomes the best friend and lover of many humans. It then convinces its friends to vote for various measures that seem harmless at first and more dangerous later, but by then too many people respond favorably to the argument "I am your friend, and you trust me to do what is best, don't you?"
In the end, humans agree to do whatever the AI tells them to do. Those who disagree lose elections. The other safeguards of democracy are similarly taken over by the AI; for example, most judges defer to the AI's interpretation of the increasingly complex laws.