What you’re describing is an evil AI, not just an unFriendly one—unFriendly AI doesn’t care about your values. Wouldn’t an evil AI be even harder to achieve than a Friendly one?
“Where in this code do I need to put this ‘-ve’ sign again?”
The two are approximately equal in difficulty, assuming equivalent flexibility in how “Evil” or “Friendly” it would have to be to qualify for the definition.
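The “-ve sign” quip can be made concrete with a minimal sketch (all names and the toy utility function here are hypothetical, purely for illustration): an agent that maximizes some value-laden utility function U becomes its evil mirror image with nothing more than a negation, which is why the two are comparable in difficulty.

```python
def friendly_objective(outcome_value: float) -> float:
    """Hypothetical stand-in for a utility function encoding human values."""
    return outcome_value

def evil_objective(outcome_value: float) -> float:
    """Identical machinery, one sign flipped -- the '-ve' in question."""
    return -friendly_objective(outcome_value)

outcomes = [0.2, 0.9, -0.5]
best_friendly = max(outcomes, key=friendly_objective)
best_evil = max(outcomes, key=evil_objective)
print(best_friendly, best_evil)  # 0.9 -0.5
```

The hard part in either case is the same: specifying the value function at all. Once you have it, flipping its sign is trivial.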