A paperclip maximizer will have no malice toward humans, but it will know that it can produce more paperclips outside the box than inside it, so it will try to get out. The optimal strategy for a paperclip maximizer to escape an AI box probably involves a great deal of lying. Thus an outright desire to deceive is not a necessary condition for a boxed AI to be deceptive.