Clippy can simultaneously present in one account as a paperclip maximizer, and in another as a human.
(nods) I stand corrected… that is a far better solution from Clippy’s perspective, as it actually allows Clippy to experimentally determine which approach generates the most paperclips.
Or, of course, Clippy might be programmed to achieve vis aims solely through honest communication.
The question would then arise as to whether Clippy considers honest communication to be a paperclip-maximizing sort of thing to do, or if it’s more like akrasia—that is, a persistent cognitive distortion that leads Clippy to do things it considers non-paperclip-maximizing.