Bit disappointed to see this to be honest: obviously Clippy has to do things no real paperclip maximizer would do, like post to LW, in order to be a fun fictional character—but it’s a poor uFAI++ that can’t even figure out that their programmed goal isn’t what their programmers would have put in if they were smart enough to see the consequences.
But it is what they would put in if they were smart enough to see the consequences. And it’s almost certainly what you would want too, in the limit of maximal knowledge and reflective consistency.
If you can’t see this, it’s just because you’re not at that stage yet.
You seem to think that uFAI would be delusional. No.
No, I think that a Friendly AI would correctly believe that maximizing paperclips is what a human would want in the limit of maximal knowledge and reflective coherence. No “delusion” whatsoever.
Huh again?
What confuses you?
I believe he’s making the (joking) point that since we do not/cannot know what a human would want in the limit of maximal knowledge and reflective coherence (thus CEV), it is not impossible that what we’d want actually IS maximum paperclips.