Why does Eliezer dislike the paperclip maximizer thought experiment?
Numerous times I have seen him correct people about it and say it wasn’t originally about a totalizing paperclip factory; it was about an AI that wants to make little squiggly lines for inscrutable reasons. Why does the distinction matter? Both scenarios are about an AI that does something very different from what you want and ends up killing you.
My guess, although I’m not sure about this, is that the paperclip factory is an AI that did as instructed, but its instructions were bad and it killed everyone, whereas the squiggly-line thing is about an AI not doing what you want at all. And perhaps the paperclip factory scenario could mislead people into believing that all you have to do is make sure the AI understands what you want.
FWIW I always figured the paperclip maximizer would know that people don’t want it to turn the lightcone into paperclips, but it would do it anyway, so I still thought it was a reasonable example of the same principle as the squiggly-lines AI. But I can see how that conclusion requires two steps of reasoning whereas the squiggly-lines scenario only requires one step. Or perhaps the thing that Eliezer thinks is wrong with the paperclip-maximizer scenario is something else entirely.
The difference is between “making sure the AI does the task you pointed it at” and “making sure the task you pointed it at doesn’t kill you”. This distinction goes all the way back to the 2004 Yudkowsky paper where he introduced CEV as a proposal to tackle the second problem.