Then you need a very narrow range of bad understandings: the AI would have to understand that the statement means converting the universe into paperclips, yet fail to understand that it is also implied that you only need as many paperclips as you want, that you don't want quark-sized paperclips, et cetera.
That’s a good point, but once we develop AIs that can cross the gap of understanding, how do you guarantee that no one asks their AI to convert the universe into paperclips, intentionally or not?
I find it really dubious that you could make an AI that would just do in the real world whatever you vaguely ask it to do.
(I’ve made all these arguments before on LessWrong and it doesn’t seem to have done anything. You’re being a lot more patient than I was, though, so perhaps you’ll have better luck.
By the way, The Polynomial is pretty awesome.)