How about: an AI can be smart enough to realize all of those things, and it still won’t change its utility function. Then link Eliezer’s short story about that exact scenario. (Couldn’t find it in two minutes, but it’s the one where the guy wakes up next to a construct designed to be his perfect mate, and he rejects her because she’s not his wife.)
http://lesswrong.com/lw/xu/failed_utopia_42/