In contrast to Eliezer, I think it's (remotely) possible to train an AI to reliably recognize the human mind states underlying expressions of happiness. But even that would not imply that the machine's primary, innate emotion is unconditional love for all humans. The machines would merely be addicted to watching happy humans.
Personally, I’d rather not be an object of some quirky fetishism.
Monty Python, of course, realized this long ago:
http://www.youtube.com/watch?v=HoRY3ZjiNLU http://www.youtube.com/watch?v=JTMXtJvFV6E
I strongly second Marcello here. When you wrote in CFAI that "The fact that a subgoal is convergent [...] doesn't lend the subgoal magical powers in any specific goal system," that settled the matter in a single sentence. Why the long, "lay audience" posts now, eight years later?