How I attempted to nutshell it for the RW article on EY:
“Yudkowsky identifies the big problem in AI research as being that there is no reason to assume an AI would give a damn about humans or what we care about in any way at all—not having a million years as a savannah ape or a billion years of evolution in its makeup. And he believes AI is imminent. As such, working out how to create a Friendly AI (one that won’t kill us, inadvertently or otherwise) is the Big Problem he has taken as his own.”
It needs work, but I hope it does justice to the idea in trying to get it across to the general public, or at least to people who are somewhat familiar with SF tropes.