I found Brin’s post flowery and worthless. Michael Anissimov correctly points out the problems with it in the comments. (Is he registered on LW, I wonder?)
People seem to have serious problems grasping the idea that AIs are machines that will stupidly want anything we program them to want, not elf-children that will magically absorb our “kindness” and “understanding”. When I first came across Eliezer’s writings about FAI, that point seemed absolutely obvious… I guess David Brin hasn’t read them; that’s about the only possible explanation.
Wait. AIs are neither of the things you say. They’re not elf-children, but they’re not stupid machines either. They’re smart machines, much like humans (but much faster to adapt).
It’s very hard to believe that we will be able to understand the source code of a real AI any better than we understand our own or each other’s. In that sense, yes—finding a way to include “kindness” and “empathy” as meta-values for the bootstrapping machine is exactly what some of us have in mind.
I agree. I find his optimism refreshing (though probably still naive) when applied to humans, where at least there’s some hope of his intuitions working, but I’m not sure that he’s thought about AI deeply.
Yes, Anissimov is registered here.