so it seems reasonable that an AGI designed to more closely resemble human thought processes would be more amenable to direct value loading: simply structure its built-in processes to roughly match the best of what we know from neuroscience-based psychology, then test and iterate.
I’d be curious to hear your opinion about my recent paper.
Your link is broken (correct version): you need to escape underscores in URLs outside a link with a backslash, see formatting help. (Amusingly, the copy-pasted version in this comment looks to work fine.)
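For example (URL invented purely for illustration): writing http://example.com/my\_file\_name, with a backslash before each underscore, makes the underscores render literally, whereas the unescaped http://example.com/my_file_name can have the text between the underscores eaten by the markdown parser as italics markers.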
Kaj, sorry for the delay. I was on vacation and read your proposal on my phone, but a small touch screen keyboard wasn’t the ideal mechanism to type a response.
This is the type of research I wish MIRI were spending at least half its money on.
The mechanisms of concept generation are critical to human morality. For most people, most of the time, the decision of whether to pursue a course of action is not made by asking whether it is morally justified. Indeed, we collectively spend a good deal of time and money on workforce training to make sure that people in decision-making roles consciously think about these things, something which wouldn't be necessary at all if that were how we naturally operated.
No, we do not tend to naturally think about our principles. Rather, we think within them. Our moral principles are a meta-abstraction of our mental structure, which itself guides concept generation: the things we choose to do comply with our moral principles because, most of the time, only compliant possibilities were generated and considered in the first place. Understanding how this concept generation would occur in a real human-like AGI is critical to understanding how value learning or value loading might actually work in a real design.
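To make that distinction concrete, here is a deliberately toy Python sketch. Every name in it is my own invention for illustration, and representing "values" as a list of predicates is far cruder than anything a real design would do; the point is only the structural difference between morality as a censoring step applied to fully formed candidates, versus morality as something that conditions the generator itself.

```python
# Toy contrast between two ways values can enter a decision loop.
# All names are hypothetical; 'values' is just a list of predicates.

def is_permissible(action, values):
    """Post-hoc moral review of a fully formed candidate action."""
    return all(rule(action) for rule in values)

def filter_then_choose(generate_all, values, score):
    """'Explicit deliberation' model: generate every option first,
    then censor the impermissible ones before choosing."""
    candidates = [a for a in generate_all() if is_permissible(a, values)]
    return max(candidates, key=score, default=None)

def value_shaped_choice(generate_from, values, score):
    """'Thinking within our principles' model: the values condition
    the generator itself, so non-compliant options mostly never
    come to mind and no separate moral check is needed."""
    return max(generate_from(values), key=score, default=None)

# Hypothetical usage:
values = [lambda a: a != "deceive the user"]
options = lambda: ["deceive the user", "answer honestly"]
print(filter_then_choose(options, values, score=len))  # answer honestly
```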
We might even find that we can create a sufficiently human-like intelligence that learns morality the same way we do: by instilling a relatively small number of embodied instincts/drives, and then placing it in a protected learning environment with loving caretakers and patient teachers. Certainly this is what the OpenCog foundation would like to do.
Did you submit this as a research proposal somewhere? Did you get a response yet?
Glad to hear that!
I submitted it as a paper to the upcoming AI and Ethics workshop, where it was accepted to be presented in a poster session. I’m not yet sure of the follow-up: I’m currently trying to decide what to do with my life after I graduate with my MSc, and one of the potential paths would involve doing a PhD and developing the research program described in the paper, but I’m not yet entirely sure of whether I’ll follow that path.
Part of what will affect my decision is how useful people feel that this line of research would be, so I appreciate getting your opinion. I hope to gather more data points at the workshop.
Well, if academic achievement is your goal, I don't think my opinion should carry much weight; I'm an industry engineer (bitcoin developer) who does AI work in my less-than-copious spare time. I don't know how well respected this work would be in academia. To reiterate my own opinion, though: I think it is the most important AGI work we could be doing.
Have you posted to the OpenCog mailing list? You’d find some like-minded academics there who can give you some constructive feedback, including naming potential advisors.
https://groups.google.com/forum/#!forum/opencog
EDIT: Gah, I wish I'd known about that workshop sooner. I'm going to be in Puerto Rico for the Financial Crypto '15 conference, but I could have swung by on my way. Do you know kanzure from ##hplusroadmap on freenode (Bryan Bishop)? He's a like-minded transhumanist hacker in the Austin area. You should meet up while you're there. He's very knowledgeable about what people are working on, and good at providing connections to help people out.
Thanks for the suggestion! I'm not on that mailing list (though I used to be), but I sent Ben Goertzel and another OpenCog guy a copy of the paper. The other guy said he'd read it, but that was all I heard back. I might post it to the mailing list as well.
Thanks, I sent him a message. :)
I would actually post to the list. It’s a pretty big and disparate community there, so you’re likely to get a diverse collection of responses.