Could you help me understand the relation of this article to mine? My theory doesn’t involve anything like a universally compelling argument, or any universally compelling moral theory. I’m saying, after all, that the basic competence involved in making an intelligent thing friendly was in place several thousand years ago and has nothing to do with moral theorizing (at least not in the sense of philosophical moral theorizing).
You’re right that children are in a state to accept moral instruction, but this is because they’ve been given some prior moral instruction, and so on and so on. There’s no ‘first moment’ of education.
The relation is that you’ve assumed the AI will accept some kind of moral teaching, or any teaching at all for that matter. You cannot nurture a rock into being moral or logical; it has to be created already in motion.
If you are able to create an AI that could be nurtured at all like a human, you would have had to create the basic machinery of morality already, and that’s the hard part. If you had an AI with all the required properties of FAI except a moral goal, you could just say “go do CEV” and it would go do it. (Maybe it would be a bit more complex than that, but it would be the easy part.)
You are way out of your depth with this post. Go read the sequences thoroughly, read the current understanding of the FAI problem, including all the arguments against proposals like yours. If you still think you have something to add then, please do.
No argument there, though I’ve spent a fair amount of time with the sequences. I just found myself with a lot of unanswered questions. I figured a decent way to get those answered would be to post a view and let people respond. So while I appreciate your comment, I would appreciate even more links to the specific sequences you have in mind, and some discussion of their meaning and the quality of their argumentation. That is a great demand on your time, of course, so I couldn’t expect you to humor me in this way. But that’s what the discussion section of this site is for, no?
If you are able to create an AI that could be nurtured at all like a human, you would have had to create the basic machinery of morality already and that’s the hard part.
A premise here is that human beings come with some basic machinery of morality/rationality. I don’t doubt this, but what sort of machinery do you have in mind exactly?
Good points, asking questions about confusing stuff is the first step in the direction of figuring it out!
By the way, I’m not sure what the proper place on LW is to ask questions like “I have a theory, I’m not sure it’s correct, please comment”… things you wouldn’t post as top level, but that would still make an interesting discussion.
(Note: your post also started interesting discussions, and the confusing stuff was hopefully cleared up, but the downvotes are still there for you. Monthly open threads seemed to be OK for that purpose, but (at least for me) they are a little chaotic compared to nice headlines in Discussion. Maybe there should be a “newcomers” section, with downvotes only for attitude but not for content?)
A premise here is that human beings come with some basic machinery of morality/rationality. I don’t doubt this, but what sort of machinery do you have in mind exactly?
See the metaethics stuff, lawful intelligence. If you must choose one, understand the metaethics. I can’t think of any others you have a pressing need to read.
That’s a nice idea, though the downvotes for content do amount to worthwhile feedback.