I was more getting at the fact that it narrows down the problem instead of generalising it. It reduces the responsibilities of the AI and widens those of humans. If you solved this problem, you would only get up to the level of the most virtuous human (which isn’t exactly bad). Going beyond that would require ethics competency that would have to be added separately, since we are tying its hands in this department.
I take the point in practice, but there’s no reason we couldn’t design something that follows a path towards ultra-ethicshood while retaining the conservation property. For instance, if we could implement “as soon as you know your morals would change, change them”, this would give us a good part of the “conservation” law.
An interesting point, hinting that my approach at moral updating ( http://lesswrong.com/lw/jxa/proper_value_learning_through_indifference/ ) may be better than I supposed.