Mike Blume: Do you claim that the CEV of a pygmy father would assert that his daughter’s clitoris should not be sliced off? Or that the CEV of a petty thief would assert that he should not possess my iPod?
Mike, a coherent extrapolated volition is generally something you do with more than one extrapolated volition at once, though I suppose you could extrapolate a single human’s volition into a spread of outcomes and look for coherence in the spread. But this level of metaethics is of interest primarily to FAIfolk, I would think.
With that said, if I were building a Friendly AI, I would probably be aiming to construe ‘extrapolated volitions’ across at least the same kind of gaps that separate Archimedes from the modern world. Whether you can do this on a strictly individual extrapolation—whether Archimedes, alone in a spacesuit and thinking, would eventually cross the gap on his own—is an interesting question.
At the very least, you should imagine the pygmy father having full knowledge of the alternate lives his daughter would lead, as though he had lived them himself—though that might or might not imply full empathy, it would at the least imply full knowledge.
And at the very least, imagine the petty thief reading through everything ever written in the Library of Congress, including everything ever written about morality.
This advice is hardly helpful in day-to-day moral reasoning, of course, unless you’re actually building an AI with that kind of extrapolative power.
Vladimir Nesov: ‘Same moral arguments as before’ doesn’t seem like an answer, in the same sense that ‘you should continue as before’ is not good advice for cavemen (who could benefit from being brought into modern civilization). If cavemen can vaguely describe what they want from their environment, this vague description can be used to produce an optimized environment by a sufficiently powerful optimization process that is external to the cavemen...
At this point you’re working with Friendly AI. Then, indeed, you have legitimate cause to dip into metaethics and make it a part of your conversation.