Note that “punt to a human” isn’t just infeasible, it’s undesirable, unless you can choose a human that gives the answer you want. Why do FAI folks treat humanity as the baseline, when it’s clear that many (and perhaps all) humans are not friendly to start with?
I’m starting to consider whether the idea of individuality strongly implies that there can be no objective agreement on some ethical topics, and therefore that conflict simply has to be included in any ethics that accepts independence of agent beliefs.
With that in mind, the problem wouldn’t be errors in aggregation; it’s the accepted (and even valued; memicide is repugnant at first glance, though I’m not sure I can say that anything is strictly forbidden in all cases) fact that agents’ preferences cannot be aggregated. You then hit Arrow’s Theorem and have no choice but to decide which of the desirable aggregation properties to give up.
I suggest the “There are several voting systems that side-step these requirements by using cardinal utility (which conveys more information than rank orders)” solution.
Fair point, and I now see that this is exactly what the post is about: assuming we have cardinal measurements, the problem becomes choosing weights in order to make interpersonal utility comparisons.
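To make the weighting problem concrete, here is a minimal sketch (not from the original discussion; the agents, utilities, and the `aggregate` helper are all hypothetical): cardinal utilities are summed with explicit per-agent weights, which is where the interpersonal-comparison question lives, and Arrow’s Theorem doesn’t bind because nothing is reduced to rank orders.

```python
# Minimal sketch, assuming made-up agents and cardinal utilities.
# The per-agent weights are the contested part: choosing them IS the
# interpersonal utility comparison problem discussed above.

from typing import Dict

# Hypothetical utilities: agent -> {option: cardinal utility}
utilities: Dict[str, Dict[str, float]] = {
    "alice": {"A": 1.0, "B": 0.2, "C": 0.0},
    "bob":   {"A": 0.0, "B": 0.9, "C": 1.0},
    "carol": {"A": 0.4, "B": 1.0, "C": 0.1},
}

def aggregate(utilities: Dict[str, Dict[str, float]],
              weights: Dict[str, float]) -> Dict[str, float]:
    """Weighted sum of each agent's utility for each option."""
    options = next(iter(utilities.values())).keys()
    return {opt: sum(weights[agent] * u[opt] for agent, u in utilities.items())
            for opt in options}

# Equal weights vs. weights that privilege one agent: the winner can change,
# so the aggregation is only as "objective" as the choice of weights.
equal = aggregate(utilities, {"alice": 1.0, "bob": 1.0, "carol": 1.0})
skewed = aggregate(utilities, {"alice": 3.0, "bob": 1.0, "carol": 1.0})

print(max(equal, key=equal.get))   # winner under equal weights ("B" here)
print(max(skewed, key=skewed.get)) # winner under skewed weights ("A" here)
```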