Aren’t utility functions kind of… invariant to scaling and addition of a constant value?
That is, you can say “I would like A more than B” but not “having A makes me happier than you would be having it”. Nor can you say “I’m neither happy nor unhappy, so my not existing wouldn’t change anything”. It’s just not defined.
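A minimal sketch of that invariance (the utilities here are made-up numbers, and `choose` is a hypothetical helper): any positive affine transform a·u + b of a utility function represents the same preferences, so the chosen option never changes.

```python
import random

def choose(utility, options):
    """Pick the option the utility function ranks highest."""
    return max(options, key=utility)

u = {"A": 3.0, "B": 1.0, "C": 2.0}
options = list(u)

# Any positive affine transform a*u + b (a > 0) preserves the
# ordering, so the choice is identical for every transform:
for _ in range(100):
    a = random.uniform(0.1, 10.0)
    b = random.uniform(-50.0, 50.0)
    v = {k: a * val + b for k, val in u.items()}
    assert choose(u.get, options) == choose(v.get, options)

print(choose(u.get, options))  # → A
```

Since only the ordering survives the transform, the absolute level of u carries no interpersonal meaning, which is exactly why “happier than you” is undefined.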
Actually, the only place different people’s utility functions can be added up is in a single person’s mind, that is, “I value seeing X and Y both feeling well twice as much as just X being in such a state”. So “destroying beings with less than average utility” would appeal to those who tend to average utilities instead of summing them. And, of course, it also depends on what they think of those utility functions.
(that is, do we count the utility function of the person before or after giving them antidepressants?)
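To make the averaging-vs-summing split concrete, here is a toy sketch (the population values are invented for illustration): removing a below-average but still positive-utility member raises the average while lowering the total, so the two aggregation rules disagree about whether it is an improvement.

```python
def total(utilities):
    return sum(utilities)

def average(utilities):
    return sum(utilities) / len(utilities)

# Toy population: the third being is below average but still
# has positive utility.
population = [5.0, 3.0, 1.0]
reduced = [u for u in population if u >= average(population)]  # drops the 1.0

print(average(population), average(reduced))  # 3.0 -> 4.0: average rises
print(total(population), total(reduced))      # 9.0 -> 8.0: total falls
```

An averager would call the removal an improvement; a summer would call it a loss.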
Of course, the additional problem is that no one sums up utility functions the same way, but there seems to be just enough correlation between individual results that we can start debates over the “right way of summing utility functions”.
It’s hard to do utilitarian ethics without commensurate utility functions, and so utilitarian ethical calculations, in the comparatively rare cases where they’re implemented with actual numbers, often use a notion of cardinal utility. (The Wikipedia article’s kind of a mess, unfortunately.) As far as I can tell this has nothing to do with cardinal numbers in mathematics, but it does provide for commensurate utility scales; in this case, you’d probably be mapping preference orderings over possible world-states onto the reals in some way.
There do seem to be some interesting things you could do with pure preference orderings, analogous to decision criteria for ranked-choice voting in politics. As far as I know, though, they haven’t received much attention in the ethics world.
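As one illustration of working with pure preference orderings, here is a sketch of a Borda count, a standard ranked-choice aggregation rule (the ballots are hypothetical): each agent submits only an ordering, no cardinal utilities, and a winner still falls out.

```python
def borda(rankings):
    """Aggregate ordinal rankings: a candidate earns
    (n - 1 - position) points per ballot."""
    n = len(rankings[0])
    scores = {c: 0 for c in rankings[0]}
    for ranking in rankings:
        for pos, c in enumerate(ranking):
            scores[c] += n - 1 - pos
    return max(scores, key=scores.get), scores

# Three agents, three candidates, orderings only:
rankings = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["A", "C", "B"],
]
winner, scores = borda(rankings)
print(winner, scores)  # → A {'A': 4, 'B': 3, 'C': 2}
```

The usual voting-theory caveats (Arrow’s theorem and friends) carry over, which may be part of why the approach hasn’t caught on in ethics.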