I have struggled for years with what I want to say about maximizing net aggregated benefits vs minimizing net inequality in cases where tradeoffs are necessary.
My provisional solution for this: I want to maximize net aggregated benefits. I don’t want to minimize net inequality per se, but a useful heuristic is that if X is worse off than Y, then you can probably get more net aggregated benefits per unit resources by helping X (or refraining from harming X) than by helping Y (or refraining from harming Y).
Yeah, I’ve considered this. It doesn’t work for me, because I do seem to want to minimize inequality (in addition to maximizing benefit), and simply ignoring one of my wants is unsatisfying.
That said, I’m not exactly sure why I want to minimize inequality. I’m pretty sure I don’t just value equality for its own sake, for example, though some people claim they do.
One answer that often seems plausible to me is that I’m aware inequalities create an environment that facilitates various kinds of abuse, and what I actually want is to minimize those abuses; a system of inequality among agents who can be relied upon not to abuse one another would be all right with me.
Another answer that often seems plausible to me is that I want everyone to like me, and I’m convinced that inequalities foster resentment.
Other answers pop up from time to time. (And of course there’s always the potential confusion between wanting X and wanting to signal membership in a class characterized by wanting X.)