Most-Moral-Minority Morality

In this post, I discuss a theoretical strategy for finding a morally optimal world: one in which, regardless of my intrinsic moral preferences, I fold to the preferences of the most moral minority.


I don’t personally find X to be intrinsically immoral. I know that if some people knew this about me, they might feel shocked, sad, and disgusted. I can understand how they would feel, because I feel the same way about Y: I find it immoral, and not everyone does, even though they should.

These are unpleasant feelings, and, combined with the fear that immoral events will happen more frequently due to apathy, they make me willing to fold X into my category of things that shouldn’t happen. Not because of X itself, but because I know it makes people feel bad.

This is more than the game-theoretic strategy of “I’ll be anti-X if they’ll be anti-Y.” It reflects the view that the most moral world is one in which people’s moral preferences are maximally satisfied, so that no one has to feel that their morality is marginalized and suffer the resulting disgust and sadness.

Ideal Application: Nested Morality Model

The sentiment and strategy just described are ideal in the case of a nested model of moralities, in which preferences can be roughly universally ranked from most immoral to least immoral: X1, X2, X3, X4, … . Everyone has a threshold beyond which they no longer care. For example, all humans consider the first few elements to be immoral, but only the most morally sensitive humans care about the elements after the first few thousand. In a world where this model was accurate, it would be ideal to fold to the morality of the most morally sensitive. Not only would you be satisfying everyone’s morality, you could be confident that you were also satisfying the morality of your most moral future selves, especially by extending the fold a little further out.

Figure: Hierarchy of Moral Preferences in the Nested Morality Model

Note that in this model it doesn’t actually matter whether individual humans would rank the preferences differently. Since every preference is satisfied, the ordering doesn’t matter. Folding to the most moral minority should resolve every moral conflict that stems from varying sensitivity to an issue, regardless of differences in relative rankings. For example, by this strategy I should become a vegetarian (although I’m not).
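To make the nested model concrete, here is a minimal sketch in Python. The sensitivity thresholds are invented numbers purely for illustration: each person considers immoral every preference up to their personal threshold, so folding to the highest threshold in the population automatically satisfies everyone else, whatever their internal ranking.

```python
# Minimal sketch of the nested morality model.
# Preferences are indexed 1, 2, 3, ... from most to least universally immoral.
# Each person has a sensitivity threshold: they consider X_j immoral iff j <= threshold.

# Hypothetical thresholds for a small population (invented numbers, illustration only).
thresholds = {
    "typical human": 5,
    "more sensitive human": 50,
    "most moral minority": 5000,
}

def concerns(threshold):
    """The set of preference indices this person considers immoral."""
    return set(range(1, threshold + 1))

# Folding to the most moral minority: treat everything up to the highest
# threshold in the population as something that shouldn't happen.
folded = concerns(max(thresholds.values()))

# In the nested model, every individual's concerns are a subset of the folded set,
# so everyone's moral preferences are satisfied at once.
assert all(concerns(t) <= folded for t in thresholds.values())
print(f"Prohibiting {len(folded)} items satisfies all {len(thresholds)} people.")
```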

Real-Life Application: Very Limited

However, in reality, moral preferences aren’t neatly nested by sensitivity; they conflict. Someone may have a moral preference for Y, while someone else has a clear preference for ~Y. Such conflicts are not uncommon and may represent the majority of moral conflicts in the world.

Secondly, even if a person is indifferent about the moral value of Y and ~Y, they may value the freedom or the diversity of having both Y and ~Y in the world.

When it comes to the latter conflicts, I think the world would be a happier place if freedom and diversity suffered a little for very strong (albeit minority) moral preferences. However, freedom and diversity should not suffer much for preferences that are very weak or held by very few people. In such a trade-off, an optimum cannot be found, since I don’t expect to be able to place relative weights on ‘freedom’, ‘diversity’, and an individual’s moral preference in the general case.
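As a toy illustration of why the weights matter (the worlds, scores, and weights below are my own invented example, not anything from the model itself), here is a sketch that scores two worlds under different weightings of freedom, diversity, and the minority’s moral preference. The preferred world flips depending on weights that nothing in the model pins down.

```python
# Toy trade-off between freedom, diversity, and a strong minority moral preference.
# The worlds, scores, and weights below are invented for illustration.

def score(freedom, diversity, preference, weights):
    """Weighted sum of how well a world does on each value."""
    w_f, w_d, w_p = weights
    return w_f * freedom + w_d * diversity + w_p * preference

# World A: allow both Y and ~Y (freedom and diversity intact, minority preference unmet).
# World B: fold to the anti-Y minority (freedom and diversity reduced, preference satisfied).
world_a = (1.0, 1.0, 0.0)
world_b = (0.6, 0.5, 1.0)

# The "optimal" world flips depending on weights that nothing in the model determines.
for weights in [(1, 1, 1), (1, 1, 3), (2, 2, 1)]:
    better = "fold" if score(*world_b, weights) > score(*world_a, weights) else "keep both"
    print(weights, "->", better)
```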

For now, I think I will simply resolve to (consider) folding to the moral preference Z of a fellow human in the simplest case: where I am apathetic about Z and also indifferent to the freedom and diversity of having both Z and ~Z in the world.