Most-Moral-Minority Morality

In this post, I discuss a theoretical strategy for finding a morally optimal world in which, regardless of my intrinsic moral preferences, I fold to the preferences of the most moral minority.

I don't personally find X to be intrinsically immoral. I know that if some people knew this about me, they might feel shocked, sad, and disgusted. I can understand how they would feel, because I feel that Y is immoral and not everyone does, even though they should.

These are unpleasant feelings, and, combined with the fear that immoral events will happen more frequently due to apathy, they make me willing to fold X into my category of things that shouldn't happen: not because of X itself, but because I know it makes people feel bad.

This is more than the game-theoretic strategy of "I'll be anti-X if they'll be anti-Y." It is a reflection that the most moral world is one in which people's moral preferences are maximally satisfied, so that no one needs to feel that their morality is marginalized and suffer the feelings of disgust and sadness.

Ideal Application: Nested Morality Model

The sentiment and strategy just described are ideal in the case of a nested model of moralities, in which preferences can be roughly universally ranked from most immoral to least immoral: X1, X2, X3, X4, … . Everyone has an immorality threshold beyond which they no longer care. For example, all humans consider the first few elements to be immoral, but only the most morally sensitive humans care about the elements after the first few thousand. In a world where this model was accurate, it would be ideal to fold to the morality of the most morally sensitive. Not only would you be satisfying the morality of everyone, you could be certain that you were also satisfying the morality of your most moral future selves, especially by extending the fold a little further out.

Figure: Hierarchy of Moral Preferences in the Nested Morality Model

Note that in this model it doesn't actually matter if individual humans would rank the preferences differently. Since they're all satisfied, the ordering of preferences doesn't matter. Folding to the most moral minority should solve all moral conflicts that result from varying sensitivity to a moral issue, regardless of differences in relative rankings. For example, by such a strategy I should become a vegetarian (although I'm not).
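The nested model above can be sketched as a toy simulation. Assume, purely for illustration, that each person is reduced to a single sensitivity threshold (a hypothetical simplification): they consider issues X1 through X(threshold) immoral and are indifferent to the rest. Folding to the most moral minority then amounts to prohibiting everything up to the maximum threshold in the population, which satisfies everyone at once:

```python
# Toy sketch of the nested morality model (hypothetical names and numbers).
# Moral issues are universally ranked X1, X2, ... from most to least immoral;
# each person cares about every issue up to their sensitivity threshold.

def folded_policy(thresholds):
    """Fold to the most morally sensitive: prohibit issues 1..k,
    where k is the largest threshold among all people."""
    return max(thresholds)

def is_satisfied(person_threshold, policy_cutoff):
    """A person is satisfied if everything they consider immoral
    (issues 1..person_threshold) falls within the prohibited set."""
    return person_threshold <= policy_cutoff

# A population with varying moral sensitivity (made-up numbers):
# most people care about a handful of issues, one person about thousands.
population = [3, 3, 5, 10, 2500]

cutoff = folded_policy(population)
assert all(is_satisfied(t, cutoff) for t in population)
```

Because satisfaction depends only on whether a person's threshold is covered, the internal ordering of the prohibited issues never enters the check — which is the point made above about individual rankings not mattering.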

Real Life Application: Very Limited

However, in reality, moral preferences aren't nested in sensitivity, but conflicting. Someone may have a moral preference for Y, while someone else may have a clear preference for ~Y. Such conflicts are not uncommon and may represent the majority of moral conflicts in the world.

Secondly, even if a person is indifferent about the moral value of Y and ~Y, they may value the freedom or the diversity of having both Y and ~Y in the world.

When it comes to the latter conflicts, I think the world would be a happier place if freedom and diversity suffered a little for very strong (albeit minority) moral preferences. However, freedom and diversity should not suffer too much for preferences that are very weak or held by very few people. In such a trade-off, an optimum cannot be found, since I don't expect to be able to place relative weights on 'freedom', 'diversity', and an individual's moral preference in the general case.

For now, I think I will simply resolve to (consider) folding to the moral preference Z of a fellow human in the simplest case, where I am apathetic about Z and also indifferent to the freedom and diversity of Z and ~Z.