I think answering “how should you behave when you’re sharing resources with people with different values?” is one of the projects of contractarian ethics, which is why I’m a fan.
A known problem in contractarian ethics is that people with more altruistic preferences can get screwed over by egalitarian procedures that give everyone’s preferences equal weight (like simple majority votes). For example, imagine the options in the poll were “A: give one ice cream to everyone” and “B: give two ice creams each, but only to people whose names begin with consonants”. If Selfish Sally is in the minority, she’ll probably defect because she wants ice cream. When Altruistic Ally is in the minority, she reasons that more total utility is created by option B (consonant names are in the majority, and they get twice as much ice cream), so she won’t defect, and she’ll miss out on ice cream. Maybe she’s even totally fine with this outcome, because she has tuistic preferences (she prefers other people to be happy, not as a way of negotiating with them, but simply as an end in itself), and those preferences are satisfied when Sally gets ice cream. But maybe this implies that, iterated over many such games, nice, altruistic, kind people will systematically be given less ice cream than selfish, mean people! That might not be a characteristic we want our moral system to have; we might even want to reward people for being nice.
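To make the “systematically less ice cream” claim concrete, here’s a minimal simulation sketch. The specifics are my own illustrative assumptions, not part of the original scenario: each round a player lands in the consonant-name majority with probability 0.6, and a minority player who defects is assumed to secure the one ice cream option A would have paid her.

```python
import random

# Toy model of the iterated ice-cream game. Illustrative assumptions:
#   - each round, a player lands in the consonant-name majority
#     with probability 0.6;
#   - the majority wins the vote, so option B pays majority members
#     2 ice creams and minority members 0;
#   - a minority player who defects secures the 1 ice cream she
#     would have gotten under option A.
P_MAJORITY = 0.6
ROUNDS = 100_000

def play(defects_in_minority: bool) -> float:
    """Average ice creams per round for one player type."""
    total = 0
    for _ in range(ROUNDS):
        if random.random() < P_MAJORITY:
            total += 2  # in the majority: option B pays double
        elif defects_in_minority:
            total += 1  # Sally defects and grabs her option-A share
        # otherwise Ally accepts the utility-maximising outcome: 0
    return total / ROUNDS

print(f"Selfish Sally:   {play(defects_in_minority=True):.2f} per round")
print(f"Altruistic Ally: {play(defects_in_minority=False):.2f} per round")
# Expected values: Sally = 0.6*2 + 0.4*1 = 1.6, Ally = 0.6*2 = 1.2
```

Under these assumptions Sally averages about 1.6 ice creams per round to Ally’s 1.2: Ally’s niceness costs her exactly the rounds where she’s in the minority.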
So we could tell Ally to disregard her tuistic preference (her preference for Sally to receive ice cream as an end in itself) and vote like a Homo economicus, since that’s what Sally will do and we want a fair outcome for Ally. But then, iterated over many games, Ally might not be happy with the actual outcomes, because we’re asking her to disregard genuine altruistic preferences that she actually has, and she might be unhappy when someone else gets screwed over as a result.
In this game you have an additional layer of complexity, since some people might have made their initial vote by asking, “What value do I think has the most universal benefit for everyone?” and others might have made it by asking, “What’s my personal favourite value?” Those people are then facing very different moral decisions when asked, “Do you want to force your value on everyone else?”
If people who made their initial decision by considering the best value for everyone are also less likely to force their value on everyone else, while people who made their initial decision selfishly are more likely to force it on others, then we’d have an interesting problem. Luckily, it looks similar to this existing known problem; unluckily, I don’t think the contractarians have a great solution for us yet.