Issues where people disagree are likely to be mixed issues, where making changes will do harm as well as benefit. That is exactly why people disagree.
Harm and benefit are two-place words; harm is always to someone, and according to someone’s values or goals.
If two people have different values—which can be as simple as each wanting the same resource for themselves, or as complex as different religious beliefs—then harm to the one can be benefit to the other. It might not be a zero-sum game because their utility functions aren’t exact inverses, but it’s still a tradeoff between the two, and each prefers their own values over the other’s.
On this view, divisive issues are, almost by definition, those where each change one side wants benefits them and harms the other. Any changes that benefit everyone are quickly implemented until none are left.
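A minimal sketch of this dynamic, with made-up agents and payoffs: the two utility functions below are not exact inverses (so the situation is not strictly zero-sum), yet once every change that benefits both parties has been taken, only tradeoffs remain.

```python
# Hypothetical example: two agents assign utilities to four policies.
# Payoffs are (utility to agent 1, utility to agent 2).
policies = {
    "A": (3, 1),
    "B": (1, 3),
    "C": (2, 2),
    "D": (1, 1),  # dominated: moving away from D benefits everyone
}

def pareto_improvements(current, options):
    """Return policies at least as good for both agents and strictly better for one."""
    u1, u2 = options[current]
    return [p for p, (v1, v2) in options.items()
            if v1 >= u1 and v2 >= u2 and (v1, v2) != (u1, u2)]

# From D, changes that benefit everyone exist; these get "quickly implemented".
print(pareto_improvements("D", policies))  # ['A', 'B', 'C']

# From A (or B, or C), no such change remains: every move is a tradeoff,
# and whether it counts as harm or benefit depends on whose values you share.
print(pareto_improvements("A", policies))  # []
```

The remaining options A, B, and C form the contested frontier: each agent prefers a different point on it, which is exactly where persistent disagreement lives.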
If you share the values of one of these people, then working on the problem will result in benefit (by your values), and you won’t care about the harm (by some other person’s values).
If, on most or all such divisive issues, you don't side with any established camp, that is a very surprising fact that makes you an outlier. Can you build an EA movement out of altruists who don't care about most divisive issues?