I guess you could read this as a satire of global utilitarianism or some types of effective altruism. In truth we tend to act to optimize our own utility, which includes the utility of others only at varying weights, usually less than 1 (and often much less). When we’re talking about policy that goes beyond our personal actions, it comes down to a political/game-theoretic negotiation where hypothetical utility monsters are largely irrelevant.
In truth we tend to act to optimize our own utility, which includes the utility of others only at varying weights, usually less than 1 (and often much less).
I fully agree. But the point of the utility monster concept is that its marginal utility from any given resource allocation carries a far higher multiplier than our own marginal utility from the same resources, which additionally diminishes as we get individually richer. Less than 1 != zero.
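To make that concrete, here is a minimal numerical sketch (my own toy numbers, not anything from the thread): both agents have diminishing (log) marginal utility, but the monster’s utility carries a large constant multiplier, so a total-utility maximizer hands it the entire budget.

```python
import numpy as np

# Toy model: split a fixed budget between one ordinary agent and a
# "utility monster". Both have diminishing (log) marginal utility, but
# the monster's utility is scaled by a large multiplier (assumed 1000).
BUDGET = 100.0
MULTIPLIER = 1000.0

def total_utility(x_monster):
    x_human = BUDGET - x_monster
    return MULTIPLIER * np.log1p(x_monster) + np.log1p(x_human)

# Grid search over possible splits for the total-utility maximum.
xs = np.linspace(0.0, BUDGET, 100_001)
best = xs[np.argmax(total_utility(xs))]
print(f"allocation to the monster: {best:.2f} out of {BUDGET}")
# -> 100.00: despite diminishing returns on both sides, the multiplier
#    pushes the total-utilitarian optimum all the way to the boundary.
```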
When we’re talking about policy that goes beyond our personal actions, it comes down to a political/game-theoretic negotiation where hypothetical utility monsters are largely irrelevant.
I agree that this is mostly true, but not completely. Consider, for instance, that there are some restrictions on animal abuse, even though nonhuman animals are also largely irrelevant players in the political/game-theoretic negotiation. The reason is that some players who are relevant do have preferences concerning the wellbeing of nonhuman animals, even if those preferences are typically low-cost and low-priority.
In principle, the same could be true of utility monsters once they are no longer hypothetical, and exist in forms which are emotionally attractive to humans.
I fully agree. But the point of the utility monster concept is that its marginal utility from any given resource allocation carries a far higher multiplier than our own marginal utility from the same resources, which additionally diminishes as we get individually richer. Less than 1 != zero.
Well, even if the utility monster’s utility grows like e^e^e^x, the weight we assign it could grow like log(log(log(x))), or just be 0. At any rate, I don’t find myself inclined to make one (except out of curiosity), and mathematical utility functions seem like they should be descriptive rather than prescriptive.
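One way to read the growth-rate point (my own illustration of one possible interpretation): if the weight is a function applied to the monster’s utility, a triple logarithm exactly cancels a triple exponential, so the weighted contribution grows only linearly in x.

```python
import math

# If the monster's utility is U(x) = e^(e^(e^x)) but the weight we apply
# is w(U) = log(log(log(U))), then w(U(x)) = x: the tower of exponentials
# is exactly neutralized. (Small x keeps the tower inside float range.)

def monster_utility(x):
    return math.exp(math.exp(math.exp(x)))

def weighted(u):
    return math.log(math.log(math.log(u)))

for x in (0.5, 1.0, 1.5):
    print(x, weighted(monster_utility(x)))  # prints x back, up to rounding
```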
In principle, the same could be true of utility monsters once they are no longer hypothetical, and exist in forms which are emotionally attractive to humans.
We might make cute virtual pets or even virtual friends, but I’m still not going to give them a bunch of money (etc.) just because they would enjoy it much more than I would.
edit: In fact I’m leaning towards the idea that, in general, a utility function that values others’ utility directly is not safe, and is probably not a good model of anything that evolved. (It also seems to have loop problems when others do the same.)
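The loop problem can be made concrete with a two-agent example (again my own sketch, not from the thread): if each agent’s utility includes the other’s total utility at weight w, a finite fixed point exists only for |w| < 1, and it blows up as w approaches 1.

```python
# Two agents whose utility functions value each other's *total* utility:
#   U_A = u_A + w * U_B
#   U_B = u_B + w * U_A
# Solving the system gives U_A = (u_A + w * u_B) / (1 - w**2), so a finite
# fixed point exists only for |w| < 1, and it diverges as w -> 1.

def mutual_utilities(u_a, u_b, w):
    if abs(w) >= 1:
        raise ValueError("no finite fixed point when |w| >= 1")
    denom = 1.0 - w ** 2
    return (u_a + w * u_b) / denom, (u_b + w * u_a) / denom

print(mutual_utilities(1.0, 1.0, 0.5))   # (2.0, 2.0): mild amplification
print(mutual_utilities(1.0, 1.0, 0.99))  # ~(100.0, 100.0): near-divergence
```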