Well, whether you really want (in the extrapolated volition sense) to sacrifice 10^{whatever} lives to save your family is a whole big calculation involving interpersonal morality, bounded rationality/virtue ethics, TDT/game theory, etc. The point that I was echoing is that if you really would want to make that trade, there’s nothing monstery about your family—you just {love them that much}/{love others that little}. The utility monster is an objection to the social morality theory called “utilitarianism”; the utility monster becomes gibberish when phrased as an objection to “any set of preferences can in principle be completely specified by a utility function, to be handed to a generic decision process, resulting in optimal decision making”. Like, “Oh no, oh no, I found this monster, and it is soooo soooo good to feed it humans! It is even more better every time I feed it another human! Woe is me! Goooood!!”.
Now, the utility monster makes perfect sense as an objection to humans actually making decisions purely using explicit quantitative expected utility calculations. But that doesn’t say anything about utility as a formalized version of “good”. Rather, that’s some sort of comment about the capricious quality of bounded reasoning under uncertainty—you always worry about strong conclusions that make you do particularly effective things, because a mistake in your calculations means you are doing particularly effective bad things. One particular sort of dangerously strong conclusion would be concluding that, e.g., the marginal utility of {UMonster eating an additional human} is larger than, and grows faster than, the marginal utility of {another human gets eaten alive}.
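As a toy sketch of that last point (every number and function here is invented for illustration—this is not anyone's actual values, just the shape of the failure mode): if a mistaken calculation assigns the monster a marginal utility that exceeds and outgrows the marginal disutility of the human being eaten, a naive explicit expected-utility procedure endorses feeding at every step, and the endorsement only gets stronger.

```python
# Toy illustration (all numbers made up): the "dangerously strong conclusion"
# is a utility assignment where the monster's marginal gain from its next meal
# both exceeds and grows faster than the marginal loss of the eaten human.

def monster_marginal_utility(n):
    """Made-up: the monster's utility from eating its (n+1)-th human grows with n."""
    return 10.0 * (n + 1)

def human_marginal_disutility(n):
    """Made-up: each human eaten costs a flat amount of utility."""
    return 5.0

def naive_calculation_says_feed(n):
    """An explicit quantitative expected-utility step: feed whenever the
    monster's marginal gain exceeds the human's marginal loss."""
    return monster_marginal_utility(n) > human_marginal_disutility(n)

# Under these (deliberately monstrous) assumed numbers, the naive calculation
# endorses feeding at every one of the first 100 steps.
print(all(naive_calculation_says_feed(n) for n in range(100)))
```

The point of the sketch is not that utility functions produce this—it's that a *miscalculated* strong conclusion of this shape is exactly the kind of thing bounded reasoners should be suspicious of, because acting on it is particularly effective badness.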