My model of utility (and the standard one, as far as I can tell) doesn’t work that way. A rational agent never gives up a utilon; utilons are, by definition, the thing it maximizes. I think of it as: how many utilons do you get from contemplating John Doe’s increased satisfaction (not his utilons, since you have no access to those, though you could call them “inferred utilons”), compared to the direct utilons you would otherwise get?
Those moral weights are “just” terms in your utility function.
And, since humans aren’t actually rational and don’t have consistent utility functions, the moral weights implied by their actions are highly variable and contextual.
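The idea that moral weights are just terms in one agent’s utility function can be sketched as follows. This is a minimal illustration, not a claim about anyone’s actual values: the function name, the weight, and all the numbers are made-up assumptions.

```python
# Hypothetical sketch: a "moral weight" as a coefficient in an agent's
# own utility function. All names and numbers here are illustrative.

def my_utility(direct_utilons, inferred_other_utilons, moral_weight):
    """Total utility = what I get directly, plus a weighted term for the
    satisfaction I *infer* in someone else (I have no access to their
    actual utilons, only my inference of them)."""
    return direct_utilons + moral_weight * inferred_other_utilons

# Choosing whether to donate: keeping the money gives me 10 direct
# utilons; donating gives me 0 direct utilons but I infer John Doe
# gains 50, which I weight at 0.3.
keep = my_utility(direct_utilons=10, inferred_other_utilons=0, moral_weight=0.3)
donate = my_utility(direct_utilons=0, inferred_other_utilons=50, moral_weight=0.3)

# With this weight, donating is the utility-maximizing choice: no
# utilon is "given up"; the altruistic term is part of what I maximize.
```

On this framing, an apparently self-sacrificing act is just the agent maximizing a utility function that already contains a term for (inferred) others.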
Ah yeah, that makes sense. I guess utility isn’t really the right term to use here.