They may be a bad descriptive match. But in prescriptive terms, how do you “help” someone without a utility function?
To help someone, you don’t need him to have a utility function, just preferences. Those preferences do have to have some internal consistency, but the consistency criteria needed to help someone seem strictly weaker than the ones needed to establish a utility function. Among the von Neumann-Morgenstern axioms, maybe only completeness and transitivity are needed.
For example, suppose I know someone who currently faces choices A and B, and I know that if I also offer him choice C, his preferences will remain complete and transitive. Then I’d be helping him, or at least not hurting him, if I offered him choice C, without knowing anything else about his beliefs or values.
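The A/B/C example can be sketched in code. This is a minimal, hypothetical model (the `prefers` relation, the scoring agent, and all option names are illustrative) in which the agent's weak preferences stay complete and transitive when C is added, and "not hurting him" means his top choice from the expanded menu is weakly preferred to his old top choice.

```python
def is_complete(options, prefers):
    """Completeness: for every pair, the agent weakly prefers one to the other."""
    return all(prefers(a, b) or prefers(b, a)
               for a in options for b in options)

def is_transitive(options, prefers):
    """Transitivity: a >= b and b >= c imply a >= c."""
    return all(prefers(a, c)
               for a in options for b in options for c in options
               if prefers(a, b) and prefers(b, c))

def top_choice(options, prefers):
    """With complete, transitive preferences over a finite set,
    some option is weakly preferred to every other."""
    for x in options:
        if all(prefers(x, y) for y in options):
            return x
    return None

# Toy agent: ranks options by a hidden score (C > A > B).
# We know nothing else about his beliefs or values.
score = {"A": 1, "B": 0, "C": 2}
prefers = lambda x, y: score[x] >= score[y]

assert is_complete({"A", "B", "C"}, prefers)
assert is_transitive({"A", "B", "C"}, prefers)

old = top_choice({"A", "B"}, prefers)        # his choice before C is offered
new = top_choice({"A", "B", "C"}, prefers)   # his choice after C is offered
assert prefers(new, old)  # offering C can't make him worse off
```

The final assertion holds for any complete, transitive `prefers`: the old top choice is still available after C is offered, so the new top choice is weakly preferred to it.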
Or did you have some other notion of “help” in mind?
Furthermore, utility functions actually aren’t too bad as a descriptive match when you are primarily concerned about aggregate outcomes. They may be almost useless when you try to write one that describes your own choices and preferences perfectly, but they are a good enough approximation that they are useful for understanding how the choices of individuals aggregate: see the discipline of economics. This is a good place for the George Box quote: “All models are wrong, but some are useful.”
Isn’t “helping” a situation where the prescription is derived from the description? Are you suggesting we lie about others’ desires so we can more easily claim to help satisfy them?
Helping others can be very tricky. I like to wait until someone has picked a specific, short-term goal. Then I decide whether to help them with that goal, and how much.
I think Eliezer is simply saying: “I can’t do everything, therefore I must decide where I think the marginal benefits are greatest. This is equivalent to attempting to maximize some utility function.”
Not necessarily. There are lots of plausible moral theories under which individuals’ desires don’t determine their well-being.
Derivation of prescription from description isn’t trivial.
That’s the difference between finding the best plan and settling for a suboptimal plan because you ran out of thought.
I agree with both those statements, but I’m not completely sure how you’re relating them to what I wrote.
Do you mean that the difficulty of going from a full description to a prescription justifies using this particular simpler description instead?
It might. I doubt it because utility functions seem so different in spirit from the reality, but it might. Just remember it’s not the only choice.
A simple utility function can be descriptive in simple economic models, but taken as descriptive, such a function doesn’t form a valid foundation for an (accurate) prescriptive model.
On the other hand, when you start from an accurate description of human behavior, it’s not easy to extract from it a prescriptive model that could be used as a criterion for improvement, but a utility function (plus a prior) seems to be a reasonable format for such a prescriptive model, if you manage to construct it somehow.
In that case, we disagree about whether the format seems reasonable (for this purpose).