If you have a (VNM expected) utility function and those subdivisions are also (VNM expected) utility functions, the only reasonable way to aggregate them is linear weighting.
Otherwise, the big utility function won’t agree with the small utility functions about which lotteries are best.
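This is essentially Harsanyi's aggregation theorem. A toy illustration of the failure mode (hypothetical lotteries, with `min` standing in for an arbitrary non-linear aggregator):

```python
# Outcomes are (u1, u2) pairs of sub-utility values; a lottery is a list
# of (outcome, probability) pairs.
# Lottery A: (1, 1) for sure.  Lottery B: 50% (2, 0), 50% (0, 2).
lottery_a = [((1, 1), 1.0)]
lottery_b = [((2, 0), 0.5), ((0, 2), 0.5)]

def expected(lottery, f):
    """Expected value of utility function f under the lottery."""
    return sum(prob * f(outcome) for outcome, prob in lottery)

u1 = lambda o: o[0]
u2 = lambda o: o[1]
linear = lambda o: u1(o) + u2(o)          # linear aggregation
nonlinear = lambda o: min(u1(o), u2(o))   # one example of non-linear aggregation

# Each sub-utility function, and their linear sum, is indifferent between A and B...
assert expected(lottery_a, u1) == expected(lottery_b, u1) == 1.0
assert expected(lottery_a, u2) == expected(lottery_b, u2) == 1.0
assert expected(lottery_a, linear) == expected(lottery_b, linear) == 2.0
# ...but the min-aggregator strictly prefers A, disagreeing with both
# sub-functions about which lottery is best.
assert expected(lottery_a, nonlinear) > expected(lottery_b, nonlinear)
```

Any strictly non-linear aggregator admits some pair of lotteries like this, which is why linear weighting is singled out.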
I acknowledge that this is a problem, but my claim is that this is less of a problem than allowing one broken small utility function to take over the whole utility function by rescaling itself.
Why do you think that the big utility function has to have problems?
I suppose because we’re constructing it out of clearly-defined-but-wrong approximations to the small utility functions.
In which case, we should deviate from linear addition in a way that compensates for the flaws in those approximations.
Suppose that we expect small functions to sometimes break. Then E(actual utility | calculated utility = x) looks similar to x when |x| is small, but is much closer to 0 when |x| is large. If we can estimate this S-curve, our method becomes more robust against this particular problem.
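One way this S-curve can arise is from a simple mixture model: with some small probability the module is broken and reports noise unrelated to actual utility. The names, priors, and parameters below are all illustrative assumptions, not part of the original argument; this is a sketch of the shrinkage idea, not a proposed implementation:

```python
import math

def corrected_utility(x, p_broken=0.05, sigma_ok=1.0, sigma_broken=100.0):
    """Posterior mean E[actual | calculated = x] under a toy mixture model.

    Assumed model: a working module reports calculated = actual, with
    actual ~ N(0, sigma_ok^2); a broken module reports wide noise
    ~ N(0, sigma_broken^2) carrying no information about actual utility
    (whose prior mean is 0). Then E[actual | x] = P(working | x) * x.
    """
    def normal_pdf(v, sigma):
        return math.exp(-v * v / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

    p_ok = (1 - p_broken) * normal_pdf(x, sigma_ok)
    p_bad = p_broken * normal_pdf(x, sigma_broken)
    posterior_working = p_ok / (p_ok + p_bad)
    return posterior_working * x

# Small readings pass through almost unchanged; extreme readings are
# shrunk toward 0, because they are far more likely to come from a
# broken module than from a genuinely extreme outcome.
print(corrected_utility(0.5))   # ≈ 0.5
print(corrected_utility(10.0))  # ≈ 0
```

The resulting curve is the S-shape described above: near-identity around zero, saturating toward zero in the tails, with the crossover controlled by how likely breakage is relative to genuinely extreme utilities.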
Another inference we can make is that, when |x| is large, it becomes more useful to investigate whether the calculated utility closely approximates actual utility, so building systems that can perform this check becomes more worthwhile.
In general, we should analyze further possible problems, such as flaws in this approximation, in the same manner: by looking at what deviations occur between estimated utility and actual utility.