Are we just trying to model our preferences for the purposes of making predictions, or are we also trying to figure out how to make recommendations to ourselves, as in CEV?
The latter.
If the latter, then we can’t make any progress if we do not use some information other than revealed preferences.
Yes. I may have been unclear. I don’t mean to refer to revealed preference; I mean that refinements on the possible utility function are to be judged by the preferences they entail, not by anything else. For example, utilitarianism should be judged by its (repugnant) conclusions, not by the elegance of linear aggregation or whatever.
I think that other information takes a variety of forms: things like revealed preference, what philosophers think, neuroscience, etc. The trick is to define a prior that relates these things to our desired preferences, and then to work out what our preferences in a state of partial knowledge are.
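To make that concrete, here is a minimal sketch of one way the "prior relating evidence to preferences" idea could go: a distribution over candidate utility functions, updated on evidence of the kinds above, with preferences under partial knowledge taken as the posterior-weighted expectation. The candidate functions, the evidence, and the likelihood numbers are all made-up placeholders, and the sketch ignores the hard problem of putting different candidates on a common scale.

```python
# Minimal sketch (all candidates, evidence, and numbers are made-up placeholders).
# A "prior that relates these things to desired preferences": a distribution over
# candidate utility functions, updated on evidence such as revealed preference,
# philosophers' views, neuroscience, etc.

candidate_utilities = {
    "total_utilitarian": lambda outcome: outcome["total_welfare"],
    "average_utilitarian": lambda outcome: outcome["total_welfare"] / outcome["population"],
}

# Prior over which candidate best captures what we actually want.
prior = {"total_utilitarian": 0.5, "average_utilitarian": 0.5}

# Evidence enters through likelihoods: how probable is the observed evidence
# (e.g. our recoil from the repugnant conclusion) under each candidate?
likelihood = {"total_utilitarian": 0.2, "average_utilitarian": 0.6}

# Bayesian update.
unnormalized = {k: prior[k] * likelihood[k] for k in candidate_utilities}
z = sum(unnormalized.values())
posterior = {k: v / z for k, v in unnormalized.items()}

def preference_under_partial_knowledge(outcome):
    """Posterior-weighted expected utility. Note this quietly assumes the
    candidate utilities are already on a common scale."""
    return sum(posterior[k] * u(outcome) for k, u in candidate_utilities.items())

# Compare two hypothetical outcomes under this partial-knowledge preference.
small_happy_world = {"total_welfare": 100.0, "population": 10}
huge_barely_worth_living_world = {"total_welfare": 120.0, "population": 1000}
print(preference_under_partial_knowledge(small_happy_world))
print(preference_under_partial_knowledge(huge_barely_worth_living_world))
```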
The OP work has a few other problems as well that have me now leaning towards building this thing up from that (indirect normativity) base instead of going in with this “set of utility functions with probabilities” business.
Anyway, I am merely suggesting that when we only have a vague idea of what we want (which is the usual human situation, and which is the motivation for the problem in the first place), it is not as simple as declaring that each km should be exactly what we want it to be.
Ok, because we don’t know what we want it to be.
Ok. It sounds like we mostly agree at this point.