Are we just trying to model our preferences for the purposes of making predictions, or are we also trying to figure out how to make recommendations to ourselves that we think would improve our actions with respect to what we think we would want if we understood ourselves better? If the former, then we shouldn’t assume VNM anyway, since humans do not obey the VNM axioms. If the latter, then we can’t make any progress if we do not use some information other than revealed preferences.
And just to be clear, I am not suggesting that we should use only the structure of the utility functions themselves to come up with the normalization, without taking how we think about them into account. I think your example of normalizing the meat/vegetarian utility functions to a variance of 1 is a good example of what tends to go wrong when you insist on something so restrictive, and I seem to recall someone posting about some theorem on LW a while ago saying that no such normalization scheme has all of some set of desirable properties. Anyway, I am merely suggesting that when we only have a vague idea of what we want (which humans tend to do, and which is the motivation for the problem in the first place), it is not as simple as declaring that each km should be exactly what we want it to be.
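To make the "variance 1" normalization concrete, here is a minimal Python sketch (an editorial illustration, not anything from the thread): two made-up candidate utility functions, one meat-eating and one vegetarian, are each rescaled to mean 0 and variance 1 over a small outcome set and then mixed with credences. The outcomes, utility numbers, and credences are all assumptions chosen purely for illustration; the only point is that the convention fixes the relative scale of the two functions by fiat, independently of how we might want to trade them off.

```python
# A minimal sketch (editorial illustration, not from the thread) of what the
# "normalize each candidate utility function to variance 1" convention does.
# The outcomes, utility numbers, and credences below are all made up.

import statistics

outcomes = ["steak dinner", "tofu dinner", "salad"]

# Hypothetical candidate utility functions.
u_meat = {"steak dinner": 10.0, "tofu dinner": 1.0, "salad": 0.0}
u_veg = {"steak dinner": -5.0, "tofu dinner": 4.0, "salad": 3.0}

def normalize_unit_variance(u):
    """Affinely rescale u to mean 0 and variance 1 over the outcome set."""
    values = list(u.values())
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)  # population standard deviation
    return {o: (v - mean) / sd for o, v in u.items()}

u_meat_n = normalize_unit_variance(u_meat)
u_veg_n = normalize_unit_variance(u_veg)

# Credences over which candidate utility function is "really" ours.
p_meat, p_veg = 0.5, 0.5

# Under the variance-1 convention the per-function scale factors k_m are
# fixed by the rescaling itself, not by any judgment about how the two
# functions ought to trade off against each other.
mixed = {o: p_meat * u_meat_n[o] + p_veg * u_veg_n[o] for o in outcomes}

for o in outcomes:
    print(f"{o:12s} mixed utility = {mixed[o]:+.3f}")
```

Note that if one candidate is nearly indifferent among the outcomes, rescaling it to variance 1 inflates its tiny differences to the same scale as the other function's strong preferences, which seems like the kind of thing that "tends to go wrong" under such a restrictive rule.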
Are we just trying to model our preferences for the purposes of making predictions, or are we also trying to figure out how to make recommendations to ourselves… [i.e. CEV]?
The latter.
If the latter, then we can’t make any progress if we do not use some information other than revealed preferences.
Yes. I may have been unclear. I don’t mean to refer to revealed preference; I mean that refinements of the possible utility function are to be judged by the preferences they entail, not by anything else. For example, utilitarianism should be judged by its (repugnant) conclusions, not by the elegance of linear aggregation or whatever.
I think that other information takes a variety of forms: stuff like revealed preference, what philosophers think, neuroscience, etc. The trick is to define a prior that relates these things to desired preferences, and then work out what our preferences are in a state of partial knowledge.
The approach in the OP has a few other problems as well, which now have me leaning towards building this thing up from that (indirect normativity) base instead of going in with this “set of utility functions with probabilities” business.
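As an editorial sketch of the kind of setup the previous comment gestures at (a prior over a set of candidate utility functions, updated on evidence such as revealed preference or philosophers' judgments, then acting on the posterior mixture), here is a toy Python example. Every candidate name, likelihood, and utility value is a made-up assumption; nothing here is taken from the original discussion.

```python
# A toy, purely illustrative sketch of "a prior that relates these things to
# desired preferences": put a prior over candidate utility functions, treat
# revealed preference, philosophers' judgments, etc. as noisy evidence about
# which candidate is right, and act on the posterior-weighted mixture.
# All candidates, likelihoods, and utilities below are made up.

candidates = {
    "total_utilitarian": 0.3,        # prior probabilities (assumed)
    "average_utilitarian": 0.3,
    "preference_utilitarian": 0.4,
}

# Hypothetical likelihoods P(evidence | candidate) for two pieces of evidence:
# an observed choice, and a common philosophical intuition
# (e.g. rejecting the repugnant conclusion).
likelihoods = {
    "observed_choice":       {"total_utilitarian": 0.6,
                              "average_utilitarian": 0.5,
                              "preference_utilitarian": 0.7},
    "rejects_repugnant":     {"total_utilitarian": 0.1,
                              "average_utilitarian": 0.6,
                              "preference_utilitarian": 0.5},
}

def posterior(prior, evidence_items):
    """Bayes-update the prior over candidates on (assumed independent) evidence."""
    post = dict(prior)
    for e in evidence_items:
        post = {c: p * likelihoods[e][c] for c, p in post.items()}
        z = sum(post.values())
        post = {c: p / z for c, p in post.items()}
    return post

post = posterior(candidates, ["observed_choice", "rejects_repugnant"])

# "Preferences in a state of partial knowledge": rank options by their
# posterior-expected utility across the candidate utility functions.
utilities = {  # u_candidate(option), made-up values
    "option_A": {"total_utilitarian": 5.0, "average_utilitarian": 2.0,
                 "preference_utilitarian": 3.0},
    "option_B": {"total_utilitarian": 1.0, "average_utilitarian": 4.0,
                 "preference_utilitarian": 4.0},
}

for option, u in utilities.items():
    eu = sum(post[c] * u[c] for c in post)
    print(f"{option}: expected utility {eu:.2f}")
```

The last step quietly assumes the candidate utility functions are already on a common scale, which is exactly the normalization question from the earlier exchange; the "set of utility functions with probabilities" framing does not make that problem go away.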
Anyway, I am merely suggesting that when we only have a vague idea of what we want (which humans tend to do, and which is the motivation for the problem in the first place), it is not as simple as declaring that each km should be exactly what we want it to be.
Ok, because we don’t know what we want it to be.
Ok. It sounds like we mostly agree at this point.