By definition, if you choose X over Y, then X is a higher utility option than Y. That means utility represents wanting and not liking. But good utilitarians (and, presumably, artificial intelligences) try to maximize utility. This correlates contingently with maximizing happiness, but not necessarily.
You are equivocating on the term ‘utility’ here, as have so many other commenters before in this forum. In the first sentence above, ‘utility’ is used in the sense given to that term by axiomatic utility theory. When the preferences of an individual conform to a set of axioms, they can be represented by a ‘utility function’. The ‘utilities’ of this individual are the values of that function. By contrast, when ethicists discuss utilitarianism, what they mean by ‘utility’ is either pleasure or good. The empirical studies you cite, therefore, do not pose problems for utility theory or utilitarianism. They only pose problems for the muddled view on which utility functions represent that which hedonistic utilitarians think we ought to maximize.
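The distinction can be made concrete with a toy sketch. The option names and numbers below are hypothetical, not drawn from any study in the thread; the point is only that a utility function in the axiomatic sense just mirrors choice, and so can order options differently from an empirical measure of pleasure:

```python
# A 'utility function' in the axiomatic sense is any numerical representation
# of an agent's preference ordering; it need not track pleasure.

# Hypothetical revealed preferences, as (chosen, rejected) pairs:
choices = [("work_late", "relax"), ("relax", "junk_food")]

# Any assignment of numbers that gives chosen options higher values
# represents these preferences:
utility = {"work_late": 3, "relax": 2, "junk_food": 1}

# A separate, hypothetical hedonic measure may order the options differently:
pleasure = {"work_late": 1, "relax": 3, "junk_food": 2}

# The utility function faithfully represents choice ('wanting')...
assert all(utility[c] > utility[r] for c, r in choices)
# ...while disagreeing with the pleasure ordering ('liking'),
# illustrating the two distinct senses of 'utility'.
assert any(pleasure[c] < pleasure[r] for c, r in choices)
```

Nothing in the representation theorem forces the two dictionaries to agree; that is the sense in which the empirical studies bear on the muddled hybrid view rather than on either theory alone.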
You are equivocating on the term ‘utility’ here, as have so many other commenters before in this forum.
That seems to me to be an unfair reading. Nowhere does Yvain say that he’s using the axiomatic theory of utility. It’s true that he writes, “By definition, if you choose X over Y, then X is a higher utility option than Y.” However, this definition can hold in other theoretical frameworks besides axiomatic utility theory. In particular, the definition plausibly holds in the framework used by some ethical utilitarians. Yvain can therefore be read as using the same definition for utility throughout.
I accept Benthamite’s criticism as valid. It may not be obvious from the text, but in my mind I was definitely equivocating.
If we can’t use preference to determine ethical utility, it makes ethical utilitarianism a lot harder, but that might be something we have to live with. I don’t remember very much about Coherent Extrapolated Volition, but my vague memories say it makes that a lot harder too.
If we can’t use preference to determine ethical utility, it makes ethical utilitarianism a lot harder [...]
The way “preference” tends to be used in this community (as a more general word for “utility”, communicating the same idea without explicit reference to expected utility maximization), this isn’t right either. Actual decisions should be higher in utility than their alternatives, and it is preferable when they are, but the correspondence is far from factual, let alone true “by definition” (Re: “By definition, if you choose X over Y, then X is a higher utility option than Y”). One can go a fair way from actions to revealed preference, but only modulo human craziness and stupidity.
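One way the correspondence can fail outright: if craziness or stupidity makes revealed choices cyclic, then no utility function in the axiomatic sense can represent them at all. A brute-force sketch of that point (the option names and the checker are my own illustration, not anything from the thread):

```python
from itertools import permutations

def representable(options, prefs):
    """True if some numbering u satisfies u[x] > u[y] for every (x, y) in prefs.

    Brute force over strict rankings; fine for toy examples.
    """
    for order in permutations(options):
        u = {opt: rank for rank, opt in enumerate(order)}
        if all(u[x] > u[y] for x, y in prefs):
            return True
    return False

# Cyclic revealed choices: A over B, B over C, C over A.
cyclic = {("A", "B"), ("B", "C"), ("C", "A")}
print(representable({"A", "B", "C"}, cyclic))  # False: no ranking fits a cycle

# Drop one leg of the cycle and a utility function exists again.
acyclic = {("A", "B"), ("B", "C")}
print(representable({"A", "B", "C"}, acyclic))  # True
```

So going from actions to a utility function presupposes that the actions are already coherent enough to admit one, which is exactly what the caveat about craziness denies.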
I observe that you might have caught this mistake earlier via this heuristic: “Using the phrase ‘by definition’, anywhere outside of math, is among the most alarming signals of flawed argument I’ve ever found. It’s right up there with ‘Hitler’, ‘God’, ‘absolutely certain’ and ‘can’t prove that’.” I should probably rewrite “math” as “pure math” just to make this clearer.