One problem with the FAQ: The standard metaethics around here, at least EY’s metaethics, is not utilitarianism. Utilitarianism says maximize aggregate utility, with “aggregate” defined in some suitable way. EY’s metaethics says maximize your own utility (with the caveat that you have only partial knowledge of your utility function), and holds that all humans have sufficiently similar utility functions.
Utilitarianism isn’t a metaethic in the first place; it’s a family of ethical systems. Metaethical systems and ethical systems aren’t comparable objects. “Maximize your utility function” says nothing, for the reasons given by benelliott, and isn’t a metaethical claim (nor a correct summary of EY’s metaethic); metaethics deals with questions like:
What does moral language mean? Do moral facts exist? If so, what are they like, and are they reducible to natural facts? How can we know whether moral judgments are true or false? Is there a connection between making a moral judgment and being motivated to abide by it? Are moral judgments objective or subjective, relative or absolute? Does it make sense to talk about moral progress?
EY’s metaethic approaches those questions as an unpacking of “should” and other moral symbols. While it does give examples of some of the major object-level values we’d expect to find in ethical systems, it doesn’t generate a brand of utilitarianism or a specific utility function.
(And “utility” as in what an agent with a (VNM) utility function maximizes (in expectation), and “utility” as in what a utilitarian tries to maximize in aggregate over some set of beings, aren’t comparable objects either, and they should be kept cognitively separate.)
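The two senses of “utility” distinguished above can be made concrete. Here is a minimal sketch (my own illustration, not from the discussion; all names and numbers are made up): a decision-theoretic (VNM) utility function ranks lotteries by expected utility for a single agent, while the utilitarian’s quantity is some welfare measure summed over a set of agents.

```python
# Sense 1: decision-theoretic (VNM) utility. A single agent ranks *lotteries*
# (probability distributions over outcomes) by expected utility.
def expected_utility(lottery, u):
    """lottery: list of (probability, outcome) pairs; u: outcome -> float."""
    return sum(p * u(outcome) for p, outcome in lottery)

# Sense 2: utilitarian "utility". A welfare measure is summed over agents;
# it need not be any agent's VNM utility function.
def aggregate_welfare(agents, welfare):
    """agents: iterable of agent states; welfare: state -> float."""
    return sum(welfare(a) for a in agents)

# Example (illustrative numbers): hedonic welfare as pleasure minus pain.
agents = [{"pleasure": 3, "pain": 1}, {"pleasure": 2, "pain": 2}]
hedonic = lambda a: a["pleasure"] - a["pain"]
total = aggregate_welfare(agents, hedonic)  # 2
```

One reason to keep the two separate: a VNM utility function is only fixed up to a positive affine transform, so summing VNM utilities across agents is not automatically meaningful, whereas the utilitarian sum presupposes a common, interpersonally comparable welfare scale.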
Utilitarianism isn’t a metaethic in the first place; it’s a family of ethical systems.
Good point. Here’s the intuition behind my comment. Classical utilitarianism starts with “maximize aggregate utility” and jumps off from there (Mill calls it obvious, then gives a proof that he admits is flawed). This opens utilitarians up to a slew of standard criticisms (e.g. utility monsters). I’m not very well versed in more modern versions of utilitarianism, but the impression I get is that they do something similar. But, as you point out, all the utilitarian is saying is which utility function you should be maximizing (answer: the aggregate of the utility functions of all suitable agents).
EY’s metaethics, on the other hand, eventually says something like “maximize this specific utility function that we don’t know perfectly. Oh yeah, it’s your utility function, and most everyone else’s.” With a suitable utility function, EY’s metaethics seems completely compatible with utilitarianism, I admit, but that seems unlikely. The utilitarian has to take into account the murderer’s preference for murder, should that preference actually exist (and not be a confusion). It seems highly unlikely to me that I and most of my fellow humans (which is where the utility function in question exists) care about someone’s preference for murder, even assuming that I/we thought faster, more rationally, etc.
Oh, and a note on the “maximize your own utility function” language that I used. I tend to think about ethics in the first person: what should I do? Well, maximize my own utility function/preferences, whatever they are. I only start worrying about your preferences when I find out that they are information about my own preferences (or if I specifically care about your preferences in my own). This is an explanation of how I’m thinking, but I should know better than to use this language on LW, where most people haven’t seen it before and so will be confused.
all the utilitarian is saying is which utility function you should be maximizing (answer: the aggregate of the utility functions of all suitable agents)
The answer is the aggregate of some function for all suitable agents, but that function needn’t itself be a decision-theoretic utility function. It can be something else, like pleasure minus pain or even pleasure-not-derived-from-murder minus pain.
Ah, I was equating preference utilitarianism with utilitarianism.
I still think that calling yourself a utilitarian can be dangerous, if only because it instantly calls to mind (in some interlocutors) a list of stock objections that just don’t apply given EY’s metaethics. It may be worth sticking to the terminology despite the cost, though.