I am confused by discussions about utilitarianism on LessWrong. My understanding, which comes mostly from the SEP article, was that pretty much all variants of utilitarianism are based on the idea that each person’s quality of life can be quantified—i.e., that person’s “utility”—and these utilities can be aggregated. Under preference utilitarianism, a person’s utility is determined based on whether their values are being fulfilled. Under all of the classical formulations of utilitarianism, everyone’s utility function has the same weight when the aggregation is performed, hence the catchy phrase “greatest good for the greatest number”.
However, I have also seen LW posts and comments talk about utilitarianism in relation to how much you should value the lives of people close to you compared to other people, and how much you should value abstract things like “freedom” relative to people’s lives. This comment thread is one example. These discussions about valuing the lives of others and quantifying abstract values sound a lot like utility maximization under rational choice theory rather than utilitarianism.
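To make my understanding of the distinction concrete (this is my own rough formalization, not taken from the SEP article): utilitarianism says to choose the action $x$ that maximizes an aggregate welfare function such as

$$W(x) = \sum_{i=1}^{n} u_i(x),$$

where $u_i$ is person $i$’s utility and every person enters with the same weight. Utility maximization under rational choice theory only says that an agent $j$ maximizes some function $U_j(x)$ of its own, which is free to weight people unequally and to include terms for abstract values, e.g. $U_j(x) = \sum_i w_{ji}\, u_i(x) + v_j(x)$, with $w_{ji}$ the weight $j$ places on person $i$ and $v_j$ capturing things like freedom.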
So are people conflating utility maximization and utilitarianism, am I getting confused and misunderstanding the distinction, or is something else going on?
It’s true that people often conflate utilitarianism with consequentialism, but I don’t think that’s what’s going on here. I think it is quite reasonable to include under utilitarianism moral theories that are pretty close, like ones that weight people unequally when aggregating. If people think that raw utilitarianism doesn’t describe human morality, isn’t it more useful for the term to describe the cluster of theories departing from that reference point, rather than the single theory? Abstract values that are not per-person are more problematic to include under the umbrella, but searching for “free” in that post doesn’t turn up an example. If your definition is so narrow that you reject Nozick’s utility monster as having anything to do with utilitarianism, then your definition is too narrow. Also, without a normalization of individual utility functions, giving everyone “the same weight” does not pin down a unique theory.
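To spell out that last point (a standard observation about von Neumann–Morgenstern utilities; the range normalization below is just one illustrative choice): each $u_i$ is only defined up to a positive affine transformation $u_i \mapsto a_i u_i + b_i$ with $a_i > 0$, and the equal-weight sum $\sum_i u_i(x)$ is not invariant under independent rescalings of different people’s utilities, so rescaling one person’s $u_i$ can change which action the sum favors. “The same weight” only picks out a definite theory once you fix a normalization, e.g. rescaling every $u_i$ to range over $[0, 1]$.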
This confused me for a long time too. I ultimately came to the conclusion that “utilitarianism”, as that word is usually used by LessWrongers, doesn’t have the standard meaning of “an ethical theory that holds some kind of maximization of utils in the world to be the good”; instead it is used as something largely synonymous with “consequentialism”.
“Consequentialism” is too broad, “utilitarianism” is too narrow, and “VNM rationality” is too clumsy and not generally thought of as a school of ethical thought.
Often, yes.
It sounds like certain forms of egoism.