“My baby is dead. Six months old and she’s dead.” “Take solace in the knowledge that this is all part of the Corn God’s plan.” “Your god’s plan involves dead babies?” “If you’re gonna make an omelette, you’re gonna have to break a few children.” “I’m not entirely sure I want to eat that omelette.”
-- Scenes From A Multiverse
This works equally well as an argument against utilitarianism, which I’m guessing may be your intent.
I have no idea what people mean when they say they are against utilitarianism. My current interpretation is that they don’t think people should be VNM-rational, and I haven’t seen a cogent argument supporting this. Why isn’t this quote just establishing that the utility of babies is high?
I aspire to be VNM rational, but not a utilitarian.
It’s all very confusing because they both use the word “utility” but they seem to be different concepts. “Utilitarianism” is a particular moral theory that (depending on the speaker) assumes consequentialism, linearish aggregation of “utility” between people, independence and linearity of utility function components, utility proportional to “happiness” or “well-being” or preference fulfillment, etc. I’m sure any given utilitarian will disagree with something in that list, but I’ve seen all of them claimed.
VNM utility only assumes that you assign utilities to possibilities consistently, and that your utilities aggregate by expectation. It also assumes consequentialism in some sense, but it’s not hard to make utility assignments that aren’t really usefully described as consequentialist.
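As a minimal sketch of what “aggregate by expectation” means here (all outcome names and utility numbers below are made up purely for illustration):

```python
# A VNM-rational agent ranks lotteries (probability distributions over
# outcomes) purely by the expected value of a fixed utility function.

def expected_utility(lottery, utility):
    """lottery: list of (probability, outcome) pairs whose probabilities sum to 1."""
    return sum(p * utility[outcome] for p, outcome in lottery)

# Hypothetical utility assignments, for illustration only.
utility = {"status quo": 0.0, "save one life": 1.0, "save two lives": 1.9}

safe   = [(1.0, "save one life")]
gamble = [(0.5, "save two lives"), (0.5, "status quo")]

print(expected_utility(safe, utility))    # 1.0
print(expected_utility(gamble, utility))  # 0.95
```

The point is only the consistency requirement: once the utilities are fixed, the agent must prefer whichever lottery has the higher expectation; nothing in the axioms dictates what the utilities themselves should be.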
I reject “utilitarianism” because it is very vague, and because I disagree with many of its interpretations.
Thanks for the explanation. Reading through the Wikipedia article on utilitarianism, it seems like this is one of those words that has been muddled by the presence of too many authors using it. In the future I guess I should refer to the concept I had in mind as VNM-utilitarianism.
Probably best not to refer to it with the word “utilitarianism”, since it isn’t a form of that. Calling it “consequentialism” is arguably enough, since (making appropriate assumptions about what a rational agent must do) a rational consequentialist must use a VNM utility function. But I guess not everyone does in fact agree with those assumptions, so perhaps “utility-function based consequentialism”. Or perhaps “VNM-consequentialism”.
A bounded utility function that places a lot of value on signaling/being “a good person” and a desirable associate, getting some “warm glow”, and “mostly doing the (deontologically) right thing” seems like a pretty good approximation.
I find these criticisms by Vladimir_M to be really superb.
Okay. So none of that is an argument against VNM-rationality, it’s an argument against a bunch of other ideas that have historically been attached to the label “utilitarian,” right? The main thing I got out of that post is that utilitarianism is hard, not that it’s wrong.
I don’t know what you have in mind by your allusion to Morgenstern-von Neumann. The theorem is descriptive, right? It says you can model a certain broad class of decision-making entities as maximizing a utility function. What is VNM-rationality, and what does it mean to argue for it or against it?
If your goal is “to do the greatest good for the greatest number,” or a similar utilitarian goal, I am not sure how the VNM theorem helps you.
What do you think of the “interpersonal utility comparison” problem? Vladimir_M regards it as something close to a defeater of utilitarianism.
“People should aim to be VNM-rational.” I think of this as a weak claim, which is why I didn’t understand why people appeared to be arguing against it. I concluded that they probably weren’t, and instead meant something else by utilitarianism, which is why I switched to a different term.
Yes, that’s why I think of “people should aim to be VNM-rational” as a weak claim and didn’t understand why people appeared to be against it.
It seems like a very hard problem, but nobody claimed that ethics was easy. What does Vladimir_M think we should be doing instead?
What definition of “should” are you using here? Do you mean that people deontologically should aim to be VNM-rational? Or do you mean that people should be VNM-rational in order to maximize some (which?) utility function?
Can you spell this out a little more?
I don’t know. I think this comment reveals a lot of respect for what you might call “folk ethics,” i.e. the way normal people do it.
“People should aim for their behavior to satisfy the VNM axioms.” I’m not sure how to get more precise than this.
OK. But this seems funny to me as a moral prescription. In fact a standard premise of economics is that people’s behavior does satisfy the VNM axioms, or at least that deviations from them are random and cancel each other out at large scales. That’s sort of the point of the VNM theorem: you can model people’s behavior as though they were maximizing something, even if that’s not the way an individual understands his own behavior.
Even if you don’t buy that premise, it’s hard for me to see why famous utilitarians like Bentham or Singer would be pleased if people hewed more closely to the VNM axioms. Couldn’t they do so, and still make the world worse by valuing bad things?
Is “people should aim for their behavior to satisfy the VNM axioms” all that you meant originally by utilitarianism? From what you’ve written elsewhere in this thread it sounds like you might mean something more, but I could be misunderstanding.
Yes, but if I think that optimal moral behavior means using a specific utility function, somebody who isn’t being VNM-rational is incapable of optimal moral behavior.
It’s all I originally meant. I gathered from all of the responses that this is not how other people use the term, so I stopped using it that way.
Well, Alicorn is a deontologist.
In any case, as an ultrafinitist you should know the problems with the VNM theorem.
I also have no idea what people mean when they say they are deontologists. I’ve read Alicorn’s Deontology for Consequentialists and I still really have no idea. My current interpretation is that a deontologist will make a decision that makes everything worse if it upholds some moral principle, which just seems like obviously a bad idea to me. I think it’s reasonable to argue that deontology and virtue ethics describe heuristics for carrying out moral decisions in practice, but heuristics are heuristics because they break down, and I don’t see a reasonable way to judge which heuristics to use that isn’t consequentialist / utilitarian.
Then again, it’s quite likely that my understanding of these terms doesn’t agree with their colloquial use, in which case I need to find a better word for what I mean by consequentialist / utilitarian. Maybe I should stick to “VNM-rational.”
I also didn’t claim to be an ultrafinitist, although I have ultrafinitist sympathies. I haven’t worked through the proof of the VNM theorem yet in enough detail to understand how infinitary it is (although I intend to).
Taboo “make everything worse”.
At the very least I find it interesting how rarely an analogous objection is raised against VNM-utilitarians with different utility functions. It’s almost as if many of the “VNM-utilitarians” around here don’t care what it means to “make everything worse” as long as one avoids doing it, and does so by following the mathematically correct decision theory.
Well, the continuity axiom in the statement certainly seems dubious from an ultrafinitist point of view.
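For reference, this is the continuity axiom in its standard formulation (nothing here is specific to this thread):

```latex
% VNM continuity: for any lotteries L, M, N ranked L \preceq M \preceq N,
% there is a mixing probability p that makes the agent indifferent
% between M and the p-weighted mixture of L and N.
L \preceq M \preceq N \;\Longrightarrow\; \exists\, p \in [0,1] :\;
p L + (1 - p) N \sim M
```

The existential quantifier ranges over the real interval [0, 1], which is presumably where an ultrafinitist would balk.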
Have worse consequences for everybody, where “everybody” means present and future agents to which we assign moral value. For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle.
Rarely? Isn’t this exactly what we’re talking about when we talk about paperclip maximizers?
When I asked you to taboo “makes everything worse”, I meant taboo “worse” not taboo “everything”.
You want me to say something like “worse with respect to some utility function” and you want to respond with something like “a VNM-rational agent with a different utility function has the same property.” I didn’t claim that I reject deontologists but accept VNM-rational agents even if they have different utility functions from me. I’m just trying to explain that my current understanding of deontology makes it seem like a bad idea to me, which is why I don’t think it’s accurate. Are you trying to correct my understanding of deontology or are you agreeing with it but disagreeing that it’s a bad idea?
No, I’m going to respond by asking you “with respect to which utility function?” and “why should I care about that utility function?”
You’ve assumed vague-utilitarianism here, which weakens your point. I would taboo “make everything worse” as “less freedom, health, fun, awesomeness, happiness, truth, etc.”, where the list refers to all the good things, as argued in the metaethics sequence.
Nice try. The problem with your definition is that freedom, for example, is fundamentally a deontological concept. If you don’t agree, I challenge you to give a non-deontological definition.
What is a deontological concept and what is a non-deontological concept?
After thinking about it some more, I think I have a better way to explain what I mean.
What is freedom? One (not very good but illustrative) definition is the ability to make meaningful choices. Notice that this means respecting someone else’s freedom is a constraint on one’s decision algorithm, not just on one’s outcomes, and thus it doesn’t satisfy the VNM axioms.
It sounds to me like you’re implicitly enforcing a Cartesian separation between the physical world and the algorithms that agents in it run. Properties of the algorithms that agents in the world run are still properties of the world.
I don’t see why I’m relying on it any more than the VNM-utilitarian is.
I thought I had made that clear in my second sentence:
Um, no. I can’t respond to a challenge to give a non-X definition of Y if I don’t know what X means.
A sufficiently crazy consequentialist might want to kill all such agents because he’s scared of what the voices in his head might otherwise do. Your argument is not an argument at all.
And if the sacred moral principle leads to the deontologist killing everyone, that is a pretty terrible moral principle. Usually they’re not like that. Usually the “don’t kill people if you can help it” moral principle tends to be ranked pretty high up there to prevent things like this from happening.
Smells like consequentialist reasoning. Look, if I had a better example I would give it, but I am genuinely not sure what deontologists think they’re doing if they don’t think they’re just using heuristics that approximate consequentialist reasoning.
Huh? How so?
Replace the “Corn God” in the quote with a sufficiently rational utilitarian agent.
To make sure I understand, do you mean that a sufficiently rational utilitarian agent may decide to kill a six-month-old baby if it decides that would serve its goal of maximizing aggregate utility, and if I’m pretty sure that no six-month-old baby should ever be intentionally killed, I would conclude that utilitarianism is probably wrong?
Nah, it’s just a cheap shot at the theists.
EDIT: not sure about the source, but the way it’s edited …
I hadn’t actually thought of that, but that could be part of why I liked the quote.