> Have worse consequences for everybody, where “everybody” means present and future agents to which we assign moral value. For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle.
At the very least, I find it interesting how rarely an analogous objection is raised against VNM-utilitarians with different utility functions. It’s almost as if many of the “VNM-utilitarians” around here don’t care what it means to “make everything worse” as long as one avoids doing it, and does so while following the mathematically correct decision theory.
Rarely? Isn’t this exactly what we’re talking about when we talk about paperclip maximizers?
When I asked you to taboo “makes everything worse”, I meant taboo “worse”, not taboo “everything”.
You want me to say something like “worse with respect to some utility function” and you want to respond with something like “a VNM-rational agent with a different utility function has the same property.” I didn’t claim that I reject deontologists but accept VNM-rational agents even if they have different utility functions from me. I’m just trying to explain that my current understanding of deontology makes it seem like a bad idea to me, which is why I suspect that understanding isn’t accurate. Are you trying to correct my understanding of deontology, or are you agreeing with it but disagreeing that it’s a bad idea?
> You want me to say something like “worse with respect to some utility function” and you want to respond with something like “a VNM-rational agent with a different utility function has the same property.”
No, I’m going to respond by asking you “with respect to which utility function?” and “why should I care about that utility function?”
> Have worse consequences for everybody, where “everybody” means present and future agents to which we assign moral value.
You’ve assumed vague-utilitarianism here, which weakens your point. I would taboo “make everything worse” as “less freedom, health, fun, awesomeness, happiness, truth, etc.”, where the list refers to all the good things, as argued in the metaethics sequence.
> You’ve assumed vague-utilitarianism here, which weakens your point. I would taboo “make everything worse” as “less freedom, health, fun, awesomeness, happiness, truth, etc.”
Nice try. The problem with your definition is that freedom, for example, is fundamentally a deontological concept. If you don’t agree, I challenge you to give a non-deontological definition.
What is a deontological concept and what is a non-deontological concept?
After thinking about it some more, I think I have a better way to explain what I mean.
What is freedom? One (not very good, but illustrative) definition is the ability to make meaningful choices. Notice that this makes respecting someone else’s freedom a constraint on one’s decision algorithm, not just on one’s outcomes, so it doesn’t satisfy the VNM axioms.
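To make the distinction concrete, here is a toy sketch; the actions, utilities, and the coercion flag are all made up for illustration, not anything from the VNM formalism. The outcome-ranking chooser looks only at the utility of what results; the freedom-respecting chooser filters acts by a property of the act itself, before any outcome-ranking happens.

```python
# Toy sketch: ranking acts by outcome-utility alone vs. filtering acts
# by a property of the act itself. All names and numbers are made up.

# action -> (utility of resulting outcome, whether the act coerces someone)
actions = {
    "coerce": (10, True),
    "persuade": (7, False),
    "do_nothing": (0, False),
}

def outcome_ranking_choice(acts):
    """Looks only at the utility of the outcome each act leads to."""
    return max(acts, key=lambda a: acts[a][0])

def freedom_constrained_choice(acts):
    """Filters on a property of the act (coercion), then ranks outcomes."""
    permitted = {a: v for a, v in acts.items() if not v[1]}
    return max(permitted, key=lambda a: permitted[a][0])

print(outcome_ranking_choice(actions))      # -> coerce
print(freedom_constrained_choice(actions))  # -> persuade
```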
It sounds to me like you’re implicitly enforcing a Cartesian separation between the physical world and the algorithms that agents in it run. Properties of the algorithms that agents in the world run are still properties of the world.
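A minimal sketch of that point, reusing the same made-up numbers: once an “outcome” is a world-history that records which act was taken, the act-level constraint can be re-expressed as ordinary outcome-ranking (the penalty value below is arbitrary).

```python
# Same made-up numbers, but now an "outcome" is a world-history that
# records which act produced it, so the act-level constraint shows up
# as ordinary outcome-ranking.

actions = {
    "coerce": (10, True),
    "persuade": (7, False),
    "do_nothing": (0, False),
}

def history_utility(outcome_utility, coerced):
    """A history in which coercion occurred is itself ranked lower."""
    return outcome_utility - (1000 if coerced else 0)

def choice_over_histories(acts):
    return max(acts, key=lambda a: history_utility(*acts[a]))

print(choice_over_histories(actions))  # -> persuade, with no act-level filter
```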
I don’t see why I’m relying on it any more than the VNM-utilitarian is.

I thought I had made that clear in my second sentence:

> If you don’t agree, I challenge you to give a non-deontological definition.

Um, no. I can’t respond to a challenge to give a non-X definition of Y if I don’t know what X means.

> For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle.
A sufficiently crazy consequentialist might want to kill all such agents because he’s scared of what the voices in his head might otherwise do. Your argument is not an argument at all.
And if the sacred moral principle leads to the deontologist killing everyone, that is a pretty terrible moral principle. Usually they’re not like that: the “don’t kill people if you can help it” principle tends to be ranked pretty high up there precisely to prevent things like this from happening.
Smells like consequentialist reasoning. Look, if I had a better example I would give it, but I am genuinely not sure what deontologists think they’re doing if they don’t think they’re just using heuristics that approximate consequentialist reasoning.