I claim that consequentialism is correct because it tends to follow from some basic axioms like moral responsibility being contagious backwards through causality, which I accept.
I further claim that deontologists, if they accept such axioms (which you claim they do), degenerate into consequentialists given sufficient reflection. (To be precise: there exists some finite sequence of reflective steps after which a deontologist will become a consequentialist.)
I will note that though consequentialism is a fine ideal theory, at some point you really do have to implement a procedure, which means that in practice all consequentialists will be deontologists. (Possibly following the degenerate rule “see the future and pick actions that maximize expected utility (EU)”, though they will usually have other rules, like “Don’t kill anyone even if it’s the right thing to do”.) However, their deontological procedure will be ultimately justified by consequentialist reasoning.
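The degenerate rule “see the future and pick actions that maximize EU” can be sketched as a toy decision procedure. This is purely my own illustration, not anyone’s actual proposal; the actions, probabilities, and utilities are made up:

```python
# Toy sketch of "see the future and pick actions that maximize EU".
# All names and numbers here are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def pick_action(options):
    """Pick the action whose outcome distribution has the highest EU."""
    return max(options, key=lambda a: expected_utility(options[a]))

options = {
    "lie":        [(0.9, -5.0), (0.1, 10.0)],  # EU = -3.5
    "tell_truth": [(1.0, 2.0)],                # EU = 2.0
}
print(pick_action(options))  # tell_truth
```

In practice, of course, the whole point of the thread is that nobody can actually enumerate the outcome distributions, which is why the cached rules take over.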
I further claim that deontologists, if they accept such axioms (which you claim they do), degenerate into consequentialists given sufficient reflection.
My main objection is that this further claim wasn’t really argued in the original point. It was simply assumed—and it’s just too controversial a claim to assume. The net effect of your assumption was an inflationary use of the term—if consequentialist means what you said, all the interesting disputants in moral philosophy are consequentialists, whether they realize it or not.
It might be the case that your proposition is correct, and asserted non-consequentialists are just confused. I was objecting to assuming this when it was irrelevant to your broader point about the advantages of the label “awesome” in discussing moral reasoning. The overall point you were trying to make is equally insightful whether your further assertion is true or not.
I will note that though consequentialism is a fine ideal theory, at some point you really do have to implement a procedure, which means in practice, all consequentialists will be deontologists.
Agreed. This is usually called “rule utilitarianism” – the idea that, in practice, it actually conserves utils to just make a set of basic rules and follow them, rather than recalculating from scratch the utility of any given action each time you make a decision. Like, “don’t murder” is a pretty safe one, because it seems like in the vast majority of situations taking a life will have negative utility. However, it’s still worth distinguishing this sharply from deontology, because if you ever did calculate and find a situation in which your rule resulted in lower utility – like pushing the fat man in front of the train – you’d break the rule. The rule is an efficiency-approximation rather than a fundamental posit.
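The rule-as-efficiency-approximation idea can be sketched as a toy procedure (again my own illustration, with made-up names and numbers): the cached rule is followed by default, but an explicit calculation, when one is actually done, trumps it. That override is exactly what a deontologist would refuse:

```python
# Toy sketch of a rule as an efficiency-approximation rather than a
# fundamental posit: follow the cached rule by default, but let an
# explicit utility calculation, when one is done, override it.

RULES = {"kill": False}  # cached verdicts; "don't murder" by default

def permitted(action, calculated_utility=None):
    """Rule-utilitarian check: the rule stands unless an explicit
    calculation shows the act has positive net utility."""
    if calculated_utility is None:      # no calculation: just follow the rule
        return RULES.get(action, True)
    return calculated_utility > 0       # the calculation trumps the rule

print(permitted("kill"))                           # False
print(permitted("kill", calculated_utility=4.0))   # True (the fat-man case)
```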
The point of rule utilitarianism isn’t only to save computational resources. It’s also that in any particular concrete situation we’re liable to have all sorts of non-moral motivations pulling at us, and those are liable to “leak” into whatever moral calculations we try to do and produce biased answers. Whereas if we work out ahead of time what our values are and turn them into sufficiently clear-cut rules (or procedures, or something), we don’t have that option. Hence “don’t kill anyone even if it’s the right thing to do”, as nyan_sandwich puts it—I think quoting someone else, maybe EY.
(A tangential remark, which you should feel free to ignore: The above may make it sound as if rule utilitarianism is only appropriate for those whose goal is to prioritize morality above absolutely everything else, and therefore for scarcely anyone. I think this is wrong, for two reasons. Firstly, the values you encode into those clear-cut rules don’t have to be only of the sort generally called “moral”. You can build into them a strong preference for your own welfare over others’, or whatever. Secondly, you always have the option of working out what your moral principles say you should do and then doing something else; but the rule-utilitarian approach makes it harder to do that while fooling yourself into thinking you aren’t.)
The above may make it sound as if rule utilitarianism is only appropriate for those whose goal is to prioritize morality above absolutely everything else, and therefore for scarcely anyone.
However, it’s still worth distinguishing this sharply from deontology, because if you ever did calculate and find a situation in which your rule resulted in lower utility – like pushing the fat man in front of the train – you’d break the rule.
But the moment you allow sub-rules as exceptions to the general rules, as in the quoted part above, you lay the groundwork for rule-consequentialism to collapse into act-consequentialism via an unending chain of sub-rules. See Lyons (1965).
Further, as a consequentialist, you have to think about the effects of accepting a decision theory that lets you push the fat man onto the train tracks, and what that means for the decision processes of other agents as well.
Neither should a good deontologist.