The best distinction I’ve seen between the two consists in whether you honour or promote your values.
Say I value not-murdering.
If I’m a consequentialist, I’ll act on this by trying to maximise the amount of non-murdering (or minimising the amount of murdering). This might include murdering someone who I knew was a particularly prolific murderer.
If I’m a deontologist, I’ll act on this value by honouring it: I’ll withhold from murdering anyone, even if this might increase the total amount of murdering.
Unfortunately I can’t remember offhand who came up with this analysis.
This sounds like they are, in fact, valuing different things altogether. The consequentialist disvalues the amount of murdering there is, while the deontologist disvalues doing the murdering themselves.
If the deontologist and consequentialist both value not-murdering-people, then the consequentialist takes the action which leads to them not having murdered anyone (so they don’t murder, even if it means more total murdering), and the deontologist is as quoted.
If they both disvalue the total amount of murders, the deontologist will honour not-doing-things-which-increase-the-total-amount-of-murders, which by logical necessity implies ¬(refraining from murdering this one time), which means they also murder for the sake of less murdering.
It seems the distinction is, again, merely one of degree and probability estimates, plus a difference in where in conceptspace people from the two “camps” tend to pinpoint their values. In other words, the only real difference between consequentialists and deontologists seems to be the language they use and the empirical clusters of things they tend to value more, including differing probability estimates of how likely certain outcomes are.
I think it isn’t precise to say that they value different things, since the deontologist doesn’t decide in terms of values. Speaking of values is practical from the point of view of a consequentialist, who compares different possible states (or histories) of the world; values are then functions defined over the set of world states which the decider tries to maximise. A pure ideal deontologist doesn’t do that: his moral decisions are local (i.e. they take into account only the deontologist’s own action and perhaps its immediate context) and binary (i.e. the considered action is either approved or not; it isn’t compared to other possible actions). If several actions are approved, the deontologist may use whatever algorithm to choose between them, but this choice is outside the domain of deontological ethics.
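The two decision procedures contrasted above can be sketched in code. Everything below (the toy actions, the murder counts, the permission predicate) is an illustrative assumption of mine, not anything from the discussion: the consequentialist maximises a value function over predicted world states, while the deontologist merely applies a local, binary approval test to each action.

```python
def consequentialist_choice(actions, outcome_of, value):
    """Pick the action whose predicted outcome maximises a value function
    defined over world states."""
    return max(actions, key=lambda a: value(outcome_of(a)))

def deontologist_filter(actions, is_permitted):
    """Locally and binarily approve each action; choosing among the
    approved actions is outside the ethical theory itself."""
    return [a for a in actions if is_permitted(a)]

# Toy world: murdering a prolific murderer would leave 1 total murder,
# refraining would leave 5.
outcomes = {"murder_the_murderer": 1, "refrain": 5}  # total murders afterwards
value = lambda total_murders: -total_murders         # disvalue the total

chosen = consequentialist_choice(outcomes, outcomes.__getitem__, value)
approved = deontologist_filter(outcomes, lambda a: a != "murder_the_murderer")
print(chosen)    # the consequentialist murders for the sake of less murdering
print(approved)  # the deontologist approves only refraining
```

The point of the sketch is the type difference: the consequentialist’s `value` takes a world state, while the deontologist’s `is_permitted` takes only the agent’s own action.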
Deontological rules can’t force one to act as if one valued some total amount of murders (low or high), since the total amount of murders isn’t one’s own action. Formulating the preference as a “deontological” rule of “you shouldn’t do things that you believe would increase the total amount of murders” is sneaking consequentialism into deontology.
This is not at all clear to me. The Kantian Categorical Imperative is usually seen as a deontological rule, even though it’s really a formulation of ‘reflective’ concerns (viz., ‘you should not act as you would not have everyone act’, akin to the Silver and Golden Rule) that could be seen as meta-ethical in their own right.
Good point. This also explains why we are so willing to delegate “killing” to external entities, such as job occupations (when the “killing” involves chickens and cattle) and authorities (when we target war enemies, terrorists and the like; of course this comes with very strict safeguards and due processes). More recently, we have also started delegating our “killing” to machines such as drones; admittedly, this ignores the truism that drones don’t kill people, people kill people.
Maybe if we were less deontological and more consequentialist in our outlook, there would be less of this kind of delegation.
That depends: a deontological outlook with a maxim that you are responsible for what is done in your name would be even more effective.