The problem with unbreakable rules is that you’re only allowed to have one. Suppose I have a moral duty to tell the truth no matter what and a moral duty to protect the innocent no matter what. Then what do I do if I find myself in a situation where the only way I can protect the innocent is by lying?
More generally, real life finds us in situations where we are forced to make tradeoffs, and furthermore, real life is continuous in a way that is not well-captured by qualitative rules. What if I think I have a 98% chance of protecting the innocent by lying?---or a 51% chance, or a 40% chance? What if I think a statement is 60% probable but I assert it confidently; is that a “lie”? &c., &c.
“Lying is wrong because I swore an oath to be honest” or “Lying is wrong because people have a right to the truth” may be good summaries of more-or-less what you’re trying to do and why, but they’re far too brittle to be your actual decision process. Real life has implementation details, and the implementation details are not made out of English sentences.
I second the question. Is there a standard reply in deontology? The standard reply of a consequentialist, of course, is the utility function.
I don’t know whether there is a standard reply in deontology, but the appropriate reply is to use a function analogous to the consequentialist’s utility function:
Take the concept of the utility function.
Rename it to something suitably impressive (but I’ll just go with the bland ‘deontological decision function’).
Replace ‘utility of this decision’ with ‘rightness of this decision’.
A primitive utility function may include a term for ‘my bank balance’. A primitive deontological decision function would have a term for “Not telling a lie”.
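As a concrete sketch of the renaming exercise above, here is what a toy ‘deontological decision function’ might look like in Python. It is entirely hypothetical: the rule names, weights, and act descriptions are invented for illustration, and the weighted-sum form is just one simple way such a function could be built. It also shows how the continuous cases from the top comment (a 98% chance of protecting the innocent by lying) fall out naturally.

```python
# Hypothetical sketch of a 'deontological decision function': a utility
# function with 'utility' renamed to 'rightness' and terms for duties
# rather than outcomes like bank balance. Rule names and weights are
# invented placeholders.

# Each rule: (weight, scoring function). Scores are in [0, 1];
# weights express how much the agent cares about each duty.
RULES = {
    "not telling a lie":       (10.0, lambda act: 0.0 if act["lies"] else 1.0),
    "protecting the innocent": (25.0, lambda act: act["p_protects_innocent"]),
}

def rightness(act):
    """Weighted sum of rule terms -- the deontological analogue of utility."""
    return sum(weight * term(act) for weight, term in RULES.values())

# The dilemma from the top comment: lie with a 98% chance of protecting
# the innocent, or tell the truth and fail to protect them.
lie = {"lies": True, "p_protects_innocent": 0.98}
truth = {"lies": False, "p_protects_innocent": 0.0}

best = max([lie, truth], key=rightness)
```

With these (invented) weights, lying to protect the innocent scores 24.5 against 10.0 for truth-telling, so the function trades one duty off against another instead of treating either as unbreakable.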
Obviously, the ‘deontological decision function’ sacrifices the ‘unbreakable’ criterion. This is appropriate when making a fair comparison between consequentialist and deontological decisions: the utility function likewise sacrifices absolute reliance on any one particular desideratum in order to accommodate all the others.
For the sake of completeness I’ll enumerate what seem to be the only possible approaches that actually allow having multiple unbreakable rules.
1) Only allow unbreakable rules that never contradict each other. This involves making the rules more complex. For example:
Always rescue puppies.
Never lie, except if it saves the life of puppies.
Do not commit adultery unless you are prostituting yourself in order to donate to the (R)SPCA.
Such a system results in an approximation of the continuous deontological decision function.
2) Just have a single unbreakable meta-rule. For example:
Always do whatever the deontological decision function ranks as most right. Or,
Always maximise utility.
These responses amount to “Hack a deontological system with unbreakable rules to work around the spirit of either ‘unbreakable’ or ‘deontological’” and I include them only for completeness. My main point is that a deontological approach can be practically the same as the consequentialist ‘utility function’ approach.
It disappoints me that this comment is currently at −1. Of all the comments I have made in the last week this was probably my favourite and it remains so now.
If “the standard reply of a consequentialist is the utility function” then the analogous reply of a deontologist is something very similar. It is unreasonable to compare consequentialism equipped with a utility function to a deontological system in which the rules are ‘unbreakable’. The latter is an absurd caricature of deontological reasoning that is only worth mentioning at all because deontologists are on average less inclined to follow their undeveloped thoughts through to the natural conclusion.
Was my post downvoted because...?
Someone disagrees that a ‘function’ system applies to deontology just as it applies to consequentialism.
I have missed the fact that the conclusion is universally apparent and I am merely stating the obvious.
I included an appendix to acknowledge the consequences of the ‘universal rule’ system and elaborate on what a coherent system will look like if this universality cannot be let go.
I haven’t voted on your comment. I like parts of it, but found other parts very hard to interpret, to the point where they might have altered the reading of the parts I like, and so I was left with no way to assess its content. If I had downvoted, it would be because of the confusion and a desire to see fewer confusing comments.
Thank you. A reasonable judgement. Not something that is trivial to rectify, but certainly not an objection I object to.
I’m pretty sure the standard reply is, “Sometimes there is no right answer.” These are rules for classifying actions as moral or immoral, not rules that describe the behavior of an always moral actor. If every possible action (including inaction) is immoral, then your actions are immoral.
In my experience, deontologists treat this as a feature rather than a bug. The absolute necessity that the rules never conflict is a constraint, which, they think, helps them to deduce what those rules must be.
This assumes that deontological rules must be unbreakable, doesn’t it? That might be true for Kantian deontology, but probably isn’t true for Rossian deontology or situation ethics.
We can, for instance, imagine a deontological system (moral code) with three rules A, B and C. Where A and B conflict, B takes precedence; where B and C conflict, C takes precedence; where C and A conflict, A takes precedence (and there are no circumstances where rules A, B and C all apply together). That would give a clear moral conclusion in all cases, but with no unbreakable rules at all.
True, there would be a complex, messy rule which combines A, B and C in such a way as not to create exceptions, but the messy rule is not itself part of the moral code, so it is not strictly a deontological rule.
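The pairwise-precedence scheme above can be sketched as a tiny resolver. The rule names A, B and C are the placeholders from the text; the table and function are invented for illustration, and the sketch assumes (as the text does) that all three rules never apply at once.

```python
# Hypothetical sketch of the three-rule precedence cycle described above:
# B beats A, C beats B, A beats C. Every two-rule conflict resolves
# cleanly, yet no rule is unbreakable -- each one loses some conflict.

# (loser, winner) pairs for each two-rule conflict, stored in sorted order
PRECEDENCE = {("A", "B"): "B", ("B", "C"): "C", ("A", "C"): "A"}

def resolve(applicable):
    """Return the rule to follow, given the set of rules that apply.

    Assumes at most two of the three rules ever apply together.
    """
    rules = sorted(applicable)
    if len(rules) == 1:
        return rules[0]
    # look up the winner for this (sorted) pair of conflicting rules
    return PRECEDENCE[tuple(rules)]
```

Note how the cycle means that for each rule there exists some conflict in which it yields, which is exactly why the system has a clear verdict everywhere without any single rule being unbreakable.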
All unbreakable rules in a deontological moral system are negative; you would never have one saying “protect the innocent.” But you can have “don’t lie” and “don’t murder” and so on.
And no, if you answer the question truthfully, failing to protect the innocent, they don’t count that as murdering (unless there was some other choice that you could have made without either lying or failing to protect the person.)
This isn’t necessarily the case. You can have positive requirements in a deontic system.
Yes, but not “unbreakable” ones. In other words there will be exceptions on account of some other positive or negative requirement, as in the objections above.
“Allowed”?
It is quite common for moral systems found in the field to have multiple unbreakable rules and for subscribers to be faced with the bad moral luck of having to break one of them. The moral system probably has a preference on the choice, but it still condemns the act and the person.
A really clever deontic theory either doesn’t permit those conflicts, or has a meta-rule that tells you what to do when they happen. (My favored solution is to privilege the null action.)
A deontic theory might take into account your probability assessments, or ideal probability assessments, regarding the likely outcome of your action.
And of course if you’re going to fully describe what a rule means, you have to define things in it like “lie”, just as to fully describe utilitarianism you have to define “utility”.
It’s true that the detail of real life is an objection to deontology, but it is also an objection to every other moral system, for much the same reasons.
If you think that lying is just wrong, can’t you just… not lie? I don’t see the problem here.
Yes. It may or may not cause the extinction of humanity, but if you want to ‘just… not lie’ you can certainly do so.
Can a deontologist still care about consequences?
Suppose you believe that lying is wrong for deontic reasons. Does it follow that we should program an AI never to lie? If so, can a consequentialist counter with arguments about how that would result in destroying the universe and (assuming those arguments were empirically correct) have a hope of changing your mind?
A deontologist may care about consequences, of course. I think whether and how much you are responsible for the lies of an AI you create probably depends on the exact theory. And of course knowingly doing something to risk destroying the world would almost certainly be worse than lying-by-proxy, so such arguments could be effective.