I agree with a lot of things in this post and disagree with a lot of things in this post, but before I comment in more detail, I would like to clarify one thing, please:
Are you aware that there exist moral frameworks that aren’t act consequentialism? And if so, are you aware that some people adhere to these other moral frameworks? And if so, do you think that those people are all idiots, crazy, or crazy idiots?
(These questions are not rhetorical. Especially the last one, despite it obviously sounding like the most rhetorical of the set. But it’s not!)
Yes, I am aware of other moral frameworks, and I freely confess to having ignored them entirely in this essay. In my defence, a lot of people are (or claim to be, or aspire to be) one variant of consequentialist or another. Against strict Kantian deontologists I admit no version of this argument could be persuasive, and they’re free to bite the other bullet and ~~fail to achieve any good outcomes~~ sometimes produce avoidable bad outcomes. Against rule utilitarians (who I am counting as a primary target audience) this issue is much more thorny than for act utilitarians, but I am hoping to persuade them that never lying is not actually a good rule to endorse and that they shouldn’t endorse it.
I don’t necessarily think they’re crazy, but I do think that, to varying extents, they’d be lowering their own effectiveness by not accepting some variation on this position, and they should at least do so knowingly.
Ok, thanks. (You omit from your enumeration rule consequentialists who are not utilitarians, but I infer that you have a similar attitude toward these as you do towards rule utilitarians.)
Well, as I am most partial to rule consequentialism, I have to agree that “this issue is much more thorny”. On the one hand, I agree with you that “never lie” is not a good rule to endorse (if only for the very straightforward reason that lying is sometimes not only permissible, but in fact morally obligatory, so if you adopted a “never lie” rule then this would obligate you to predictably behave in an immoral way). On the other hand, I consider act consequentialism[1] to be obviously foolish and doomed (for boring, yet completely non-dismissible and unavoidable, reasons of bounded rationality etc.), so your proposed solution where you simply “do the Expected Utility Calculation” is a non-starter. (You even admit that this calculation cannot be done, but then say to pretend to do it anyway; this looks to me like saying “the solution I propose can’t actually work, but do it anyway”. Well, no, if it can’t work, then obviously I shouldn’t do it, duh.)
(More commentary to come later.)
Utilitarianism, specifically (of any stripe whatsoever, and as distinct from non-utilitarian consequentialist frameworks), seems to me to be rejectable in a thoroughly overdetermined manner.
Against strict Kantian deontologists I admit no version of this argument could be persuasive and they’re free to bite the other bullet and fail to achieve any good outcomes.
Note that this is very different from what you said in your post, which is “sometimes you will lose.” (And this one seems obviously false.)
I agree and have edited. Sorry for overstating the position here (though not in the original post).