Ok, thanks. (You omit from your enumeration rule consequentialists who are not utilitarians, but I infer that you have a similar attitude toward them as you do toward rule utilitarians.)
Well, as I am most partial to rule consequentialism, I have to agree that “this issue is much more thorny”. On the one hand, I agree with you that “never lie” is not a good rule to endorse (if only for the very straightforward reason that lying is sometimes not merely permissible but in fact morally obligatory, so if you adopted a “never lie” rule, it would obligate you to predictably behave in an immoral way). On the other hand, I consider act consequentialism[1] to be obviously foolish and doomed (for boring, yet completely non-dismissable and unavoidable, reasons of bounded rationality, etc.), so your proposed solution where you simply “do the Expected Utility Calculation” is a non-starter. (You even admit that this calculation cannot be done, but then say to pretend to do it anyway; this looks to me like saying “the solution I propose can’t actually work, but do it anyway”. Well, no, if it can’t work, then obviously I shouldn’t do it, duh.)
Utilitarianism specifically (of any stripe whatsoever, and as distinct from non-utilitarian consequentialist frameworks) seems to me to be rejectable in a thoroughly overdetermined manner.
(More commentary to come later.)