Excellent post. There is not much here to agree or disagree with—which, to be clear, is a compliment! Your explanations seem mostly to be consistent with what I’ve been taught and have read.
A couple of fairly minor notes:
“maximising” moral theories like classical utilitarianism claim that the only action one is permitted to take is the very best action, leaving no room for choosing the “prudentially best” action out of a range of “morally acceptable” actions
This accords with my own understanding, but I should note that I’ve seen utilitarians deny this. That is, the claim seemed to be (on the several occasions I’ve seen it made) that this is “not even wrong”, and misunderstands utilitarianism. I was not able to figure out just what the confusion was, so I can’t say much more than that; I just figured it was worth noting. (I am not a utilitarian myself, to be clear.)
[stuff about axiology]
I found this part unsatisfying, but I don’t think it’s your fault. In fact I’ve always found the idea of axiology—the so-called “study of ‘value’”—to be rather confused. There is (it seems to me) a non-confused version which boils down to conceptual analysis of the concept of ‘value’, but this would be quite orthogonal to both morality and prudence (and everything else in this post). Anyhow, this is a digression, and I think mostly irrelevant to any points you intend to make.

I very much look forward to the next post!
Not a philosopher, but common-sensically, I understand utilitarianism as saying that actions that create more good for more people are progressively more praiseworthy. It’s something else to label the one very best possible action as “moral / permitted” and label every other action as “immoral / forbidden”. That seems like a weird and counterproductive way to talk about things. Do utilitarians actually do that?
Ethics/morality is generally understood to be a way to answer the question, “what is the right thing to do [in some circumstance / class of circumstances]?” (or, in other words, “what ought I to do [in this circumstance / class of circumstances]?”)
If, in answer to this, your ethical framework / moral system / etc. says “well, action X is better than action Y, but even better would be action Z”, then you don’t actually have an answer to your question (yet), do you? Because the obvious follow-up is, “Well, ok, so… which of those things should I do? X? Or Y? Or Z…?”
At that point, your morality can give you one of several answers:
1. “Any of those things is acceptable. You ought to do something in the set { X, Y, Z } (but definitely don’t do action W!); but which of those three things to do is really up to you. Although X is more morally praiseworthy than Y, and Z more praiseworthy than X. If you care about that sort of thing.”
2. “You ought to do the best thing (which is Z).”
3. “I cannot answer your question. There is no right thing to do, nor is there such a thing as ‘the thing you ought to do’ or even ‘a thing you ought to do’. Some things are simply better than others.”
If your morality gives answer #3, then what you have is actually not a morality, but merely an axiology. In other words, you have a ranking of actions, but what do you do with this ranking? Not clear. If you want your initial question (“what ought I to do?”) answered, you still need a morality!
Now, an axiology can certainly be a component of a morality. For example, if you have a decision rule that says “rank all available actions, then do the one at the top of the ranking”, and you also have a utilitarian axiology, then you can put them together and presto!—you’ve got a morality. (You might have a different decision rule instead, of course, but you do need one.)
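To make the “axiology + decision rule” composition concrete, here is a minimal toy sketch in Python. (All the names and utility numbers are mine and purely hypothetical; this just shows the structure, not any actual utilitarian calculus.)

```python
# Toy illustration: a "morality" assembled from an axiology plus a decision rule.
# The utility numbers are made up for the sake of the example.

def utilitarian_axiology(action):
    """Axiology: assigns each available action an evaluative score (net utility)."""
    toy_utilities = {"W": -5, "Y": 1, "X": 3, "Z": 10}
    return toy_utilities[action]

def maximising_decision_rule(actions, axiology):
    """Decision rule: rank all available actions, then do the one at the top."""
    return max(actions, key=axiology)

# Putting the two together yields a morality: an answer to "what ought I to do?"
available = ["W", "X", "Y", "Z"]
print(maximising_decision_rule(available, utilitarian_axiology))  # -> "Z"
```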
Answer #3 plus a “do the best thing out of this ranking” decision rule is, of course, just answer #2, so that’s all fine and good.
In answer #1, we are supposing that we have some axiology (evaluative ranking) that ranks actions Z > X > Y > W, and some decision rule that says “do any of the first three (feel free to select among them according to any criteria you like, including random choice), and you will be doing what you ought to do; but if you do W, you’ll have done a thing you ought not to do”. Now, what can be the nature of this decision rule? There would seem to be little alternative to the rule being a simple threshold of some sort: “actions that are at least this good [in the evaluative ranking] are permissible, while actions worse than this threshold are impermissible”. (In the absence of such a decision rule, you will recall, answer #1 degenerates into answer #3, and ceases to be a morality.)
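Continuing the same toy sketch from above (again, with made-up numbers), the threshold-style decision rule of answer #1 might look like:

```python
def threshold_decision_rule(actions, axiology, threshold):
    """Decision rule for answer #1: any action scoring at least `threshold`
    is permissible; anything scoring below it is impermissible."""
    return [a for a in actions if axiology(a) >= threshold]

# With the toy utilities from the earlier sketch (Z=10, X=3, Y=1, W=-5)
# and a threshold of 0: Z, X, and Y come out permissible; W is forbidden.
print(threshold_decision_rule(["W", "X", "Y", "Z"], utilitarian_axiology, 0))
```

Note that `threshold` enters as a bare free parameter: nothing in the axiology itself tells you where to set it.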
Well, fair enough. But how to come up with the threshold? On what basis to select it? How to know it’s the right one—and what would it mean for it to be right (or wrong)? Could two moralities with different permissibility thresholds (but with the same, utilitarian, axiology) both be right?
Note that the lower you set the threshold, the emptier of substantive content your morality becomes. For instance, if you set the threshold at exactly zero—in the sense that actions that do no harm (whether they do some good or no good at all) are permitted, while harmful actions are forbidden—then your morality boils down to “do no harm (but doing good is praiseworthy, and the more the better)”. Not a great guide to action!
On the other hand, the higher you set the threshold, the closer you get to answer #2.
And in any event, the questions about how to correctly locate the threshold remain unanswered…
Turns out the latest 80,000 Hours episode has a brief relevant discussion at 30:45. That discussion seems to match my claim in my other comment that classical utilitarianism, if you take its original form at face value, sees only the maximally good action as permitted. But it also matches the idea that other forms of utilitarianism—and possibly the most common versions of utilitarianism nowadays—do not work that way.
(Said Achmiz’s answer is more thorough anyway, though.)
Timely! I noticed that too and was gonna comment but you beat me to it… For my part, I guess I go back and forth between Rob-style “I am going to do the best possible thing for the future, but that thing is to set reasonable goals so I don’t burn out or give up, etc. etc.”, and the quasi-nihilist “None of this morality stuff is coherent, but I care about people having good lives and good futures, and I’m going to act on the basis of that feeling! (And oh by the way I also care about other things too.)” :-P
To be clear, I haven’t thought it through very much, y’know, it’s just the meaning of life, nothing important, I’m kinda busy :-P
I’m pretty confident that (self-described) utilitarians, in practice, very rarely do that. I think it’s more common for them to view and discuss things as if they become “progressively more praiseworthy”, or as if there’s an obligation to do something that’s at least sufficiently good, and then better things become “progressively more praiseworthy” (i.e., like you have to satisfice, and then past there it’s a matter of supererogation).
I’m pretty confident that at least some forms of utilitarianism do see only the maximally good (either in expectation or “objectively”) action as permitted. And I think that classical utilitarianism, if you take its original form at face value, fits that description. But there are various forms of utilitarianism, and it’s very possible that not all of them have this “maximising” nature. (Note that I’m not a philosopher either.)
I think a few somewhat relevant distinctions/debates are subjectivism vs objectivism (as mentioned in this post) and actualism vs possibilism (full disclosure: I haven’t read that linked article).
Note that my highlighting that self-described utilitarians don’t necessarily live by, or make statements directly corresponding to, classical utilitarianism isn’t necessarily a critique. I would roughly describe myself as utilitarian, and don’t necessarily live by or make statements directly corresponding to classical utilitarianism. This post is somewhat relevant to that (and is very interesting anyway).
People keep telling me that my criticisms of utilitarianism are criticisms of classical utilitarianism, not the improved version they believe in. But they keep failing to provide clear explanations of new improved utilitarianism. Which is a problem because if improved utilitarianism has aspects of subjectivism or social construction, or whatever, then it is no longer a purely mathematical and objective theory, as advertised.
We put people in jail or execute them for doing bad things. That’s kind of a binary. If utilitarianism can only justify a spectrum of praiseworthiness-blameworthiness, then it is insufficient to justify the social practices surrounding ethics. If it can’t justify blameworthiness, then things are even worse.
Currently, axiology seems confusing to me in that it seems to mean many different things at different times. I haven’t looked into it enough to be confident that it’s axiology, rather than me, that’s confused, but I certainly wouldn’t throw that hypothesis out yet either.
But I’m also a bit confused as to why you think analysis of the concept of value would be orthogonal to morality, prudence, and other normative matters?
One analogy that might illustrate my way of viewing this (I’m spitballing here, and this goes outside my wheelhouse): an agent’s moral theory, if we subtracted the axiology from it, gives the agent a utility function, but one containing references/pointers to other things not yet specified. It could say “maximise value”, but not what value is; the axiology then specifies what that is. So to the extent that axiology (under a given definition) helps clarify what is valuable, it feeds into morality, rather than running perpendicular to it. Or do you view it differently?
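In code terms (still spitballing; every name here is hypothetical), the picture would be something like a higher-order function whose “value” slot gets filled in by whatever axiology you supply:

```python
from typing import Callable, Iterable

def maximise_value(actions: Iterable[str],
                   value: Callable[[str], float]) -> str:
    """A moral theory minus its axiology: it says "maximise value",
    but leaves `value` as an unfilled pointer."""
    return max(actions, key=value)

def hedonist_axiology(action: str) -> float:
    """One candidate axiology, specifying what "value" actually is."""
    toy_pleasure = {"X": 3.0, "Y": 1.0, "Z": 10.0}  # made-up numbers
    return toy_pleasure[action]

# The moral theory only becomes action-guiding once the axiology plugs in:
print(maximise_value(["X", "Y", "Z"], hedonist_axiology))  # -> "Z"
```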
Perhaps what you meant by “boils down to conceptual analysis of the concept of ‘value’” was more like metaethics-style reasoning about things like the “nature of” value, which might not directly help answer what specifically is valuable?