I think this whole “utilitarian vs. deontological” setup is a misleading false dichotomy. In reality, the way people make moral judgments—and I’d also say, any moral system that is really usable in practice—is best modeled neither by utilitarianism nor by deontology, but by virtue ethics.
All of the puzzles listed in this article are clarified once we realize that when people judge whether an act is moral, they ask primarily what sort of person would act that way, and consequently, whether they want to be (or be seen as) this sort of person and how people of this sort should be dealt with. Of course, this judgment is only partly (and sometimes not at all) in the form of conscious deliberation, but from an evolutionary and game-theoretical perspective, it’s clear why the unconscious processes would have evolved to judge things from that viewpoint. (And also why their judgment is often covered in additional rationalizations at the conscious level.)
The “fat man” variant of the trolley problem is a good illustration. Try to imagine someone who actually acts that way in practice, i.e. who really goes ahead and kills in cold blood when convinced by utilitarian arithmetic that it’s right to do so. Would you be comfortable working or socializing with this person, or even just being in their company? Of course, being scared and creeped out by such a person is perfectly rational: among the actually existing decision algorithms implemented by human brains, there are none (or at least very few) that would make the utilitarian decision in the fat-man trolley problem and otherwise produce reasonably predictable, cooperative, and non-threatening behavior.
It’s similar with the less dramatic examples discussed by Haidt. In all of these, the negative judgment, even if not explicitly expressed that way, is ultimately about judging what kind of person would act like that. (And again, except perhaps for the ideologically polarized flag example, it is true that such behaviors signal that the person in question is likely to be otherwise weird, unpredictable, and threatening.)
I’d also add that when it comes to rationalizations, utilitarians should be the last ones to throw stones. In practice, utilitarianism has never been much more than a sophisticated framework for constructing rationalizations for ideological positions on questions where correct utilitarian answers are at worst just undefined, and at best wildly intractable to calculate. (As is the case for pretty much all questions of practical interest.)
The phenomenon of utilitarianism serving as a sophisticated framework for constructing rationalizations for ideological positions exists and is perhaps generic. But there’s an analogous phenomenon of virtue ethics being used rhetorically in just the same way (think about both sides of the abortion debate). I strongly disagree that utilitarianism is ethically useless in practice. Do you disagree that VillageReach’s activity has higher utilitarian expected value per dollar than that of the Make A Wish Foundation?
Yes, there are plenty of situations where game-theoretic dynamics and coordination problems make utilitarian-style analysis useless, but your claim seems overly broad and sweeping.
I agree that I have indulged in a bit of a rhetorical excess above. What I had in mind is primarily welfare economics—as I indicated in another comment, I think it’s quite evident that this particular kind of formalized utilitarianism is regularly used to construct arguments for various ideological positions that are seemingly rigorous but in fact clearly rationalizations.
I also agree that non-utilitarian theories of ethics are fertile grounds for rationalizations too. I merely wanted to emphasize that given all the utilitarian rationalizations being thrown around, the idea of utilitarian thinking being somehow generally less prone to rationalizations is a non-starter, under any reasonable definitions of these terms.
As for the issues of charity, I think they are also more complicated than they seem, but this is a quite complex topic in its own right, which unfortunately I don’t have the time to address right now. I do agree that this area can be seen as a partial counterexample to my general thesis about the uselessness of utilitarianism. (But less so than the strong proponents of utilitarian charity commonly claim.)
So I guess the takeaway is that if you care more about your status as a predictable, cooperative, and non-threatening person than about four innocent lives, don’t push the fat man.
http://lesswrong.com/lw/v2/prices_or_bindings/
(Also, please try to avoid sentences like “if you care about X more than innocent lives” — that comes across to me as sarcastic moral condemnation and probably tends to emotionally trigger people.)
It’s not just about what status you have, but what you actually are. You can view it as analogous to the Newcomb problem, where the predictor/Omega is able to model you accurately enough to predict whether you’re going to take one or two boxes, and there’s no way to fool him into believing you’ll take one and then take both. Similarly, your behavior in one situation makes it possible to predict your behavior in other situations, at least with high statistical accuracy, and humans actually have some Omega-like abilities in this regard. If you kill the fat man, this predicts with high probability that you will be non-cooperative and threatening in other situations. This is maybe not necessarily true in the space of all possible minds, but it is true in the space of human minds—and it’s this constraint that gives humans these limited Omega-like abilities for predicting each other’s behavior.
(Of course, in real life this is further complicated by all sorts of higher-order strategies that humans employ to outsmart each other, both consciously and unconsciously. But when it comes to the fundamental issues like the conditions under which deadly violence is expected, things are usually simple and clear.)
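To put toy numbers on that claim: within human mind-space, a single observed action can shift the estimated probability of “threatening” a great deal. Here is a minimal Bayesian sketch; every probability in it is an invented assumption of mine, not anything measured, and only the qualitative shape matters.

```python
# Toy Bayesian sketch of the "limited Omega" claim. All numbers are
# invented assumptions chosen purely for illustration.

prior_threatening = 0.01        # assumed base rate of dangerous decision algorithms
p_push_if_threatening = 0.50    # assumed: dangerous minds often push the fat man
p_push_if_benign = 0.001        # assumed: benign human minds almost never do

# Bayes' rule: P(threatening | pushed)
p_push = (p_push_if_threatening * prior_threatening
          + p_push_if_benign * (1 - prior_threatening))
posterior = p_push_if_threatening * prior_threatening / p_push

print(f"P(threatening | pushed) = {posterior:.2f}")  # ~0.83 with these numbers
# One action moves a 1% prior past 80%, which is the statistical sense
# in which behavior in one situation predicts behavior in others.
```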
And while these constraints may seem like evolutionary baggage that we’d best get rid of somehow, it must be recognized that they are essential for human cooperation. When dealing with a typical person, you can be confident that they’ll be cooperative and non-threatening only because you know that their mind is somewhere within the human mind-space, which means that as long as there are no red flags, cooperative and non-threatening behavior according to the usual folk-ethics is highly probable. All human social organization rests on this ability, and if humans are to self-modify into something very different, like utility-maximizers of some sort, this is a fundamental problem that must be addressed first.
Another way of saying this (I think—Vladimir_M can correct me):
You only have two choices. You can be the kind of person who kills the fat man in order to save four other lives and kills the fat man in order to get a million dollars for yourself. Or you can be the kind of person who refuses to kill the fat man in both situations. Because of human hardware, those are your only choices.
I don’t mean to imply that the kind of person who would kill the fat man would also kill for profit. The only observation that’s necessary for my argument is that killing the fat man—by which I mean actually doing so, not merely saying you’d do so—indicates that the decision algorithms in your brain are sufficiently remote from the human standard that you can no longer be trusted to behave in normal, cooperative, and non-dangerous ways. (Which is then correctly perceived by others when they consider you scary.)
Now, to be more precise, there are actually two different issues there. The first is whether pushing the fat man is compatible with otherwise cooperative and benevolent behavior within the human mind-space. (I’d say even if it is, the latter is highly improbable given the former.) The second one is whether minds that implement some such utilitarian (or otherwise non-human) ethic could cooperate with each other the way humans are able to thanks to the mutual predictability of our constrained minds. That’s an extremely deep and complicated problem of game and decision theory, which is absolutely crucial for the future problems of artificial minds and human self-modification, but has little bearing on the contemporary problems of ideology, ethics, etc.
It seems like you can make similar arguments for virtue ethics and acausal trade.
If another agent is able to simulate you well, then it helps them to coordinate with you by knowing what you will do without communicating. When you’re not able to have a good prediction of what other people will do, it takes waaay more computation to figure out how to get what you want, and whether it’s compatible with them getting what they want.
By making yourself easily simulated, you open yourself up to ambient control, and by not being easily simulated you’re difficult to trust. Lawful Stupid seems to happen when you have too many rules enforced too inflexibly, and often (in literature) other characters can take advantage of that really easily.
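A minimal sketch of the computational point, assuming nothing beyond the textbook iterated prisoner’s dilemma (the payoff matrix and strategies below are standard illustrations, not anything from this thread): an agent facing a perfectly predictable partner settles into stable cooperation, while the same agent facing an opaque partner does measurably worse.

```python
import random

# Standard iterated prisoner's dilemma payoffs (row player's score).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(partner_history):
    # Perfectly predictable: cooperate first, then mirror the partner.
    return partner_history[-1] if partner_history else "C"

def opaque(partner_history):
    # Unpredictable: simulating this agent tells you nothing useful.
    return random.choice("CD")

def average_score(strategy_a, strategy_b, rounds=10000):
    hist_a, hist_b, total = [], [], 0
    for _ in range(rounds):
        a = strategy_a(hist_b)   # each strategy sees the partner's history
        b = strategy_b(hist_a)
        hist_a.append(a)
        hist_b.append(b)
        total += PAYOFF[(a, b)]  # track strategy_a's payoff only
    return total / rounds

print(average_score(tit_for_tat, tit_for_tat))  # 3.0: stable mutual cooperation
print(average_score(tit_for_tat, opaque))       # ~2.25: predictability lost, value lost
```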
The second one is whether minds that implement some such utilitarian (or otherwise non-human) ethic could cooperate with each other the way humans are able to thanks to the mutual predictability of our constrained minds.
But we normally seem to see “one death as a tragedy, a million as a statistic” due to scope insensitivity, availability bias etc.
Why not trust that people who only deal directly with the numbers are normal when they implement cold-blooded utilitarianism? Why not have many important decisions made abstractly by such people? Is wanting to make decisions this way, remote from the consequences and up a few meta-levels, a barbaric thing to advocate?
During the 20th century, some societies attempted to implement more or less that policy. The results certainly justify the adjective barbaric.
But most of the people remained relatively normal throughout. So virtue ethics needs a huge patch to approximate consequentialism.
You are providing a consequentialist argument for a base of virtue ethics plus making sure no one makes abstract decisions, but I don’t see how preventing people from making abstract decisions emerges naturally from virtue ethics at all.
I agree with your comment in one sense and was trying to imply it, as the bad results are not prevented by virtue ethics alone. On the other hand, you have provided a consequentialist argument that I think is valid and was hinting toward.
Spreading this meme, even by a believing virtue ethicist, would seem to reduce the lifespan of fat men with bounties on their heads much faster than it would spare the crowds tied to the train tracks.
U: “Ooo look, a way to rationalize killing for profit!”
VE: “No no no, the message is that you shouldn’t kill the fat man in either ca-”
U: “Shush you!”
Of course, one may want to simply be the sort who tells the truth, consequences to fat men be damned.
You only have two choices. You can be the kind of person who kills the fat man in order to save four other lives and kills the fat man in order to get a million dollars for yourself. Or you can be the kind of person who refuses to kill the fat man in both situations. Because of human hardware, those are your only choices.
This seems obviously false.
Is “what you actually are” equivalent to status of yourself, to yourself?
No, I don’t think so. “What I actually am”, if I’m understanding Vladimir correctly, refers to the actual actions I take under various situations.
For example, if I believe I’m the sort of person who would throw the fat man under the train, but in fact I would not throw the fat man under the train, then I’ve successfully signaled to myself my status as a fat-man-under-train-thrower (I wonder if that’s an allowed construction in German), but I am not actually a fat-man-under-train-thrower.
http://lesswrong.com/lw/v2/prices_or_bindings/
(Also, your comment reads to me — deliberately or not — as sarcastic moral opprobrium directed at Vladimir’s position. Please try to avoid that.)
I am torn on virtue ethics.
On one level it’s almost akin to what a Bayesian calculation (taking “weird but harmless behaviour” as positive evidence of “weird and harmful”) would feel like from the inside, and in that respect I can see the value in virtue ethics (even though it strikes me as a mind projection issue of creating a person’s ethical ‘character’ when all you need is the likelihood of them performing this act or that).
But on another level, I can see it as a description of a sort of hard-coded irrationality that we have evolution to thank for. All things being equal, we prefer to associate with people who will never murder us, rather than people who will only murder us when it would be good to do so—because we personally calculate good with a term for our existence. People with an irrational, compelling commitment are more trustworthy than people compelled by rational or utilitarian concerns (Schelling’s Strategy of Conflict), because we are aware that there exist situations where the best outcome overall is not the best outcome personally.
So I am torn between lumping virtue ethics in with deontological ethics as “descriptions of human moral behaviour” and repairing it into a usable set of prescriptions for human moral behaviour.
(even though it strikes me as a mind projection issue of creating a person’s ethical ‘character’ when all you need is the likelihood of them performing this act or that).
Character just is a compressed representation of patterns of likely behavior and the algorithms generating them.
All things being equal, we prefer to associate with people who will never murder us, rather than people who will only murder us when it would be good to do so—because we personally calculate good with a term for our existence. People with an irrational, compelling commitment are more trustworthy than people compelled by rational or utilitarian concerns (Schelling’s Strategy of Conflict), because we are aware that there exist situations where the best outcome overall is not the best outcome personally.
This connotes that wanting others to self-bind comes from unvirtuous selfishness, which seems like the wrong connotation to apply to a phenomenon that enables very general and large Pareto improvements (yay!).
In particular (not maximally relevant in this conversation, but particularly important for LW), among fallible (including selfishly biased in their beliefs) agents that wish to pursue common non-indexical values, self-binding to cooperate in the epistemic prisoner’s dilemma enables greater group success than a war of all against all who disagree, or mere refusal to cooperate given strategic disagreement.
As to “irrational” (and, come to think of it, also cooperation), see Bayesians vs. Barbarians.
So I am torn between lumping virtue ethics in with deontological ethics as “descriptions of human moral behaviour” and repairing it into a usable set of prescriptions for human moral behaviour.
Why not do both? Treat naive virtue ethics as a description of human moralizing verbal behavior, and treat the virtue-ethical things people do as human game-theoretic behavior, and, because behaviors tend to have interesting and not completely insane causes, look for any good reasons for these behaviors that you aren’t already aware of and craft a set of prescriptions from them.
(even though it strikes me as a mind projection issue of creating a person’s ethical ‘character’ when all you need is the likelihood of them performing this act or that).
It’s not a fallacy if the thing you’re projecting onto is an actual human with an actual human mind. Another way to see this is as using the priors on how humans tend to behave that evolution has provided you.
But on another level, I can see it as a description of a sort of hard-coded irrationality that we have evolution to thank for. All things being equal, we prefer to associate with people who will never murder us, rather than people who will only murder us when it would be good to do so—because we personally calculate good with a term for our existence. People with an irrational, compelling commitment are more trustworthy than people compelled by rational or utilitarian concerns (Schelling’s Strategy of Conflict), because we are aware that there exist situations where the best outcome overall is not the best outcome personally.
The definition of “rational” you’re using in that paragraph has the problem that it will cause you to regret your rationality. If having an “irrational” commitment helps you be more trusted and thus achieve your goals, it’s not irrational. See the articles about decision theory for more details on this.
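A toy expected-value comparison of why that kind of commitment isn’t something to regret (all three numbers below are invented assumptions, chosen only to make the inequality visible):

```python
# Invented illustrative numbers; nothing here is from the thread.
p_tempted = 0.10       # assumed chance a profitable betrayal opportunity arises
betrayal_gain = 20.0   # assumed one-off payoff from defecting when tempted
trust_surplus = 50.0   # assumed ongoing surplus offered only to trusted agents

committed_agent = trust_surplus             # keeps commitments, so gets trusted
flexible_agent = p_tempted * betrayal_gain  # grabs gains when tempted, never trusted

print(committed_agent, flexible_agent)  # 50.0 vs 2.0
# The commitment pays whenever trust_surplus > p_tempted * betrayal_gain,
# so by the agent's own goals it is not irrational at all.
```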
It’s not a fallacy if the thing you’re projecting onto is an actual human with an actual human mind. Another way to see this is as using the priors on how humans tend to behave that evolution has provided you.
That only works if you’re (a) not running into cultural differences and (b) not dealing with someone who has major neurological differences. Using your default priors on “how humans work” to handle an autistic or a schizophrenic is probably going to produce sub-par results. Same if you assume that “homosexuality is wrong” or “steak is delicious” is culturally universal.
It’s unlikely that you’ll run into someone who prioritizes prime-sized stacks of pebbles, but it’s entirely likely you’ll run into people who think eating meat is wrong, or that gay marriage ought to be legalized :)
Using your default priors on “how humans work” to handle an autistic or a schizophrenic is probably going to produce sub-par results.
They’re going to produce the result that this human’s brain is wired strangely and thus he’s liable to exhibit other strange and likely negative behaviors. Which is more-or-less accurate.
Why on Earth is this comment getting downvoted?
Because his comment is evidence for the hypothesis that he has a divergent neurology from mine, and is therefore liable to exhibit negative behaviors :P
My guess is it’s in response to the phrase “negative behaviors” describing a non-neurotypical person’s behavior.
Indeed, and it probably needs to be emphasized that nations are not monocultures. Americans reading mainly utilitarian blogs and Americans reading mainly deontologist blogs live in different cultures, for instance. (To say nothing about Americans reading atheist blogs and Americans reading fundamentalist blogs, let alone Americans reading any kinds of blogs and Americans who don’t read, period.)
(even though it strikes me as a mind projection issue of creating a person’s ethical ‘character’ when all you need is the likelihood of them performing this act or that).
This may be part of the reason many virtue-ethical theories are prescriptions on what one should do oneself, and usually disapprove of trying to apply them to other humans. On this level, it’s poor for predicting, but wonderful for meaningful signalling of cooperative intent. I tend to consider virtue ethics as my low-level compressed version of consequentialist morality; it gives me the ability to develop actions for snap situations that I’d want to take for consequentialist reasons.
As is the case for pretty much all questions of practical interest.
Is it a good idea to spend money on yourself (rather than donating it)?
I don’t see how you could possibly rationalize that, and the inconvenience of it would seem to outweigh any benefit it gives to rationalizing other things.