It’s far easier to soothe your ego by finding a rationalization for having caused good consequences, than to self-deceive about whether you’re a paragon of courage or honesty.
It seems like the opposite to me: it is extremely easy to self-deceive about whether you're a virtuous person. In fact this seems like a quintessential example of self-deception, one that you encounter fairly commonly.
I agree that it is also quite easy to self-deceive about the consequences of one's actions, but in that case there is at least some empirical fact about the world which could drag you back from your self-deception if you were open to it. In contrast, it seems to me that if you self-deceive hard enough you can always view yourself as a paragon of virtue, essentially regardless of how virtuous you actually are, and you can always come up with rationalizations for your own actions. If you care about consequences your performance on those consequences can be contradicted empirically, but there isn't a similar way of contradicting your virtue if you are willing to twist your impressions of what is virtuous enough.
I find this an interesting line of criticism. It is essentially pointing at the difficulty of finding good evidence and of evaluating yourself on that evidence, and it frames the disagreement as one about how easy that is.
I would like to bring in a more first-principles perspective: modelling how quickly you incorporate evidence that points against the way you're currently thinking.
One factor is the amount of disconfirming evidence you look for. Another is your ability to bring that information into your worldview, that is, your openness to being wrong. A third is the speed of the feedback: how long it takes for you to get it.
I think you're saying that when we look at virtue ethics we often find failures in bringing disconfirming information into a worldview. I don't think this has to be the case, since there are ways to actually get feedback on whether you're acting in a way that is aligned with your virtues, for example by describing concrete instances and having people give anonymous feedback, or just through ordinary reflection.
This is a lot easier to do if your feedback loops are shorter, which is exactly where consequentialism and utilitarianism can fail: the target is quite far away, so the crispness and locality of the feedback are not high enough.
I think that virtue ethics outperforms consequentialism here because it is better suited for bringing in information, and for the speed and crispness of that information. Personally I think this is because virtue ethics is a game-theoretically optimal implementation of consequentialism in environments where you have little information, but that is probably beside the point.
This might just be a difference in terminology though? Would you agree with my three-point characterisation above?
For clarity, I'm not trying to make the case for or against consequentialism/virtue ethics; I'm just trying to respond to the narrow point that I quoted. I don't think people should choose an approach to ethical decision making based primarily on this one specific point.
That said, I take your central point to be that virtue is more direct or local than consequences, so that it is easier to gather evidence about your virtues than about the consequences of your actions.
My argument above is specifically about the robustness of each approach to self-deception. Being more local and, in a sense, "personal" is exactly what makes virtue more susceptible to self-deception in my view. There can be things that a reasonable observer would consider evidence about your virtue, but these are so closely tied to you personally that self-deception will often be easy. It will be easy to dismiss people who decry your lack of virtue as unfair, biased, or bad people themselves, precisely because it is the question of your virtue that is on the line!
In contrast, the evidence you get about the consequences of your actions may be harder to interpret in some cases, because you have to analyse the causation and it could be noisier, but this also creates a separation so that it is not as personal. It gives you the chance to admit that even if you had good intentions and high integrity, things didn't in fact play out the way you wanted. Under virtue ethics you can't admit you were wrong without also admitting you had bad intentions or lacked integrity.
I think you can do virtue ethics and also work on your tendency to self-deceive, but that doesn't make it robust if you are self-deceiving (although the degree of self-deception could definitely be relevant).
That is a fair point: since virtue is tied to your identity and sense of self, it is a lot easier to take things personally and therefore to distort the truth.
A part of me wants to say "meh, skill issue, just get good at emotional intelligence and see through yourself", but if I'm being honest that is probably not a solution that works at scale.
There's still something appealing about virtue ethics leading to repeated games and the like: if you look at our history, cooperation arises from repeated games rather than from one-shot games where you analyse each interaction in detail. This is the specific point that Joshua Greene makes in his book Moral Tribes, for example.
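(To make the repeated-games point concrete, here is a minimal toy sketch, not part of Greene's argument: in a one-shot prisoner's dilemma defection dominates, but over repeated rounds a simple reciprocating strategy like tit-for-tat sustains cooperation and earns far more than mutual defection. The payoff numbers are the conventional textbook values.)

```python
# Toy iterated prisoner's dilemma: defection dominates any single round,
# but reciprocators outscore defectors when the game is repeated.
# Standard payoffs: mutual cooperation 3, mutual defection 1,
# defecting against a cooperator 5, being defected against 0.

PAYOFF = {  # (my move, their move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def play(strategy_a, strategy_b, rounds):
    """Run an iterated prisoner's dilemma and return both players' totals."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's history
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

# Two reciprocators earn three times what two defectors earn over 100 rounds,
# even though defection is the dominant move in any single round.
coop, _ = play(tit_for_tat, tit_for_tat, 100)        # 300 each
defect, _ = play(always_defect, always_defect, 100)  # 100 each
print(coop, defect)
```

The point of the sketch is only that the equilibrium changes once interactions repeat, which is one reading of why stable dispositions (virtues) can outperform per-decision calculation.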
Maybe the core point here is not virtue versus utilitarian reasoning; maybe it is more about the ease of self-deception, about the different feedback timescales, and about how evaluating your own outcomes and outputs should be done in a more impersonal way. Maybe one shouldn't call this virtue ethics, since that label carries a lot of baggage and camp affiliation; maybe "heuristics ethics" or something (though that feels stupid).
If you care about consequences your performance on those consequences can be contradicted empirically, but there isn’t a similar way of contradicting your virtue if you are willing to twist your impressions of what is virtuous enough.
If you care about the consequences of your crypto philanthropy strategy, your performance on those consequences can be contradicted empirically. But so what if the contradiction arrives in the form of your bankruptcy, along with blemishing the reputation of the movement you were trying to support?
"Basic virtue ethics" would probably prevent this (if not by making you correct your strategy, then at least by lighting up more red flags in other people's heads), as would "non-naive consequentialism". Of course, virtue ethics has its own failure modes; virtue ethics and consequentialism simply have different failure profiles.
For clarity, I'm not trying to make the case for or against consequentialism/virtue ethics; I'm just trying to respond to the narrow point that I quoted. I don't think people should choose an approach to ethical decision making based primarily on this one specific point.
If you care about the consequences of your crypto philanthropy strategy, your performance on those consequences can be contradicted empirically. But so what if the contradiction arrives in the form of your bankruptcy, along with blemishing the reputation of the movement you were trying to support?
I think this is consistent with my point. From what I can tell, SBF continues to claim that he was acting with good intentions and high integrity, despite being convicted of fraud, which most people would reasonably take to demonstrate a strong lack of those characteristics. This seems like it might be a case of self-deception about one's own virtues, and it is the kind of thing I meant when I said this seems like a quintessential example of self-deception. From what I can tell it is extremely common for people who aren't virtuous to still think of themselves as virtuous.
It seems to me that a lot of people involved in the SBF scandal admit that it was bad and that they made strategic mistakes by trusting SBF, but they often don't say that this reflects virtue failures on their part, such as lacking integrity or honesty. In other words, they admit that their actions may have had bad consequences, on the basis of evidence about those consequences, but they don't admit that these events shed light on their virtues or character.