I find this an interesting line of criticism: it essentially points at the difficulty of finding good evidence and evaluating yourself against that evidence, and frames the disagreement as one about how easy that is.
I would like to bring in a more first-principles perspective: modelling how quickly you incorporate evidence that points against the way you're thinking.
One factor is the amount of disconfirming evidence you look for. Another is your ability to bring that information into your worldview, your openness to being wrong. A third is the speed of the feedback: how long it takes before you get it.
I think you're saying that when you go into virtue ethics, we often find failures to bring disconfirming information into a worldview. I don't think this has to be the case: there are ways to actually get feedback on whether you're acting in line with your virtues, for example by naming concrete instances of them and having anonymous people give feedback, or through ordinary reflection.
This is a lot easier when your feedback loops are shorter, which is exactly where consequentialism and utilitarianism can fail: the target is far away, so the crispness and locality of the feedback are not high enough.
I think virtue ethics outperforms consequentialism here because it is better suited to bringing in information, and to the speed and crispness of that information. Personally I think this is because it is a game-theoretically optimal approximation of consequentialism in environments where you have little information, but that is probably beside the point.
This might just be a difference in terminology, though? Would you agree with my three-point characterisation above?
For clarity, I'm not trying to make the case for or against consequentialism or virtue ethics; I'm just responding to the narrow point I quoted. I don't think people should choose an approach to ethical decision-making based primarily on this one specific point.
That said, I take your central point to be that virtue is more direct or local than consequences: it is easier to get evidence about your virtues than about the consequences of your actions.
My argument above is specifically about how robust each of these is to self-deception. Being more local and in a sense "personal" is exactly what makes virtue more susceptible to self-deception, in my view. There can be things that a reasonable observer would consider evidence about your virtue, but these are so closely tied to you personally that self-deception will often be easy. It is easy to dismiss people who decry your lack of virtue as unfair, biased, or bad people themselves, precisely because it is your virtue that is on the line!
In contrast, the evidence you get about the consequences of your actions may be harder to interpret in some cases, because you have to analyse the causation and it can be noisier, but this also creates a separation that makes it less personal. It gives you the chance to admit that even with good intentions and high integrity, things didn't play out the way you wanted in actual fact. Under virtue ethics you can't admit you were wrong without also admitting you had bad intentions or lacked integrity.
I think you can do virtue ethics and also work on your tendency to self-deceive, but that doesn't make the approach itself robust to self-deception (although the degree of self-deception is definitely relevant).
That is a fair point: since virtue is tied to your identity and sense of self, it is a lot easier to take things personally and therefore distort the truth.
A part of me wants to say "meh, skill issue, just get good at emotional intelligence and see through yourself", but if I'm being honest, that is probably not a solution that works at scale.
There's still something nice about how it leads to repeated games and the like: if you look at our history, cooperation arises from repeated games rather than from one-off games where you analyse everything in detail. This is the specific point Joshua Greene makes in his book Moral Tribes, for example.
Maybe the core point here is not virtue versus utilitarian reasoning; it might be more about the ease of self-deception, about differing feedback timescales, and about how evaluating your own outcomes and outputs should be done in a more impersonal way. Maybe one shouldn't call this virtue ethics, since that label carries a lot of baggage and camp affiliation; maybe "heuristics ethics" or something (though that feels stupid).