I think consequentialism works pretty well in low-adversarialness environments, virtue ethics works in medium-adversarialness environments, and deontology is most important in the most adversarial environments, because as you go from the former to the latter you are making decisions in ways that leave adversaries fewer and fewer degrees of freedom to exploit.
I’ve been thinking about this a lot recently. It seems we could generalize this beyond adversarialness to uncertainty more broadly: In a low-uncertainty environment, consequentialism seems more compelling; in a high-uncertainty environment, deontology makes sense (because as you go from the former to the latter you are making decisions in ways which rest on fewer and fewer error-prone assumptions).
However, this still feels unsatisfying to me for a couple of reasons: (1) Even in a low-uncertainty environment, there is still some uncertainty. It doesn’t seem to make sense for an actor to act against their felt sense of morality to achieve a “good” outcome unless they are omniscient and can perfectly predict all indirect effects of their actions.[1] And if they were truly omniscient, then deontic and consequentialist approaches might converge on similar actions—at least Derek Parfit argues this. I don’t know if I buy this, because (2) why do we value outcomes over the experiences by which we arrive at them? This presupposes consequentialism, which seems increasingly at odds with human psychology—e.g., the finding that maximizers are unhappier than satisficers, despite achieving “objectively” better outcomes, or the finding that happiness-seeking is associated with reduced happiness.
Relating this back to the question of reasoning in high-adversarialness environments, it seems to me that the most prudent (and psychologically protective) approach is a deontological one, not only because it is more robust to outcome-thwarting by adversaries but more importantly because it is (a) positively associated with wellbeing and empathy and (b) inversely associated with power-seeking. See also here.
Moreover, one would need to be omniscient to accurately judge the uncertainty/adversarialness of their environment, so it probably makes sense to assume a high-uncertainty/high-adversarialness environment regardless (at least, if one cares about this sort of thing).