A toy model of ethics which I’ve found helpful lately:
Consider society as a group of reinforcement learners, each getting rewards from interacting with the environment and each other.* We can then define two moral motivations:
Altruism: trying to increase the rewards received by others.
Justice**: trying to ensure that people get rewarded more when they act in ways that are more altruistic and just, and less when they don’t (note that this is a partially recursive definition).
Importantly, if you have one faction that’s primarily optimizing for altruism, and another that’s primarily optimizing for justice, by default they’ll undermine each other’s goals:
The easiest way to promote justice is to focus on punishing people who behave badly (since that’s easier than rewarding people who behave well). This means that the justice faction will (as a first-order effect) decrease the rewards received by others.
The easiest way to promote altruism is to focus on helping the worst-off. But insofar as the world is just, this will often be people who are badly-off as a consequence of their own misbehavior. And so this kind of altruism can easily undermine justice.
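As a rough illustration of this first-order conflict, here is a minimal simulation sketch. It is not from the post itself: the agent count, reward sizes, punishment size, misbehavior propensities, and the 0.25 “frequent misbehaver” threshold are all illustrative assumptions.

```python
import random

random.seed(0)

NUM_AGENTS = 10
ROUNDS = 50

# Each agent has a fixed propensity to misbehave: stealing a unit of
# reward from a random other agent instead of producing one honestly.
misbehave_prob = [random.uniform(0.0, 0.5) for _ in range(NUM_AGENTS)]
reward = [0.0] * NUM_AGENTS

def step():
    """One round of interaction with the environment and each other."""
    misbehaved = [False] * NUM_AGENTS
    for i in range(NUM_AGENTS):
        if random.random() < misbehave_prob[i]:
            victim = random.choice([j for j in range(NUM_AGENTS) if j != i])
            reward[i] += 1.0
            reward[victim] -= 1.0
            misbehaved[i] = True
        else:
            reward[i] += 1.0
    return misbehaved

def justice_policy(misbehaved):
    """Punish misbehavers (easier than rewarding people who behave well);
    the first-order effect is to decrease the rewards received by others."""
    for i in range(NUM_AGENTS):
        if misbehaved[i]:
            reward[i] -= 1.5

def altruism_policy():
    """Help whoever is currently worst off, ignoring how they got there."""
    worst = min(range(NUM_AGENTS), key=lambda i: reward[i])
    reward[worst] += 1.0
    return worst

helped_frequent_misbehavers = 0
for _ in range(ROUNDS):
    misbehaved = step()
    justice_policy(misbehaved)
    helped = altruism_policy()
    if misbehave_prob[helped] > 0.25:  # illustrative threshold
        helped_frequent_misbehavers += 1

print(f"rounds where altruism topped up a frequent misbehaver: "
      f"{helped_frequent_misbehavers}/{ROUNDS}")
```

Under these assumptions, agents who misbehave often end up poorer than honest agents (because punishment outweighs stolen gains), so the altruism policy disproportionately tops up frequent misbehavers, partially undoing the justice policy’s work.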
One way of thinking about the last few decades (and possibly centuries) is that ethical thinking has become dominated by altruism, to the point where being ethical and being altruistic are near-synonymous to many people (especially utilitarians). At an extreme, it leads to reasoning like in the comic below:
Of course, positively reinforcing misbehavior will tend to produce more misbehavior (both by teaching those who misbehave to do it again, and by making well-behaved people feel like chumps). And so more thoughtful utilitarians will defend justice as an instrumental moral good, albeit not as a terminal moral good. Unfortunately, it seems very hard to actually hold this position without in practice deprioritizing justice (e.g. it’s rare to see effective altruists reasoning themselves into trying to make society more just).
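To make the reinforcement claim concrete, here is a standalone sketch of one possible update rule. The rule and step size are my own illustrative assumptions, not something from the post: an agent whose misbehavior still nets a gain (e.g. because altruistic help offsets the punishment) becomes more likely to misbehave again.

```python
def updated_propensity(prob: float, misbehaved: bool,
                       net_reward: float, lr: float = 0.05) -> float:
    """Nudge an agent's misbehavior propensity toward whatever paid off:
    misbehavior that nets a gain is reinforced; a net loss suppresses it."""
    if not misbehaved:
        return prob
    direction = 1.0 if net_reward > 0 else -1.0
    return min(1.0, max(0.0, prob + lr * direction))

# E.g. a thief who gains 1.0 by stealing, is punished 1.5 by the justice
# faction, but is then topped up 1.0 by the altruism faction nets +0.5,
# so their propensity to steal drifts upward:
print(updated_propensity(0.3, True, 1.0 - 1.5 + 1.0))  # approximately 0.35
```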
I think this difficulty is related to why consequentialism is wrong. This is a tricky topic to write about, but one core intuition is that before figuring out how to act, you need to figure out who is acting. For example, before trying to plan for the future, you need to have a sense of personal identity whereby your future self will feel a sense of continuity with and loyalty to your plans.
We can analogously view justice (and other moral intuitions which I’m ignoring in this simplified analysis) as mechanisms for holding society together as a moral agent which is able to act coherently at all. And so people who think that individuals should choose actions on the basis of their consequences are putting the locus of agency in the wrong place—it’s like saying that each ten-second timeslice of you should choose actions based on their consequences. Instead, something closer to virtue ethics is a far better approach.
Is this still consistent with some version of consequentialism? In some sense yes, in another sense no. Mostly I expect that the viewpoint I’ve outlined above will, when explored carefully enough, dissolve the standard debate between different branches of ethics. This is conceptually tricky to work through, though, and I’ll save further discussion for another post.
* The main reason I call this a toy model is that viewing people as reward-maximizers is itself assuming a kind-of-consequentialist viewpoint. I think we actually want a much richer conception of what it means to help and hurt people, but “increase or decrease reward” is so much easier to describe that I decided to use it here.
** Justice isn’t quite the right term here, because it implies being rewarded/punished for a specific action rather than being rewarded/punished for being generally good/bad; the same goes for “accountability”. “Fairness” might be better, except that it’s been co-opted by egalitarian notions of fairness. Other suggestions welcome—maybe something related to karma?
But in a just world this will tend to be the people who are badly-off as a consequence of their own misbehavior.
The real world is not just, though. Yes, some people who are badly off are there as a consequence of their own actions: e.g., this is quite likely the case if they’re in jail. But the most common way to be badly off in a way that makes you a target for the assistance of effective altruists is to be born into a poor country without a good public health system. Non-effective altruists might try to help prisoners or do other things that oppose the justice people. But those choices seem more random: some of them will also just go and fund museums.
Of course, I agree with the overall point that it’s very important to consider what incentives you will create when you try to help people.
Fair point; I’ve just weakened my phrasing in the section you quoted.
However, I do think the world is much closer to just in some important ways than most cultural elites think. E.g. for questions like “whose fault is it that poor countries are poor?” or “whose fault is it that poor people in rich countries are poor?”, the answer “it’s mostly their own fault” is somewhat taboo in elite circles.
To be clear, considerations of justice and blame on a collective level rather than an individual level are pretty complicated. But I think we do have to grapple with them in order to reason about ethics in any sensible way.
And so more thoughtful utilitarians will defend justice as an instrumental moral good, albeit not as a terminal moral good. Unfortunately, it seems very hard to actually hold this position without in practice deprioritizing justice (e.g. it’s rare to see effective altruists reasoning themselves into trying to make society more just).
There’s an extensive literature in economics on optimal punishment. Does that count as utilitarians working on justice as an instrumental good?
For example, before trying to plan for the future, you need to have a sense of personal identity whereby your future self will feel a sense of continuity with and loyalty to your plans.
I think we just need our terminal values to not change too much over time, so if I ever feel like I need to rethink my plans, I’ll come up with a similar or even better plan. Is your thinking that this is impossible or infeasible for most humans, due to things like “power corrupts”? If so, I think consequentialism is still good as it lets us manage or mitigate such value drift, e.g., if I can foresee power (or other circumstances) corrupting my values, I can take precautions like avoiding getting into those situations?
Linking this to your other recent shortform, how could Paul have avoided other people misusing his work, except by doing better consequentialism (i.e., foreseeing this consequence and doing something ahead of time to mitigate it)? Are you not applying consequentialism in predicting the possible downside of one research/communications approach and adopting a different approach based on this prediction?
The easiest way to promote altruism is to focus on helping the worst-off. But insofar as the world is just, this will often be people who are badly-off as a consequence of their own misbehavior. And so this kind of altruism can easily undermine justice.
The premises of the toy model don’t require this to be true. Whether it’s true, and to what extent, can vary between environments.

That seems right, but at a global scope, the world is clearly not just, and so there’s not a conflict between altruism and justice.
E.g. most of a human’s personal welfare depends on the country of their birth, which is not due to their own behavior.
Within more limited scopes, individuals’ welfare is much more loaded on their personal choices. And in those limited scopes, there does seem to be a conflict where trying to help the worst off disproportionately focuses on the least responsible.
“E.g. most of a human’s personal welfare depends on the country of their birth, which is not due to their own behavior.”
But it was dependent on their ancestors’ behavior. And so insofar as you view tribes/ethnic groups/countries as playing games with/against each other, then the same logic applies at that higher level.
Now you might reject that viewpoint, and take a purely individualist stance. But my claim above is (loosely, I haven’t made it precise yet) that the point of ethics is to move us beyond thinking of ourselves as individual units, so that we can make decisions as a larger-scale moral agent.
And from the perspective of the larger moral agent you’re instantiated within, there’s a big difference between people who were born in your home country and people born on other continents—because the former are part of that same moral agent to a much greater degree than the latter. (And yes, they may be worse at “being part of the moral superagent” than a foreigner would be. But this is all negotiated via Schelling points in games between millions of agents, so you can’t just pick and choose your coalition. You need an initiation ritual like a naturalization process or a judicial trial to change that coalition.) Analogously, some of your time-slices are less good at “being Eli” (according to your dominant identity narrative) than time-slices of some other people. But it’s still just for them to benefit or lose out based on the actions of your other time-slices.
(Written quickly on phone, please forgive infelicities of phrasing.)
Spitballing:

It seems like we maybe have to decompose notions of moral patienthood and notions of moral agency.
Where a moral patient is someone whose welfare you value, and a moral agent is someone whom you regard as responsible for their choices.
(This is already a fairly natural distinction, for me at least. I regard most mammals as moral patients, but mostly only humans as moral agents.)
But we run into problems when there are moral patients inside of, or under the care and responsibility of, moral agents (that might or might not be moral patients in and of themselves), because attending to the wellbeing of the inner moral patient entails violating the boundary of the larger moral agent or otherwise distorting just treatment of that agent.
Examples:
A person born into a country that enacted bad policies 50 years ago, and still hasn’t recovered.
A conscript to the military of an aggressor nation.
A child who is malnourished, because their parent spends all the money on booze.
(Naively, it’s good to intervene and feed that child, but that is effectively a subsidy to the parent’s drinking habit, if they trade off booze against food for their kid even a little bit. It’s maybe bad decision theory to take care of the child, because that gives the parent more leniency to not bother feeding them in the first place.)
I’m totally unwilling to write those people off because they happen to have been born into an unlucky situation. But it does seem like there’s some philosophy to figure out here about how to help those people without creating bad incentives for the moral agents that they’re contained within.
But we run into problems when there are moral patients inside of, or under the care and responsibility of, moral agents (that might or might not be moral patients in and of themselves), because attending to the wellbeing of the inner moral patient entails violating the boundary of the larger moral agent or otherwise distorting just treatment of that agent.
Yepp, this is a great way of putting it.
I’m totally unwilling to write those people off because they happen to have been born into an unlucky situation. But it does seem like there’s some philosophy to figure out here about how to help those people without creating bad incentives for the moral agents that they’re contained within.
Yeah, agree that we shouldn’t write them off, and that there’s some way to balance these two things. (One way I think about politics is that one faction has refused to consider “without creating bad incentives” and in response the other faction is now polarizing towards refusing to consider “help those people”. And we’ve now reached the point where these refusals commonly serve as vice signals on each side.)
Relatedly, my phrasing “the point of ethics” in my earlier message was too strong. I should have instead said something like “Although ethics has facets related to dealing with moral patients and other facets related to dealing with moral agents, the latter should generally have primacy, because (mis)aligning other moral agents is a big force multiplier (positively or negatively).”
Literally just as I was finishing writing up this post, I heard a commotion outside my house (in San Francisco). A homeless-looking man was yelling and throwing an electric guitar down the road. Apparently this had been going on for 5-10 minutes already. I sat in my window and watched for a few minutes; during that time, he stopped a car by standing in front of it and yelling. He also threw his guitar in the vicinity of several passers-by, including some old people and a mother cycling past with her kid.
There was a small gathering (of 5-10 people) at my house at this time. They were mostly ignoring it. I felt like this was wrong, and was slowly gathering up willpower to intervene. In hindsight I moved slowly because I was worried that a) he’d hit me with his guitar if I did, or b) he’d see which house I came out from and try to smash my windows or similar. But I wasn’t very worried, because I knew I could bring a few friends out with me.
Before I ended up doing anything, though, a man stopped his car and started yelling at the homeless guy quite aggressively, things like “Get the fuck out of here!” I immediately went outside to offer support in case the homeless guy got aggressive, but he didn’t need it; the homeless guy was already grabbing his stuff. He was somewhat apologetic but still kinda defensive (saying things like “it’s not my fault, man, it’s society”). At one point he turned to my friend and asked “were you bothered?” and my friend said “it was a bit loud”.
As he left, he picked up his guitar again. The man who’d stopped turned around and yelled “Leave that guitar!” The homeless guy threw it again, the man ran over to pick it up, and then the homeless guy left. A few minutes later, two police cars pulled up—apparently someone else had called them.
Overall it was an excellent illustration of why virtue ethics is important. We should have confronted him as soon as we’d noticed him causing a ruckus, both so that (much more defenseless) passers-by didn’t need to worry, and to preemptively prevent any escalation from him. But small niggles about him escalating meant that our fear ended up winning out, and made San Francisco a slightly less safe place. Even on the small things—like responding “it was a bit loud” instead of “you were being an asshole, quit scaring people”—it’s very easy to instinctively flinch away from taking appropriate action. To avoid that, cultivating courage and honesty seems crucial.
dissolve the standard debate between different branches of ethics
What I suspect is that consequences, traced out to the long term, would yield the same result: the collective being outcompeted as a result of adopting a fairly obvious malpractice. Additionally, I would rather see the argument explore in more detail what it means to produce good things.
Were mankind to become totally unemployed and live off some distribution of AI-produced goods, the reasoning from the comic which you mention would have a far better basis: the character would no longer be able to claim that his happiness is more important than the thief’s happiness[1] because the character, like the thief, would be unlikely to become able to usefully contribute to society. This state reminds me of groups of kids in a kindergarten, where they learn the basics of social interactions and find it hard to meaningfully claim ownership[2] of things received from teachers, or of kids in a family trying to claim ownership of stuff received from parents.
However, before the rise of AI, mankind would have individuals or collectives perform economically useful tasks, and being rewarded for these tasks made perfect sense. Thieves, on the other hand, only redistributed goods in their favor and didn’t do anything like producing them, receiving them in exchange for something else, or protecting the collective from adversaries.
P.S. I also wonder what you would say in the post that you were planning to write.
[1] Unless, of course, the thief ended up overconcentrating the goods.

[2] However, such claims become easier once the kids’ interests diverge far enough. For example, if a kid isn’t interested in music, then the kid wouldn’t be sad about losing the opportunity to play the violin bought due to another kid’s interests.
I very much like this frame, and I would be curious to hear what you think about truth-seeking and a sense of curiosity as another axis (not necessarily orthogonal) to view this through.
I think that justice and doing actual (effective) altruism rest on the ability to evaluate actions and outcomes in a larger system in an efficient way, and so the promotion of truth-seeking seems to me almost an instrumental goal for whatever system you’re trying to create. It is a bit like actually closing the prediction-action loop. It’s a bit boring in some ways, since truth is just obviously good, but I think it might be undervalued when it comes to ideas of justice?
Also, I’m not sure if you’re talking about virtue ethics here or deontology, for it seems to me that you’re applying Kant’s categorical imperative more than a sort of individualised golden mean, a bit like arguing from the perspective of the state rather than the individual.
Gillian Hadfield has a bunch of cool ideas about how law is a sort of reflection and development of large-scale “act as if you were a random person in the system” reasoning, which might be fun to check out as well.
I also think you’re right when it comes to morality: it is just a question of what view you’re arguing from, and from the imperfect-information, system-level perspective, virtue ethics makes a lot of sense.