And so more thoughtful utilitarians will defend justice as an instrumental moral good, albeit not as a terminal moral good. Unfortunately, it seems very hard to actually hold this position without in practice deprioritizing justice (e.g. it’s rare to see effective altruists reasoning themselves into trying to make society more just).
There’s an extensive literature in economics on optimal punishment. Does that count as utilitarians working on justice as an instrumental good?
For example, before trying to plan for the future, you need to have a sense of personal identity whereby your future self will feel a sense of continuity with and loyalty to your plans.
I think we just need our terminal values not to change too much over time, so that if I ever feel the need to rethink my plans, I’ll come up with a similar or even better plan. Is your thinking that this is impossible or infeasible for most humans, due to things like “power corrupts”? If so, I think consequentialism is still good because it lets us manage or mitigate such value drift: e.g., if I can foresee power (or other circumstances) corrupting my values, I can take precautions like avoiding those situations in the first place.
Linking this to your other recent shortform, how could Paul have avoided other people misusing his work, except by doing better consequentialism (i.e., foreseeing this consequence and doing something ahead of time to mitigate it)? Are you not applying consequentialism in predicting the possible downside of one research/communications approach and adopting a different approach based on this prediction?