Justice, mercy, duty, etc. are found by comparison to logical models pinned down by axioms. Getting the axioms right is damn tough, but if we have a decent set we should be able to say “If Alex kills Bob under circumstances X, this is unjust.” We can say this the same way that we can say “Two apples plus two apples is four apples.” I can’t find an atom of addition in the universe, and this doesn’t make me reject addition.
Also, the widespread convergence of theories of justice on some issues (e.g., rape is unjust) suggests that theories of justice are attempting to use their axioms to pin down something that is already there. Moral philosophers are more likely to say “My axioms are leading me to conclude rape is a moral duty, where did I mess up?” than “My axioms are leading me to conclude rape is a moral duty, therefore it is.” This also suggests they are pinning down something real with axioms. If it were otherwise, we would expect the second conclusion.
“theories of justice are attempting to use their axioms to pin down something that is already there”
So in other words, duty, justice, mercy—morality words—are basically logical transformations that transform the state of the universe (or a particular circumstance) into an ought statement.
Just as we derive valid conclusions from premises using logical statements, we derive moral obligations from premises using moral statements.
The term ‘utility function’ seems less novel now (novel as in, a departure from traditional ethics).
Not quite. They don’t go all the way to completing an ought statement, as that wouldn’t solve the is/ought dichotomy. They are logical transformations that make applying our values to the universe much easier.
“X is unjust” doesn’t quite create an ought statement of “Don’t do X”. If I place value on justice, that statement helps me evaluate X. I may decide that some other consideration trumps justice. I may decide to steal bread to feed my starving family, even if I view the theft as unjust.
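The two-step picture above — an axiom-driven verdict first, then a separate weighing against one’s values — can be sketched in a toy program. Everything here (the `unjust` classifier, the value weights, the bread-theft scenario) is hypothetical illustration, not a real ethical calculus:

```python
# Toy sketch: a justice "model" classifies an action against an axiom,
# but the verdict alone is not an ought-statement. A separate step
# weighs that verdict against the agent's other values.
# All names and weights here are invented for illustration.

def unjust(action):
    """Axiom-style classifier: taking what isn't yours is unjust."""
    return action["type"] == "theft"

def decide(action, values):
    """Combine the moral verdict with the agent's value weights.
    Positive score: do it anyway; non-positive: refrain."""
    score = 0.0
    if unjust(action):
        score -= values["justice"]  # injustice counts against, but isn't a veto
    score += values.get(action.get("serves", ""), 0.0)
    return score

steal_bread = {"type": "theft", "serves": "family_survival"}
values = {"justice": 1.0, "family_survival": 3.0}

print(unjust(steal_bread))              # True: the verdict stands...
print(decide(steal_bread, values) > 0)  # True: ...yet survival outweighs it here
```

The point of the sketch is that `unjust` and `decide` are distinct functions: the first pins down a fact-like classification from axioms; only the second, which depends on a particular agent’s values, yields anything like “do” or “don’t.”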
This is my view.