The risk of incorrectly believing in moral realism
(Status: not fully fleshed out, philosophically unrigorous)
A common talking point is that if you have even some credence in moral realism being correct, you should act as if it’s correct. The idea is something like: if moral realism is true and you act as if it’s false, you’re making a genuine mistake (i.e. by doing something bad), whereas if it’s false and you act as if it’s true, it doesn’t matter (i.e. because nothing is good or bad in this case).
I think this way of thinking is flawed, and in fact, the opposite argument can be made (albeit less strongly): if there’s some credence in moral realism being false, acting as if it’s true could be very risky.
The “act as if moral realism is true if unsure” principle contrasts moral realism (i.e. the view that there is an objective moral truth, independent of any particular mind) with nihilism (i.e. the view that nothing matters). But these are not the only two perspectives you could have. Moral subjectivism is an anti-realist view I find intuitively compelling, which says that the truth value of moral propositions is mind-dependent (i.e. based on an individual’s beliefs about what is right and wrong).
From a moral subjectivist perspective, my actions can be justified by what I think is good, and your actions can be justified by what you think is good, and these things can disagree.
Importantly, compared to moral realism, moral subjectivism implies a different strategy when it comes to coordinating with others to achieve good things. If I am a moral realist, I may hope that with enough effort I can prove to others (other people, or even machines) that something is good or bad. If I’m a moral subjectivist, however, this idea seems rather doomed. I need to accept that others may have a different conception of good, one that is valid to them. And so my options are either to overpower them (by not letting them achieve their idea of good when it conflicts with mine) or to trade with them.
If I decide to “act as if moral realism is true”, I might spend a lot of resources trying to prove my idea of goodness to others, instead of directly pursuing my goals or trading with those who disagree. Furthermore, if everyone adopts this strategy, we end up in a long, unproductive fight that can never be resolved, instead of engaging in mutually beneficial trades wherever possible.
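To make the wager structure explicit, here is a rough decision matrix of my own (the entries are informal verbal “payoffs”, not anything rigorous):

$$
\begin{array}{l|ccc}
 & \text{realism true} & \text{subjectivism true} & \text{nihilism true} \\
\hline
\text{act as if realism is true} & \text{fine} & \text{risk: trample others' valid-to-them good} & \text{nothing at stake} \\
\text{act as if realism is false} & \text{risk: act against the objective good} & \text{fine} & \text{nothing at stake} \\
\end{array}
$$

Considering only the realism and nihilism columns, acting as if realism is true looks like a free bet; once the subjectivism column is included, neither row dominates.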
This may pose a practical issue when it comes to AI development: if AI developers believe that there’s an objectively correct morality that the AI should follow, they may end up violating almost everyone’s subjective conception of goodness in pursuit of an objective goodness that doesn’t exist.
Generally agree, but disagree with this part:
There’s room for persuasion and deliberation as well. Moral anti-realists can care about how other people form moral beliefs (e.g. quality of justifications, coherence of values, non-coercion).
I think those things can generally be interpreted as “trades” in the broadest sense: sometimes trades of favours, reputation, or knowledge.
Moral anti-realism shouldn’t insist that a person’s values are simply whatever is apparent to that person, i.e. what they currently think is good. Moral realism likes to make the dubious assumption that everyone’s values-on-reflection should be the same (in the limit), but hardly uses this assumption. Instead, it correctly points out that values-on-reflection are not the same as currently-apparent values, and that arguments about values are worthwhile. But the same should hold when we allow (normative) orthogonality, where everyone’s values-on-reflection can (normatively) end up different. Worthwhile arguments can even be offered by one person to another, about that other person’s misunderstanding of their own, different values.
It’s easy to conflate three different things:
1. Whether or not there is an objective collection of moral facts
2. Whether or not it is possible to learn objective moral facts
3. Whether or not I should convince someone to believe a certain set of moral facts in a given situation
We can deny (1) with moral subjectivism.
We can accept (1) but deny (2) by claiming that there are objective moral facts, but that some (or all) of them are unknowable to some (or all) of humanity (moral realists don’t need to think that they can prove anything to others).
We can accept (1) and (2) but deny (3) by saying that persuasion isn’t always the action that maximizes moral outcomes. Maybe the way to achieve the morally best outcome is actually to convince someone of some false ideas that end up leading to morally useful actions (e.g. in 1945 one could have tried to convince Hitler’s colleagues that backstabbing your colleagues is righteous in general, if that meant one of them might kill Hitler).
So moral realists can accept that others will have other conceptions of good, and believe that the best options are to overpower or trade with those others (rather than convince them). We’ve seen many moral realists do this throughout history, the Crusades being one (imperfect) example. I think whether or not convincing others of your conception of morality is a morality-maximizing action depends a lot on the specifics of your morality and the context you’re in.