It undercuts the motivation for believing in moral realism, leaving us with no evidence for objective moral facts, which are a complicated kind of thing, and thus unlikely to exist without evidence.
I certainly disagree about the “no evidence” part—to me, the fact that I’m an individual with preferences and the ability to suffer is very strong evidence for subjective moral facts, so to speak. And if these exist subjectively, then it’s not that much of a stretch to assume there’s an objective way to resolve conflicts between these subjective preferences.
It’s for sure too large a topic to resolve in this random comment thread, but either way, to my knowledge the majority of philosophers believe moral realism is more likely to be true than not, and even on LessWrong I’m not aware of broad agreement on it being false (but maybe I’m mistaken?). Hence, just casually dismissing moral realism without even a hint of uncertainty seems rather overconfident.
I agree that LessWrong comments are unlikely to resolve disagreements about moral realism. Much has been written on this topic, and I doubt I have anything new to say about it, which is why I didn’t think it would be useful to try to defend moral anti-realism in the post. I brought it up anyway because the argument in that paragraph crucially relies on moral anti-realism, because I suspect many readers reject moral realism without having thought through its implications for AI moral patienthood, and because I don’t in fact have much uncertainty about moral realism.
Regarding LessWrong consensus on this topic, I looked through a couple of LessWrong surveys and didn’t find any questions about it, so this doesn’t prove much; still, out of curiosity, I asked Claude 4 Sonnet to predict the results of such a question, and here’s what it said (which seems like a reasonable guess to me):
**Accept moral realism**: ~8%
**Lean towards moral realism**: ~12%
**Not sure**: ~15%
**Lean against moral realism**: ~25%
**Reject moral realism**: ~40%
You might be surprised to learn that the most prototypical LessWrong user (Eliezer Yudkowsky) is a moral realist. The issue is that most people have only read what he wrote in the sequences, but didn’t read Arbital.
To say that Eliezer is a moral realist is deeply, deeply misleading. Eliezer’s ethical theories correspond to what most philosophers would identify as moral anti-realism (most likely as a form of ethical subjectivism, specifically).
(Eliezer himself has a highly idiosyncratic way of talking about ethical claims and problems in ethics, and while it is perfectly coherent and consistent and even reasonable once you grasp how he’s using words etc., it results in some serious pitfalls in trying to map his views onto the usual moral-philosophical categories.)
To say that Eliezer is a moral realist is deeply, deeply misleading.
No, it is not at all misleading. He is quite explicit about that in the linked Arbital article. You might want to read it.
Eliezer’s ethical theories correspond to what most philosophers would identify as moral anti-realism (most likely as a form of ethical subjectivism, specifically).
They definitely would not. His theories would immediately qualify as moral realist. Helpfully, he makes that very clear:
Within the standard terminology of academic metaethics, “extrapolated volition” as a normative theory is:
Cognitivist. Normative propositions can be true or false. You can believe that something is right and be mistaken.
He explicitly classifies his theory as a cognitivist theory, which means it ascribes truth values to ethical statements. Since it is a non-trivial cognitivist theory (it doesn’t make all ethical statements false, or all true, and your ethical beliefs can be mistaken, in contrast to subjectivism), it straightforwardly qualifies as a “moral realist” theory in metaethics.
He does argue against moral internalism (the thesis that having an ethical belief is inherently motivating), but this is not considered a requirement for moral realism. In fact, most moral realist theories are not moral internalist. His theory also implies moral naturalism, which is again common for moral realist theories (though not required). In summary, his theory not only qualifies as a moral realist theory, it does so straightforwardly. So yes, according to metaethical terminology, he is a moral realist, and not even an unusual one.
Additionally, he explicitly likens his theory to Frank Jackson’s Moral Functionalism (which is indeed very similar to his theory!), a view that is considered an uncontroversial case of moral realism.
To say that Eliezer is a moral realist is deeply, deeply misleading.
No, it is not at all misleading. He is quite explicit about that in the linked Arbital article. You might want to read it.
I have read it. I am very familiar with Eliezer’s views on ethics and metaethics.
I repeat that Eliezer uses metaethical terminology in a highly idiosyncratic way. You simply cannot take statements of his like “my theory is a moral-realist theory” at face value. His uses of the terms “good”, “right”, etc., do not match the standard usages.
Since it is a non-trivial cognitivist theory (it doesn’t make all ethical statements false, or all true, and your ethical beliefs can be mistaken, in contrast to subjectivism), it straightforwardly qualifies as a “moral realist” theory in metaethics.
Yes, Eliezer claims that his moral theory is not a subjectivist one. But it is (straightforwardly!) a subjectivist theory.
You might perhaps be able to claim that Eliezer’s theory is a sort of “minimal moral realism”, but it’s certainly not “robust moral realism”.
This is from 2008, so who knows if it still matters at all, but in the metaethics sequence Eliezer says this:
(Disclaimer: Neither Subhan nor Obert represent my own position on morality; rather they represent different sides of the questions I hope to answer.)
about two characters in a Socratic dialogue who are a moral realist and an anti-realist, respectively.
I once tried to read the entire sequence to figure out what Eliezer thinks about morality but then abandoned the project before completing it. I still don’t know what he thinks.
(Not sure if I’m telling you anything new or if this was even worth saying.)
I got o3 to compare Eliezer’s metaethics with that of Brand Blanshard (who has some similar ideas), with particular attention to whether morality is subjective or objective. The result...
He explicitly classifies his theory as a cognitivist theory, which means it ascribes truth values to ethical statements
That’s a necessary but insufficient condition for being a realist theory.
CEV is clearly group-level relativism: there is nothing beyond the extrapolated subjective values of humans in general to make a claim true or false. Individual claims can be false, unlike under individual-level relativism, but that is also an insufficient criterion for realism.
Regardless of whether the view Eliezer espouses here really counts as moral realism, as people have been debating, it does seem that it would claim that there is a fact of the matter about whether a given AI is a moral patient. So I appreciate your point regarding the implications for the LW Overton window. But for what it’s worth, I don’t think Eliezer succeeds at this, in the sense that I don’t think he makes a good case that it is useful to talk about ethical questions we don’t have firm views on as if they were factual questions, because:
1. Not everyone is familiar with the way Eliezer proposes to ground moral language; not everyone who is familiar with it will be aware that it is what any given person means when they use moral language; and some people who are aware that a given person uses moral language the way Eliezer proposes will object to them doing so. Thus using moral language in the way Eliezer proposes, whenever it’s doing any meaningful work, invites getting sidetracked into unproductive semantic discussions. (This is a pretty general-purpose objection to normative moral theories.)
2. Eliezer’s characterization of the meaning of moral language relies on some assumptions: that it is possible in theory for a human to eventually acquire all the relevant facts about any given moral question and form a coherent stance on it, and that the stance they eventually arrive at is robust to variations in the process by which they arrived at it. I think these assumptions are highly questionable, and they shouldn’t be allowed to escape questioning by remaining implicit.
3. It offers no meaningful action guidance beyond “just think about it more”, which is reasonable, but a moral non-realist who aspires to acquire moral intuitions on a given topic would also think of that.
One could object to this line of criticism on the grounds that we should talk about what’s true independently of how it is useful to use words. But any attempt to appeal to objective truth about moral language runs into the fact that words mean what people use them to mean, and you can’t force people to use words the way you’d like them to. It looks like Eliezer tries to address this by observing that extrapolated volition shares some features with the way people use moral language, which is true, and then seems to conclude that it is the way people use moral language even if they don’t know it, which does not follow.