Regardless of whether the view Eliezer espouses here really counts as moral realism, as people have been arguing about, it does seem to claim that there is a fact of the matter about whether a given AI is a moral patient. So I appreciate your point regarding the implications for the LW Overton window. But for what it’s worth, I don’t think Eliezer succeeds here, in the sense that I don’t think he makes a good case that it’s useful to talk about ethical questions we don’t have firm views on as if they were factual questions, because:
1. Not everyone is familiar with the way Eliezer proposes to ground moral language; not everyone who is familiar with it will be aware that it is what any given person means when they use moral language; and some people who are aware that a given person uses moral language the way Eliezer proposes will object to them doing so. Thus using moral language in the way Eliezer proposes, whenever it’s doing any meaningful work, invites getting sidetracked on unproductive semantic discussions. (This is a pretty general-purpose objection to normative moral theories.)
2. Eliezer’s characterization of the meaning of moral language relies on some assumptions: that it is possible in principle for a human to eventually acquire all the relevant facts about any given moral question and form a coherent stance on it, and that the stance they eventually arrive at is robust to variations in the process by which they arrived at it. I think these assumptions are highly questionable, and shouldn’t be allowed to escape questioning by remaining implicit.
3. It offers no meaningful action guidance beyond “just think about it more”, which is reasonable, but a moral non-realist who aspires to acquire moral intuitions on a given topic would also think of that.
One could object to this line of criticism on the grounds that we should talk about what’s true independently of how it is useful to use words. But any attempt to appeal to objective truth about moral language runs into the fact that words mean what people use them to mean, and you can’t force people to use words the way you’d like them to. It looks like Eliezer kind of tries to address this by observing that extrapolated volition shares some features in common with the way people use moral language, which is true, and seems to conclude that it is the way people use moral language even if they don’t know it, which does not follow.