I’m still bothered by the fact that different people mean different and in fact contradictory things by “moral realism”.
The SEP says that moral realism means thinking that (some) morality exists as objective fact, which can be discovered through thinking or experimentation or some other process which would lead all right-thinking minds to agree about it. That is also how I understood the term before reading these posts.
And yet Eliezer seems to call himself (or be called?) a moral realist, even though he explicitly only talks about MoralGood!Eliezer (or !Humanity, !CEV, etc.) This is confusing and consequently irritating to people including myself.
So when you ask if:
> maybe some people have the intuition that the orthogonality thesis is at odds with moral realism.
What do you mean? I think it’s time to taboo “moral realism” because people have repeatedly failed to agree on what these words should mean.
> The SEP says that moral realism means thinking that (some) morality exists as objective fact, which can be discovered through thinking or experimentation or some other process which would lead all right-thinking minds to agree about it. That is also how I understood the term before reading these posts.
The SEP doesn’t say this. Actually, the SEP doesn’t even use the word “objective.” What the SEP actually says is, “Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right,” and that’s it.
> And yet Eliezer seems to call himself (or be called?) a moral realist, even though he explicitly only talks about MoralGood!Eliezer (or !Humanity, !CEV, etc.) This is confusing and consequently irritating to people including myself.
On Eliezer’s view, as I understand it, human!morality just is morality, simpliciter.
> What the SEP actually says is, “Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right,” and that’s it.
This is all a matter of misunderstanding the meaning of words, and nobody is objectively right or wrong about that, since the disagreement is widespread—I’m not the only one to complain.
To me, an unqualified “fact” is, by implication, a simple claim about the universe, not a fact about the person holding the belief in that fact. An unqualified “fact” should be true or false in itself, without requiring you to further specify you meant the instance-of-that-fact that applies to some particular person with particular moral beliefs.
If SEP’s usage of “fact” is taken to mean “a fact about the person holding the moral belief”, the fact being that the person does hold that belief, then I don’t understand what it would mean to say that there aren’t any moral facts (i.e. moral anti-realism). Would it mean to claim that people have no moral beliefs? That’s obviously false.
> On Eliezer’s view, as I understand it, human!morality just is morality, simpliciter.
That’s exactly what bothers me—that he (and other people agree with this) redefines the word “morality” to mean human!morality, and this confuses people (I’m not the only one) who expect that word to mean something else, depending on context. (For example, the meta-concept of morality, as opposed to a concrete set of moral beliefs such as Eliezer!morality or humanity!morality.)
I agree that if everyone agreed to Eliezer’s usage, then discussing morality would be easier. But it’s just a fact that many people use the word differently from him. And when faced with such inconsistency, I would prefer that people either always qualify their usage, or taboo the word entirely.
> To me, an unqualified “fact” is, by implication, a simple claim about the universe, not a fact about the person holding the belief in that fact.
It’s a fact that my height is less than six feet. It’s also a fact that I disapprove of torture. These are objective facts, not opinions or one person’s suspicions. It’s not just that I object to claims that I’m seven feet tall; such claims would be false. And if someone says of me that I approve of torture, they’re in error, as surely as if they said grass is red and ponies have seventeen hooves.
However, if when I say ‘torture is wrong’, I mean the fact that I disapprove of torture, then my usage is relativist. The statement “torture is wrong” is saying something about the speaker. But it’s also saying something about the listener; I expect the listener to react in some way to the idea I’m expressing. I don’t go around saying “torture is flooble”; I expect that listeners don’t assign any significance to floobleness, but they do to wrongness.
Relativism does not mean that moral claims become mere matters of passing fancy; it means that moral claims express preferences of particular minds (including speakers’ and listeners’); understanding them requires understanding something about the minds of those who make them.
Consider: As an English-speaker, you might find it distasteful if your neighbor named her daughter “Porn”. You might even think it was wrong, especially if you had concerns about how other English-speakers would react to a little girl named Porn. If you were a Thai-speaker living in a Thai language community, you probably wouldn’t see a problem, because “Porn” means “Blessing” in Thai and is a common female name. Understanding why the English-speaker is squicked by the idea of a little girl named Porn, but the Thai-speaker is not, requires knowing something about English and Thai languages, as well as about cultural responses to different sorts of mental imagery involving children.
But suppose that when I say “torture is wrong”, I mean “Any intelligent mind, no matter its origin, if it is capable of understanding what ‘torture’ means, will disapprove of torture.” That is, a relativisty-preferencey sort of “wrongness” follows from some fact that is true about all intelligent minds. That’s a very different claim. It’s a lot closer to what people tend to think of as “absolute, objective morality”.
> Relativism does not mean that moral claims become mere matters of passing fancy; it means that moral claims express preferences of particular minds (including speakers’ and listeners’); understanding them requires understanding something about the minds of those who make them.
Understanding their content, understanding why the speaker considers them true, or understanding why they are true-for-speaker?
> Consider: As an English-speaker, you might find it distasteful if your neighbor named her daughter “Porn”. You might even think it was wrong, especially if you had concerns about how other English-speakers would react to a little girl named Porn. If you were a Thai-speaker living in a Thai language community, you probably wouldn’t see a problem, because “Porn” means “Blessing” in Thai and is a common female name. Understanding why the English-speaker is squicked by the idea of a little girl named Porn, but the Thai-speaker is not, requires knowing something about English and Thai languages, as well as about cultural responses to different sorts of mental imagery involving children.
Is the more general principle “don’t give your children embarrassing names” equally relative? How about “don’t embarrass people in general”? Or “don’t do unpleasant things to people in general”?
> The SEP says that moral realism means thinking that (some) morality exists as objective fact, which can be discovered through thinking or experimentation or some other process which would lead all right-thinking minds to agree about it.
I took Chris’s meaning to be that moral realism (as defined by the SEP) says that moral claims are fact claims possessing truth values but says nothing about the discoverability or computability of those truth values. Your definition would have every moral realist insisting that every moral claim can be proven either true or false, but it seems to me that Chris’s definition allows moral realists to leave open Gödel-incompleteness status for moral claims, considering their truth or falsity to exist but be possibly incomputable, and still be moral realists. Or, to take no position on whether rational minds would come to the truth values of moral claims, only on whether the truth values existed. Your definition would exclude both of those from moral realism.
Chris, please correct me if this is not what you meant.
I have no problem with Gödel-incompleteness, uncomputability, and so on in a system that allows you to state any moral proposition.
However: if a moral realist believes that “moral claims are fact claims possessing truth values”, then what does he believe regarding the proposition (1) “there exists at least one moral claim that can be proven true or false”? (Leaving aside claims that simply induce contradictions, are not well defined, etc.)
If he thinks such a claim exists, that is the same as saying there is a Universally Compelling Argument for or against that claim. And that is a logical impossibility. I can always construct a mind that is immune to any particular argument.
If he thinks no such claims exist, then it seems to be a kind of dualism—postulating a property “truth” of moral claims, which is not causally entangled with the physical world. It also seems pointless—why care about it if no actual mind can ever discover such truths?
ETA: talking about ‘proving’ claims true or false is a simplification. In reality we have degrees of beliefs in the truth-value of claims. But my point is that moral-realistic claims seem to be disengaged from reality; substitute “provide evidence for” in place of “prove” and my argument should still work.
If you needed my comment to decide that your not understanding Chris’s comment is a much better hypothesis than Chris and the SEP misusing “fact”, then you have much worse problems than not understanding Chris’s comment.
I think the problem lies in your usage of the phrase “objective fact”.
For example, if I claim “broccoli is tasty”, my claim purports to report a fact. Plausibly, it purports to report a fact about me—namely, that I like broccoli. If someone else were to claim “broccoli is tasty”, her utterance would also purport to report a fact—plausibly, the fact that she likes broccoli. So two token utterances of the very same type may pick out different facts. If this is the case, “broccoli is tasty” is true when asserted by broccoli-lovers and false when asserted by broccoli-haters. This should not be surprising, provided that it is interpreted as a disguised indexical claim.
Clearly, there is no experimental process whereby all right-thinking people can conclude that broccoli is tasty (or, alternatively, that broccoli is not tasty), even though several right-thinking people can justifiably arrive at this conclusion (by eating broccoli and liking it, say). Crucially, this conclusion is consistent with being a realist about broccoli-tastiness, but inconsistent with thinking there are objective facts about broccoli-tastiness (as you use the term). Likewise, one can be a realist about morality without thinking there are objective facts about morality (again, as you use the term).
When I say “objective fact”, I mean (in context) a non-indexical one.
The original problem I raised was that some people who talked about things being “moral” meant those statements indexically, and others meant them objectively, and this created a lot of confusion.
> one can be a realist about morality without thinking there are objective facts about morality (again, as you use the term).
I use the term “objective facts about morality” to mean “non-indexical facts which do not depend on picking out the person holding the moral beliefs”. Moral realism is the belief such objective facts about morality can and/or do exist.
Of course, one is free to interpret “moral realism” as you do—it’s a natural enough interpretation, and may even be the most common one among philosophers. However, this is not the definition given in the SEP. According to it, “moral realists are those who think that...moral claims do purport to report facts and are true if they get the facts right”. This does not entail that moral realists think that moral claims purport to report objective facts. But isn’t such a loose interpretation of “moral realism” vacuous? As you say:
> If SEP’s usage of “fact” is taken to mean “a fact about the person holding the moral belief”, the fact being that the person does hold that belief, then I don’t understand what it would mean to say that there aren’t any moral facts (i.e. moral anti-realism).
The moral anti-realist can choose from among two main alternatives if she wishes to deny moral realism, which I understand as being committed to the following two theses: (1) moral claims purport to report some (not necessarily objective) facts, and (2) some moral claims are true. First, she can maintain that all moral claims are false, which is a plausible suggestion: perhaps our moral claims purport to be about some normative aspect of the world, but the world lacks this normative aspect. Second, she can maintain that no moral claims purport to report facts; instead, all moral claims express emotions. On this view, saying “setting cats on fire is wrong” is tantamount to exclaiming “Boo!” or “Ew!”
> First, she can maintain that all moral claims are false, which is a plausible suggestion: perhaps our moral claims purport to be about some normative aspect of the world, but the world lacks this normative aspect.
That would still be discussing an objective claim—just one that happens to be false. On a par with discussing a mathematical proposition which is false, or an empirical hypothesis which is false: both of these are independent of the person who says them or believes in them. Just so, discussing normative aspects of the world—whether they exist or not, and whether they are as claimed or not—isn’t the same as discussing normative beliefs of a person.
So calling this moral anti-realism seems to use my sense of “moral realism” (objective fact), not the SEP’s.
> Second, she can maintain that no moral claims purport to report facts; instead, all moral claims express emotions. On this view, saying “setting cats on fire is wrong” is tantamount to exclaiming “Boo!” or “Ew!”
In one way, this is again moral anti-realism in my sense of the phrase: the claim that morals don’t exist separately from the moral beliefs of concrete persons. (I hold this view.)
In another way, it can be read as a claim about what people mean when they talk about morals. In that case, the claim is plainly wrong, because many people are moral realists.
So to sum up, I’m afraid I still don’t see what it would mean to be a moral anti-realist in what you say is the SEP sense.
> (For example, the meta-concept of morality, as opposed to a concrete set of moral beliefs such as Eliezer!morality or humanity!morality.)
But there isn’t a meta-concept of morality. If you try to abstract one, you just end up with something like “that which motivates”, which is empty unless you specify which specific minds can be motivated by it, and then you’re back where you started.
There are several different uses of morality, each which result from different meta-concepts. An Aristotelean, for example, would talk about morality as fitting a human’s purpose (as would a Christian), for example. Everybody uses the same word for several fundamentally different concepts, some of which have no or little basis in fact.
Different humans have somewhat different morals. I can still talk about “morals” in general, because they are a special kind of motivations in humans. Talking about morals in minds in general indeed makes little sense.
> Talking about morals in minds in general indeed makes little sense.
To whom? AFAICS, if you have minds living in a community, and they can interact in ways that cause negative and positive utility to each other, then you have the problem that morality solves...and that is a very general set of conditions.
I think what Dan means is that different kinds of minds in different kinds of community might need quite different solutions to the problem of interacting effectively, which might lead to quite different notions of morality, and that if that’s true then you shouldn’t expect any single notion of morality to be universally applicable.
It’s often difficult to figure out which human preferences are moral v. amoral. That would be a vastly more challenging task for an alien species, such that we’d probably be better off in most cases by prohibiting ourselves from sorting alien values in that way.
Yes, it does. But it says it in the article Moral Anti-Realism, not the article cited above, Moral Realism. The former article is very interested in objectivity constraints, but expresses a great deal of confusion about how to make sense of them; the latter article mentions them only to toss them out for being too confused. (It would not be too surprising if this has something to do with the latter author being more convinced of the truth of ‘realism’, hence wanting to make the Realism brand simple, clean, and appealing to a wider audience.)
If your encyclopedia has an ‘Apples’ article and a ‘Non-Apples’ article, and the two articles completely disagree about what it means to be an ‘Apple’, then you have your first clue that the word ‘Apple’ should always come pre-tabooed.
(ETA: More generally, be aware that ‘the SEP says X’ is less reliable than ‘SEP article Y says X’, because articles may disagree with each other. SEP is an anthology of introductory essays. We wouldn’t normally say ‘Very Short Introductions says X’, even if we trust the VSI brand quite a bit.)
> What the SEP actually says is, “Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right,” and that’s it.
Almost. Moral realists (even on the more inclusive definitions) also demand that at least one moral claim of this sort be true. (This is asserted in the sentence right after your quotation terminates.) That’s why error theory is not a form of moral realism; realism is a (perhaps improper) subset of success theory.
Oh my god, the “moral anti-realism” article has what is possibly the best opening paragraph I’ve seen in the SEP:
> It might be expected that it would suffice for the entry for “moral anti-realism” to contain only some links to other entries in this encyclopedia. It could contain a link to “moral realism” and stipulate the negation of the view there described. Alternatively, it could have links to the entries “anti-realism” and “morality” and could stipulate the conjunction of the materials contained therein. The fact that neither of these approaches would be adequate—and, more strikingly, that following the two procedures would yield substantively non-equivalent results—reveals the contentious and unsettled nature of the topic.
Oh, well, that makes some sense, actually. Since everybody knows that “cognitivism” means that moral statements have truth-values, whereas “realism” seems to be a confused notion—I actually interpreted it to mean the same thing as cognitivism because otherwise I don’t know what on earth realism should even be.
Eliezer is a realist, he’s just also an indexicalist. According to his theory, when you use the word “morality”, you refer to “Human!morality”, and there are objective facts about that. His theory just also says that when Clippy uses the word “morality”, it refers to “Clippy!morality” (about which there are also objective facts, which are logically independent of the facts about “Human!morality”). Just like when you say “water”, it refers to water, but when twin-you says water, it refers to XYZ.
I thought that when humans and Clippy speak about morality, they speak about the same thing (assuming that they are not lying and not making mistakes).
The difference is in connotations. For humans, morality has a connotation “the thing that should be done”. For Clippy, morality has a connotation “this weird stuff humans care about”.
So, you could explain the concept of morality to Clippy, and then also explain that X is obviously moral. And Clippy would agree with you. It just wouldn’t make Clippy any more likely to do X; the “should” emotion would not get across. The only result would be Clippy remembering that humans feel a desire to do X; and that information could be later used to create more paperclips.
Clippy’s equivalent of “should” is connected to maximizing the number of paperclips. The fact that X is moral is about as much important for it as an existence of a specific paperclip is for us. “Sure, X is moral. I see. I have no use of this fact. Now stop bothering me, because I want to make another paperclip.”
> According to his theory, when you use the word “morality”, you refer to “Human!morality”, and there are objective facts about that.
If this is a theory about what people mean when they say “morality”, then he is wrong about a significant percentage of people, as a matter of simple fact.
And what kinds of things are the things that people mean? Semantic entities, or entities in the world? If semantic, intensions or Kaplanian characters or something else?
This is not a rhetorical question. I have absolutely no clue what “mean” means when applied to people. (Actually, I don’t even know what it means when applied to words, but that case feels intuitively much clearer than people meaning something.)
By “mean” I mean (no pun intended) that when people say a word, they use it to refer to a concept they have. This can be a semantic entity, or a physical entity, or a linguistic entity elsewhere in the same sentence, or anything else the speaker has a mental concept of that they can attach the word to, and which they expect the listeners to infer by hearing the word.
To put it another way: people use words to cause the listener to think thoughts which correspond in a certain way to the ones the speaker thinks. The thoughts of the speaker, which they intend to convey to the listener, are what they mean by the words.
Please be patient, I’m out of my depth somewhat.
If I say to you “invisible pink unicorn” or “spherical cube”, I would characterise myself as not having successfully meant anything, even though, if I’m not paying attention, it feels like I did. Am I wrong? Am I confusing meaning with reference, or some such? It certainly seems to me that I am in some way failing.
> If I say to you “invisible pink unicorn” or “spherical cube”, I would characterise myself as not having successfully meant anything, even though, if I’m not paying attention, it feels like I did.
In both examples I understand you to mean two (non-existent in the real world) items with a set of seemingly contradictory characteristics. So you did mean something. Not an object in the real world, but you meant the concept of an object containing contradictory characteristics, and gave examples of what “contradictory characteristics” are.
Indeed that meaning of contradiction is the reason “Invisible Pink Unicorn” is used to parody religion, etc.
Now if someone used the words without understanding that they are contradictory, or even believing the things in question are real—they’d still have meant something: An item in their model of the world. They’d be wrong that such an item really existed in the outside world, but their words would still have meaning in pinpointing to said item in their mental model.
Hm, thoughts are tricky things, and identity conditions of thoughts are trickier yet. I was just trying to see if you had a better idea of what “mean” might mean than me. But it seems we have to get by with what little we have.
Because I share your intuition that there is something fishy about the referential intention in Eliezer’s picture. With terms like water, it’s plausible that people intend to refer to “this stuff here” or “this stuff that [complicated description of their experiences with water]”. With morality, it seems dubious that they should be intending to refer to “this thing that humans would all want if we were absolutely coherent etc.”
Group-level moral relativism just is the belief that moral truths are indexed to groups. Since relativism is uncontroversially opposed to realism, “indexical realist” is a bit of a contradiction.
“Indexicality” in the philosopher’s sense means that the reference of a word depends on who utters it in which circumstances. Putnam argues that “water” (and all other natural kind terms) has an indexical component because its reference depends on whether you or twin-you utters it.
Which is about equivalent to claiming that anything might be relative, because it might be indexical along some unknown axis, in this case unobserved possible worlds. I’m afraid I don’t think that is very interesting.
What’s that concept of “relativity” you’re talking about, anyway? The proposition expressed by the sentence “clippy shouldn’t convert humans into paperclips”, uttered by a speaker of English in the actual world, is simply true. That the proposition expressed by the sentence varies depending on who utters it in which world is a completely different thing. There is no relativism about whether I am sitting at my desk just because I can report this fact by saying “I’m sitting at my desk” (which you can’t do, because if you said that sentence, you would be expressing a different proposition, one that’s about you, not me).
> “clippy shouldn’t convert humans into paperclips”, uttered by a speaker of English in the actual world, is simply true.
Only if moral realism is also true. If the above sentence is false when uttered by Clippy, it has a truth value which is indexical to who is uttering it, meaning that moral realism is false.
> There is no relativism about whether I am sitting at my desk just because I can report this fact by saying “I’m sitting at my desk”
It’s not relative, and it is indexical, because “I” is indexical. The point you are making is, again, not interesting.
Yes, of course. I was illustrating how the theory works.
> If the above sentence is false when uttered by Clippy, it has a truth value which is indexical to who is uttering it, meaning that moral realism is false.
No, it doesn’t. The thing is that on the view I’m talking about here, sentences don’t have truth-conditions, but propositions do. (Some) sentences express a proposition dependent on the context of utterance. Moral realism thus has to be the position that moral statements express propositions, because it wouldn’t make any sense otherwise—sentences don’t have truth-conditions anyway. When clippy says “One shouldn’t convert humans into paperclips”, he is simply not expressing the same proposition that I am expressing when I utter that sentence.
> The point you are making is, again, not interesting.
Then why exactly are you having a discussion that seems to be based on you not understanding concepts that you find “uninteresting”? I find your sense of “relative”, which seems to be “in any conceivable way dependent on anything”, pretty uninteresting, actually...
> When clippy says “One shouldn’t convert humans into paperclips”, he is simply not expressing the same proposition that I am expressing when I utter that sentence.
Why shouldn’t the truth-value attach to a (proposition, context) tuple? Why, for that matter shouldn’t it attach to a (sentence, language, context) tuple?
A (sentence,language,context) tuple uniquely determines a proposition, so I don’t mind if you attach a truth-value to that (relative to a world of evaluation, of course). But propositions don’t change their truth-value relative to a context by definition. A proposition is that thing which has a truth-value relative to a situation of evaluation.
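The character/content picture being invoked in this exchange can be sketched as a toy program. Everything here (the `Context` class, representing a world as a set of facts, the function name) is an illustrative assumption of mine, not anyone's actual formalism:

```python
# Toy model of the Kaplanian view discussed above: a sentence's
# "character" maps a context of utterance to a proposition (its
# "content"), and the proposition maps a world of evaluation to a
# truth value. Names and representations here are illustrative only.
from dataclasses import dataclass
from typing import Callable, FrozenSet, Tuple

@dataclass(frozen=True)
class Context:
    speaker: str  # who utters the sentence

# A proposition: world -> bool, where a world is just a set of atomic facts.
World = FrozenSet[Tuple[str, str]]
Proposition = Callable[[World], bool]

def im_sitting_at_my_desk(ctx: Context) -> Proposition:
    """Character of the sentence "I'm sitting at my desk":
    the indexical "I" is resolved by the context of utterance."""
    return lambda world: (ctx.speaker, "sitting_at_desk") in world

# A world in which Alice sits at her desk and Bob does not.
world: World = frozenset({("Alice", "sitting_at_desk")})

# Two utterances of the same sentence express different propositions:
p_alice = im_sitting_at_my_desk(Context(speaker="Alice"))
p_bob = im_sitting_at_my_desk(Context(speaker="Bob"))

print(p_alice(world))  # True: Alice's utterance is about Alice
print(p_bob(world))    # False: Bob's utterance expresses a different proposition
```

The sentence varies in which proposition it expresses, but each proposition's truth value at a given world is fixed; on this picture, that is why context-sensitivity alone does not amount to relativism about the facts reported.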
But—see this comment—I may have been too charitable in interpreting “realism” as what is more properly called “cognitivism”. That’s because I can’t think of any other interpretation of “realism” that even makes any sense.
Cognitivism is compatible with the claim that moral statements have truth values that vary with the speaker. (despite lack of explicit indexicals, yadda yadda). The contrary claim is that they don’t. I don’t see why the one claim should be more readily comprehensible than its opposite.
The contrary claim is often called realism, although that muddies the water, since in addition to the epistemological claim it can be used to state the claim that moral terms have real referents.
“Cognitivism encompasses all forms of moral realism, but cognitivism can also agree with ethical irrealism or anti-realism. Aside from the subjectivist branch of cognitivism, some cognitive irrealist theories accept that ethical sentences can be objectively true or false, even if there exist no natural, physical or in any way real (or “worldly”) entities or objects to make them true or false.
There are a number of ways of construing how a proposition can be objectively true without corresponding to the world:
- By the coherence rather than the correspondence theory of truth
- In a figurative sense: it can be true that I have a cold, but that doesn’t mean that the word “cold” corresponds to a distinct entity.
- In the way that mathematical statements are true for mathematical anti-realists. This would typically be the idea that a proposition can be true if it is an entailment of some intuitively appealing axiom — in other words, a priori analytical reasoning.
Crispin Wright, John Skorupski and some others defend normative cognitivist irrealism. Wright asserts the extreme implausibility of both J. L. Mackie’s error-theory and non-cognitivism (including S. Blackburn’s quasi-realism) in view of both everyday and sophisticated moral speech and argument. The same point is often expressed as the Frege-Geach Objection. Skorupski distinguishes between receptive awareness, which is not possible in normative matters, and non-receptive awareness (including dialogical knowledge), which is possible in normative matters.
Hilary Putnam’s book Ethics without ontology (Harvard, 2004) argues for a similar view, that ethical (and for that matter mathematical) sentences can be true and objective without there being any objects to make them so.
Cognitivism points to the semantic difference between imperative sentences and declarative sentences in normative subjects. Or to the different meanings and purposes of some superficially declarative sentences. For instance, if a teacher allows one of her students to go out by saying “You may go out”, this sentence is neither true nor false. It gives a permission. But, in most situations, if one of the students asks one of his classmates whether she thinks that he may go out and she answers “Of course you may go out”, this sentence is either true or false. It does not give a permission, it states that there is a permission.
Another argument for ethical cognitivism stands on the close resemblance between ethics and other normative matters, such as games. As much as morality, games consist of norms (or rules), but it would be hard to accept that it be not true that the chessplayer who checkmates the other one wins the game. If statements about game rules can be true or false, why not ethical statements? One answer is that we may want ethical statements to be categorically true, while we only need statements about right action to be contingent on the acceptance of the rules of a particular game—that is, the choice to play the game according to a given set of rules.”—WP
By the way, I suspect you call indexicality “uninteresting” because if it applies to “water”, then it probably applies to just about every word. This is true—but it is also why you should be happy to count Eliezer’s position as moral realism, or do you want to call yourself a relativist about water?
I am not saying water is indexical because of PWs or whatever. I am saying that cases of indexicality irrelevant to moral relativism are not interesting in the context of a discussion about moral relativism.
The SEP says that moral realism means thinking that (some) morality exists as objective fact
“Morality exists” and “as objective fact” are interpolations. The SEP article just defines moral realism as the claim that at least one moral statement is true (in the correspondence-theory sense of ‘true’). So moral realism is success theory (as contrasted with error theory), or success theory + moral-correspondence-theory.
some other process which would lead all right-thinking minds to agree about it
‘Right-thinking’ in what sense? Whence in the SEP article are you getting this claim?
‘The SEP says’ is also a mistake. The article you linked to defines ‘moral realism’ one way; the article on moral anti-realism defines it in a completely different way. (One that does try to make sense of an ‘objectivity’ constraint.) Good evidence that this is a bad word.
‘The SEP says’ is also a mistake. The article you linked to defines ‘moral realism’ one way; the article on moral anti-realism defines it in a completely different way.
I’m still bothered by the fact that different people mean different and in fact contradictory things by “moral realism”.
This is a strong argument against moral realism. If the thing were true, it would be easier to define—or at least, different people’s definitions would be of the same object, even if they explained it differently.
I’m still bothered by the fact that different people mean different and in fact contradictory things by “moral realism”.
The SEP says that moral realism means thinking that (some) morality exists as objective fact, which can be discovered through thinking or experimentation or some other process which would lead all right-thinking minds to agree about it. That is also how I understood the term before reading these posts.
And yet Eliezer seems to call himself (or be called?) a moral realist, even though he explicitly only talks about MoralGood!Eliezer (or !Humanity, !CEV, etc.) This is confusing and consequently irritating to people including myself.
So when you ask if:
maybe some people have the intuition that the orthogonality thesis is at odds with moral realism.
What do you mean? I think it’s time to taboo “moral realism” because people have repeatedly failed to agree on what these words should mean.
I concur. It seems to me this always devolves into a debate over definitions without anyone acknowledging that’s what is going on.
The SEP doesn’t say this. Actually, the SEP doesn’t even use the word “objective.” What the SEP actually says is, “Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right,” and that’s it.
On Eliezer’s view, as I understand it, human!morality just is morality, simpliciter.
This is all a matter of misunderstanding the meaning of words, and nobody is objectively right or wrong about that, since the disagreement is widespread—I’m not the only one to complain.
To me, an unqualified “fact” is, by implication, a simple claim about the universe, not a fact about the person holding the belief in that fact. An unqualified “fact” should be true or false in itself, without requiring you to further specify you meant the instance-of-that-fact that applies to some particular person with particular moral beliefs.
If SEP’s usage of “fact” is taken to mean “a fact about the person holding the moral belief”, the fact being that the person does hold that belief, then I don’t understand what it would mean to say that there aren’t any moral facts (i.e. moral anti-realism). Would it mean to claim that people have no moral beliefs? That’s obviously false.
That’s exactly what bothers me—that he (and other people agree with this) redefines the word “morality” to mean human!morality, and this confuses people (I’m not the only one) who expect that word to mean something else, depending on context. (For example, the meta-concept of morality, as opposed to a concrete set of moral beliefs such as Eliezer!morality or humanity!morality.)
I agree that if everyone agreed to Eliezer’s usage, then discussing morality would be easier. But it’s just a fact that many people use the word differently from him. And when faced with such inconsistency, I would prefer that people either always qualify their usage, or taboo the word entirely.
It’s a fact that my height is less than six feet. It’s also a fact that I disapprove of torture. These are objective facts, not opinions or one person’s suspicions. It’s not just that I object to claims that I’m seven feet tall; such claims would be false. And if someone says of me that I approve of torture, they’re in error, as surely as if they said grass is red and ponies have seventeen hooves.
However, if when I say ‘torture is wrong’, I mean the fact that I disapprove of torture, I am using the word in a relativist sense. The statement “torture is wrong” is saying something about the speaker. But it’s also saying something about the listener; I expect the listener to react in some way to the idea I’m expressing. I don’t go around saying “torture is flooble”; I expect that listeners don’t assign any significance to floobleness, but they do to wrongness.
Relativism does not mean that moral claims become mere matters of passing fancy; it means that moral claims express preferences of particular minds (including speakers’ and listeners’); understanding them requires understanding something about the minds of those who make them.
Consider: As an English-speaker, you might find it distasteful if your neighbor named her daughter “Porn”. You might even think it was wrong, especially if you had concerns about how other English-speakers would react to a little girl named Porn. If you were a Thai-speaker living in a Thai language community, you probably wouldn’t see a problem, because “Porn” means “Blessing” in Thai and is a common female name. Understanding why the English-speaker is squicked by the idea of a little girl named Porn, but the Thai-speaker is not, requires knowing something about English and Thai languages, as well as about cultural responses to different sorts of mental imagery involving children.
But suppose that when I say “torture is wrong”, I mean “Any intelligent mind, no matter its origin, if it is capable of understanding what ‘torture’ means, will disapprove of torture.” That is, a relativisty-preferencey sort of “wrongness” follows from some fact that is true about all intelligent minds. That’s a very different claim. It’s a lot closer to what people tend to think of as “absolute, objective morality”.
Understanding their content, understanding why the speaker considers them true, or understanding why they are true-for-speaker?
Is the more general principle “don’t give your children embarrassing names” equally relative? How about “don’t embarrass people in general”? Or “don’t do unpleasant things to people in general”?
That is how Chris and SEP are using the term.
Then I don’t understand Chris’s comment. I said:
And Chris replied:
I took Chris’s meaning to be that moral realism (as defined by the SEP) says that moral claims are fact claims possessing truth values but says nothing about the discoverability or computability of those truth values. Your definition would have every moral realist insisting that every moral claim can be proven either true or false, but it seems to me that Chris’ definition allows moral realists to leave open Gödel-incompleteness status for moral claims, considering their truth or falsity to exist but be possibly incomputable, and still be moral realists. Or, to take no position on whether rational minds would come to the truth values of moral claims, only on whether the truth values existed. Your definition would exclude both of those from moral realism.
Chris, please correct me if this is not what you meant.
I have no problem with Gödel-incompleteness, uncomputability, and so on in a system that allows you to state any moral proposition.
However: if a moral realist believes that “moral claims are fact claims possessing truth values”, then what does he believe regarding the proposition (1) “there exists at least one moral claim that can be proven true or false”? (Leaving aside claims that simply induce contradictions, are not well defined, etc.)
If he thinks such a claim exists, that is the same as saying there is a Universally Compelling Argument for or against that claim. And that is a logical impossibility. I can always construct a mind that is immune to any particular argument.
If he thinks no such claims exist, then it seems to be a kind of dualism—postulating a property “truth” of moral claims, which is not causally entangled with the physical world. It also seems pointless—why care about it if no actual mind can ever discover such truths?
ETA: talking about ‘proving’ claims true or false is a simplification. In reality we have degrees of beliefs in the truth-value of claims. But my point is that moral-realistic claims seem to be disengaged from reality; substitute “provide evidence for” in place of “prove” and my argument should still work.
If you needed my comment to decide that not understanding Chris’s comment is a much better hypothesis than not understanding Chris and SEP’s use of “fact,” then you have much worse problems than not understanding Chris’s comment.
I knew I didn’t understand something about Chris’s comment when I first read it. Could you explain it and help me understand, please?
I think the problem lies in your usage of the phrase “objective fact”.
For example, if I claim “broccoli is tasty”, my claim purports to report a fact. Plausibly, it purports to report a fact about me—namely, that I like broccoli. If someone else were to claim “broccoli is tasty”, her utterance would also purport to report a fact—plausibly, the fact that she likes broccoli. So two token utterances of the very same type may pick out different facts. If this is the case, “broccoli is tasty” is true when asserted by broccoli-lovers and false when asserted by broccoli-haters. This should not be surprising, provided that it is interpreted as a disguised indexical claim.
Clearly, there is no experimental process whereby all right-thinking people can conclude that broccoli is tasty (or, alternatively, that broccoli is not tasty), even though several right-thinking people can justifiably arrive at this conclusion (by eating broccoli and liking it, say). Crucially, this conclusion is consistent with being a realist about broccoli-tastiness, but inconsistent with thinking there are objective facts about broccoli-tastiness (as you use the term). Likewise, one can be a realist about morality without thinking there are objective facts about morality (again, as you use the term).
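The disguised-indexical reading of “broccoli is tasty” can be made concrete with a toy sketch. The speakers and the likes-table here are hypothetical, and Python stands in for the semantics:

```python
# Toy model of "broccoli is tasty" as a disguised indexical: each token
# utterance expresses the proposition that *that speaker* likes broccoli.

likes_broccoli = {"alice": True, "bob": False}  # hypothetical speakers

def proposition_expressed(speaker):
    # The fact the utterance purports to report.
    return ("likes broccoli", speaker)

def truth_value(proposition):
    _, speaker = proposition
    return likes_broccoli[speaker]

# Two token utterances of the same sentence type pick out different
# facts, so the sentence is true from one mouth and false from another,
# while each speaker still reports a perfectly real fact.
assert truth_value(proposition_expressed("alice")) is True
assert truth_value(proposition_expressed("bob")) is False
```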
When I say “objective fact”, I mean (in context) a non-indexical one.
The original problem I raised was that some people who talked about things being “moral” meant those statements indexically, and others meant them objectively, and this created a lot of confusion.
I use the term “objective facts about morality” to mean “non-indexical facts which do not depend on picking out the person holding the moral beliefs”. Moral realism is the belief such objective facts about morality can and/or do exist.
Of course, one is free to interpret “moral realism” as you do—it’s a natural enough interpretation, and may even be the most common one among philosophers. However, this is not the definition given in the SEP. According to it, “moral realists are those who think that...moral claims do purport to report facts and are true if they get the facts right”. This does not entail that moral realists think that moral claims purport to report objective facts. But isn’t such a loose interpretation of “moral realism” vacuous? As you say:
The moral anti-realist can choose from among two main alternatives if she wishes to deny moral realism, which I understand as being committed to the following two theses: (1) moral claims purport to report some (not necessarily objective) facts, and (2) some moral claims are true. First, she can maintain that all moral claims are false, which is a plausible suggestion: perhaps our moral claims purport to be about some normative aspect of the world, but the world lacks this normative aspect. Second, she can maintain that no moral claims purport to report facts; instead, all moral claims express emotions. On this view, saying “setting cats on fire is wrong” is tantamount to exclaiming “Boo!” or “Ew!”
That would still be discussing an objective claim—just one that happens to be false. On a par with discussing a mathematical proposition which is false, or an empirical hypothesis which is false: both of these are independent of the person who says them or believes in them. Just so, discussing normative aspects of the world—whether they exist or not, and whether they are as claimed or not—isn’t the same as discussing normative beliefs of a person.
So calling this moral anti-realism seems to use my sense of “moral realism” (objective fact), not the SEP’s.
In one way, this is again moral anti-realism in my sense of the phrase: the claim that morals don’t exist separately from the moral beliefs of concrete persons. (I hold this view.)
In another way, it can be read as a claim about what people mean when they talk about morals. In that case, the claim is plainly wrong, because many people are moral realists.
So to sum up, I’m afraid I still don’t see what it would mean to be a moral anti-realist in what you say is the SEP sense.
But there isn’t a meta-concept of morality. If you try to abstract one, you just end up with something like “that which motivates”, which is empty unless you specify which specific minds can be motivated by it, and then you’re back where you started.
There are several different uses of morality, each of which results from a different meta-concept. An Aristotelian, for example, would talk about morality as fitting a human’s purpose (as would a Christian). Everybody uses the same word for several fundamentally different concepts, some of which have little or no basis in fact.
Literally true in isolation, but so completely irrelevant to this thread, I can only describe this comment as a lie.
Different humans have somewhat different morals. I can still talk about “morals” in general, because they are a special kind of motivations in humans. Talking about morals in minds in general indeed makes little sense.
To whom? AFAICS, if you have minds living in a community, and they can interact in ways that cause negative and positive utility to each other, then you have the problem that morality solves... and that is a very general set of conditions.
I think what Dan means is that different kinds of minds in different kinds of community might need quite different solutions to the problem of interacting effectively, which might lead to quite different notions of morality, and that if that’s true then you shouldn’t expect any single notion of morality to be universally applicable.
Or they might not. It isn’t at all obvious.
I came up with the meta-concept “behaving with positive regard to the preferences of others”. Does that suffer from those problems?
If everyone agreed to EY’s usage, discussing alien morality would be more difficult.
How so? You can just say “alien values”.
Not all values are moral.
It’s often difficult to figure out which human preferences are moral v. amoral. That would be a vastly more challenging task for an alien species, such that we’d probably be better off in most cases by prohibiting ourselves from sorting alien values in that way.
That isn’t a good reason to subsume moral values under values in the human case.
Deleted
If everyone agreed on any one usage, that would be far better than everyone disagreeing.
True enough. But I think for the members of LW to adopt EY’s usage would move us further away from that point, not closer.
Yes, it does. But it says it in the article Moral Anti-Realism, not the article cited above, Moral Realism. The former article is very interested in objectivity constraints, but expresses a great deal of confusion about how to make sense of them; the latter article mentions them only to toss them out for being too confused. (It would not be too surprising if this has something to do with the latter author being more convinced of the truth of ‘realism’, hence wanting to make the Realism brand simple, clean, and appealing to a wider audience.)
If your encyclopedia has an ‘Apples’ article and a ‘Non-Apples’ article, and the two articles completely disagree about what it means to be an ‘Apple’, then you have your first clue that the word ‘Apple’ should always come pre-tabooed.
(ETA: More generally, be aware that ‘the SEP says X’ is less reliable than ‘SEP article Y says X’, because articles may disagree with each other. SEP is an anthology of introductory essays. We wouldn’t normally say ‘Very Short Introductions says X’, even if we trust the VSI brand quite a bit.)
Almost. Moral realists (even on the more inclusive definitions) also demand that at least one moral claim of this sort be true. (This is asserted in the sentence right after your quotation terminates.) That’s why error theory is not a form of moral realism; realism is a (perhaps improper) subset of success theory.
Oh my god, the “moral anti-realism” article has what is possibly the best opening paragraph I’ve seen in the SEP:
Another hypothesis is that EY is inconsistent in his views, i.e. he attaches the standard meaning to MR but doesn’t always espouse it.
Welcome to metaethics!
I seem to recall Eliezer saying that he was a cognitivist, but not a realist.
Oh, well, that makes some sense, actually. Since everybody knows that “cognitivism” means that moral statements have truth-values, whereas “realism” seems to be a confused notion—I actually interpreted it to mean the same thing as cognitivism because otherwise I don’t know what on earth realism should even be.
Eliezer is a realist, he’s just also an indexicalist. According to his theory, when you use the word “morality”, you refer to “Human!morality”, and there are objective facts about that. His theory just also says that when Clippy uses the word “morality”, it refers to “Clippy!morality” (about which there are also objective facts, which are logically independent of the facts about “Human!morality”). Just like when you say “water”, it refers to water, but when twin-you says water, it refers to XYZ.
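If it helps, here is a toy sketch of that indexicalist picture in Python. To be clear, the dictionaries and the example claim are illustrative stand-ins, not Eliezer’s actual formalism:

```python
# Sketch: "morality" in a speaker's mouth picks out that speaker's value
# system; there are then plain, objective facts about each referent, and
# the facts about the two referents are logically independent.
referent = {"human": "Human!morality", "clippy": "Clippy!morality"}

facts_about = {
    "Human!morality": {"torture is wrong": True},
    "Clippy!morality": {"torture is wrong": False},
}

def evaluate(speaker, claim):
    # The speaker fixes the referent; the referent fixes the truth value.
    return facts_about[referent[speaker]][claim]

# Both speakers state objective facts; they are just facts about
# different referents, exactly as with "water" on Twin Earth.
assert evaluate("human", "torture is wrong") is True
assert evaluate("clippy", "torture is wrong") is False
```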
I thought that when humans and Clippy speak about morality, they speak about the same thing (assuming that they are not lying and not making mistakes).
The difference is in connotations. For humans, morality has a connotation “the thing that should be done”. For Clippy, morality has a connotation “this weird stuff humans care about”.
So, you could explain the concept of morality to Clippy, and then also explain that X is obviously moral. And Clippy would agree with you. It just wouldn’t make Clippy any more likely to do X; the “should” emotion would not get across. The only result would be Clippy remembering that humans feel a desire to do X; and that information could be later used to create more paperclips.
Clippy’s equivalent of “should” is connected to maximizing the number of paperclips. The fact that X is moral is about as much important for it as an existence of a specific paperclip is for us. “Sure, X is moral. I see. I have no use of this fact. Now stop bothering me, because I want to make another paperclip.”
Oh, yes. I was using “moral” the same way you used “should” here.
So why do humans have different words for “would do it” and “should do it”?
If this is a theory about what people mean when they say “morality”, then he is wrong about a significant percentage of people, as a matter of simple fact.
What does it mean for something to be theory about what people mean?
It means the thing the theory tries to model, predict, and explain, is “what do people mean”.
And what kinds of things are the things that people mean? Semantic entities, or entities in the world? If semantic, intensions or Kaplanian characters or something else?
This is not a rhetorical question. I have absolutely no clue what “mean” means when applied to people. (Actually, I don’t even know what it means when applied to words, but that case feels intuitively much clearer than people meaning something.)
By “mean” I mean (no pun intended) that when people say a word, they use it to refer to a concept they have. This can be a semantic entity, or a physical entity, or a linguistic entity elsewhere in the same sentence, or anything else the speaker has a mental concept of that they can attach the word to, and which they expect the listeners to infer by hearing the word.
To put it another way: people use words to cause the listener to think thoughts which correspond in a certain way to the ones the speaker thinks. The thoughts of the speaker, which they intend to convey to the listener, are what they mean by the words.
Please be patient, I’m out of my depth somewhat. If I say to you “invisible pink unicorn” or “spherical cube”, I would characterise myself as not having successfully meant anything, even though, if I’m not paying attention, it feels like I did.
Am I wrong? Am I confusing meaning with reference, or some such? It certainly seems to me that I am in some way failing.
In both examples I understand you to mean two (non-existent in the real world) items with a set of seemingly contradictory characteristics. So you did mean something. Not an object in the real world, but you meant the concept of an object containing contradictory characteristics, and gave examples of what “contradictory characteristics” are.
Indeed that meaning of contradiction is the reason “Invisible Pink Unicorn” is used to parody religion, etc.
Now if someone used the words without understanding that they are contradictory, or even believing the things in question are real—they’d still have meant something: An item in their model of the world. They’d be wrong that such an item really existed in the outside world, but their words would still have meaning in pinpointing to said item in their mental model.
Hm, thoughts are tricky things, and identity conditions of thoughts are trickier yet. I was just trying to see if you had a better idea of what “mean” might mean than me. But it seems we have to get by with what little we have.
Because I share your intuition that there is something fishy about the referential intention in Eliezer’s picture. With terms like water, it’s plausible that people intend to refer to “this stuff here” or “this stuff that [complicated description of their experiences with water]”. With morality, it seems dubious that they should be intending to refer to “this thing that humans would all want if we were absolutely coherent etc.”
Group-level moral relativism just is the belief that moral truths are indexed to groups. Since relativism is uncontroversially opposed to realism, “indexical realist” is a bit of a contradiction.
“Indexicality” in the philosopher’s sense means that the reference of a word depends on who utters it in which circumstances. Putnam argues that “water” (and all other natural kind terms) has an indexical component because its reference depends on whether you or twin-you utters it.
Which is about equivalent to claiming that anything might be relative, because it might be indexical along some unknown axis, in this case unobserved possible worlds. I’m afraid I don’t think that is very interesting.
What’s that concept of “relativity” you’re talking about, anyway? The proposition expressed by the sentence “clippy shouldn’t convert humans into paperclips”, uttered by a speaker of English in the actual world, is simply true. That the proposition expressed by the sentence varies depending on who utters it in which world is a completely different thing. There is no relativism about whether I am sitting at my desk just because I can report this fact by saying “I’m sitting at my desk” (which you can’t do, because if you said that sentence, you would be expressing a different proposition, one that’s about you, not me).
Only if moral realism is also true. If the above sentence is false when uttered by Clippy, it has a truth value which is indexical to who is uttering it, meaning that moral realism is false.
It’s not relative, and it is indexical, because “I” is indexical. The point you are making is again, not interesting.
Yes, of course. I was illustrating how the theory works.
No, it doesn’t. The thing is that on the view I’m talking about here, sentences don’t have truth-conditions, but propositions have. (Some) sentences express a proposition dependent on the context of utterance. Moral realism thus has to be the position that moral statements express propositions, because it wouldn’t make any sense otherwise—sentences don’t have truth-conditions anyway. When clippy says “One shouldn’t convert humans into paperclips”, he is simply not expressing the same proposition that I am expressing when I utter that sentence.
Then why exactly are you having a discussion that seems to be based on you not understanding concepts that you find “uninteresting”? I find your sense of “relative”, which seems to be “in any conceivable way dependent on anything”, pretty uninteresting, actually...
Why shouldn’t the truth-value attach to a (proposition, context) tuple? Why, for that matter, shouldn’t it attach to a (sentence, language, context) tuple?
A (sentence,language,context) tuple uniquely determines a proposition, so I don’t mind if you attach a truth-value to that (relative to a world of evaluation, of course). But propositions don’t change their truth-value relative to a context by definition. A proposition is that thing which has a truth-value relative to a situation of evaluation.
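This two-stage picture (context of utterance fixes which proposition is expressed; world of evaluation fixes the truth value) can be sketched loosely in Python. The speakers and the world-dictionary are purely illustrative:

```python
# Loose two-stage sketch: a sentence-in-context determines a proposition;
# the proposition then has a truth value only relative to a world of
# evaluation, and never varies with context.

def sentence_I_am_sitting(context):
    # Stage 1: the context of utterance fixes which proposition
    # is expressed (here, a proposition about the context's speaker).
    speaker = context["speaker"]
    def proposition(world):
        # Stage 2: the world of evaluation fixes the truth value.
        return world["sitting"][speaker]
    return proposition

world = {"sitting": {"me": True, "you": False}}

prop_me = sentence_I_am_sitting({"speaker": "me"})
prop_you = sentence_I_am_sitting({"speaker": "you"})

# Same sentence type, different contexts, different propositions;
# each proposition's truth value depends only on the world evaluated.
assert prop_me(world) is True
assert prop_you(world) is False
```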
But—see this comment—I may have been too charitable in interpreting “realism” as what is more properly called “cognitivism”. That’s because I can’t think of any other interpretation of “realism” that even makes any sense.
Cognitivism is compatible with the claim that moral statements have truth values that vary with the speaker (despite the lack of explicit indexicals, yadda yadda). The contrary claim is that they don’t. I don’t see why the one claim should be more readily comprehensible than its opposite.
The contrary claim is often called realism, although that muddies the water, since in addition to the epistemological claim it can be used to state the claim that moral terms have real referents.
“Cognitivism encompasses all forms of moral realism, but cognitivism can also agree with ethical irrealism or anti-realism. Aside from the subjectivist branch of cognitivism, some cognitive irrealist theories accept that ethical sentences can be objectively true or false, even if there exist no natural, physical or in any way real (or “worldly”) entities or objects to make them true or false.
There are a number of ways of construing how a proposition can be objectively true without corresponding to the world:
By the coherence rather than the correspondence theory of truth
In a figurative sense: it can be true that I have a cold, but that doesn’t mean that the word “cold” corresponds to a distinct entity.
In the way that mathematical statements are true for mathematical anti-realists. This would typically be the idea that a proposition can be true if it is an entailment of some intuitively appealing axiom — in other words, a priori analytical reasoning.
Crispin Wright, John Skorupski and some others defend normative cognitivist irrealism. Wright asserts the extreme implausibility of both J. L. Mackie’s error-theory and non-cognitivism (including S. Blackburn’s quasi-realism) in view of both everyday and sophisticated moral speech and argument. The same point is often expressed as the Frege-Geach Objection. Skorupski distinguishes between receptive awareness, which is not possible in normative matters, and non-receptive awareness (including dialogical knowledge), which is possible in normative matters.
Hilary Putnam’s book Ethics without ontology (Harvard, 2004) argues for a similar view, that ethical (and for that matter mathematical) sentences can be true and objective without there being any objects to make them so.
Cognitivism points to the semantic difference between imperative sentences and declarative sentences in normative subjects. Or to the different meanings and purposes of some superficially declarative sentences. For instance, if a teacher allows one of her students to go out by saying “You may go out”, this sentence is neither true or false. It gives a permission. But, in most situations, if one of the students asks one of his classmates whether she thinks that he may go out and she answers “Of course you may go out”, this sentence is either true or false. It does not give a permission, it states that there is a permission.
Another argument for ethical cognitivism stands on the close resemblance between ethics and other normative matters, such as games. As much as morality, games consist of norms (or rules), but it would be hard to accept that it be not true that the chessplayer who checkmates the other one wins the game. If statements about game rules can be true or false, why not ethical statements? One answer is that we may want ethical statements to be categorically true, while we only need statements about right action to be contingent on the acceptance of the rules of a particular game—that is, the choice to play the game according to a given set of rules.”—WP
Nothing in this is at all illuminating as to what on earth realism is supposed to be.
Do you understand what moral subjectivism is?
By the way, I suspect you call indexicality “uninteresting” because if it applies to “water”, then it probably applies to just about every word. This is true—but it is also why you should be happy to count Eliezer’s position as moral realism. Or do you want to call yourself a relativist about water?
I am not saying water is indexical because of PWs or whatever. I am saying that cases of indexicality irrelevant to moral relativism are not interesting in the context of a discussion about moral relativism.
They are because they help to illustrate the theory.
No, Relativism is a type of Realism. You might be confusing it with Subjectivism.
“Morality exists” and “as objective fact” are interpolations. The SEP article just defines moral realism as the claim that at least one moral statement is true (in the correspondence-theory sense of ‘true’). So moral realism is success theory (as contrasted with error theory), or success theory + moral-correspondence-theory.
‘Right-thinking’ in what sense? Whence in the SEP article are you getting this claim?
‘The SEP says’ is also a mistake. The article you linked to defines ‘moral realism’ one way; the article on moral anti-realism defines it in a completely different way. (One that does try to make sense of an ‘objectivity’ constraint.) Good evidence that this is a bad word.
Thank you for pointing this out.
For the rest, please see my response here.
This is a strong argument against moral realism. If the thing were true, it would be easier to define—or at least, different people’s definitions would be of the same object, even if they explained it differently.