Both realism¹ and relativism are false. Unfortunately this comment is too short to contain the proof, but there’s a passable sequence on it.
¹ As you’ve defined it here, anyway. Moral realism as normally defined simply means “moral statements have truth values” and does not imply universal compellingness.
What does it mean for a statement to be true but not universally compelling?
If it isn’t universally compelling for all agents to believe “gravity causes things to fall,” then what do we mean when we say the sentence is true?
Well, there’s the more obvious sense, that there can always exist an “irrational” mind that simply refuses to believe in gravity, regardless of the strength of the evidence. “Gravity makes things fall” is true, because it does indeed make things fall. But not compelling to those types of minds.
But, in a more narrow sense, which we are more interested in when doing metaethics, a sentence of the form “action A is xyzzy” may be a true classification of A, and may be trivial to show, once “xyzzy” is defined. But an agent that did not care about xyzzy would not be moved to act based on that. It could recognise the truth of the statement but would not care.
For a stupid example, I could say to you “if you do 13 push-ups now, you’ll have done a prime number of push-ups”. Well, the statement is true, but the majority of the world’s population would be like “yeah, so what?”.
In contrast, a statement like “if you drink-drive, you could kill someone!” is generally (but sadly not always) compelling to humans. Because humans like to not kill people, they will generally choose not to drink-drive once they are convinced of the truth of the statement.
But isn’t the whole debate about moral realism vs. anti-realism about whether “Don’t murder” is universally compelling to humans? Noticing that pebblesorters aren’t compelled by our values doesn’t explain whether humans should necessarily find “don’t murder” compelling.
I identify as a moral realist, but I don’t believe all moral facts are universally compelling to humans, at least not if “universally compelling” is meant descriptively rather than normatively. I don’t take moral realism to be a psychological thesis about what particular types of intelligences actually find compelling; I take it to be the claim that there are moral obligations and that certain types of agents should adhere to them (all other things being equal), irrespective of their particular desire sets and whether or not they feel any psychological pressure to adhere to these obligations. This is a normative claim, not a descriptive one.
What? Moral realism (in the philosophy literature) is about whether moral statements have truth values, that’s it.
When I said universally compelling, I meant universally. To all agents, not just humans. Or any large class. For any true statement, you can probably expect to find a surprisingly large number of agents who just don’t care about it.
Whether “don’t murder” (or rather, “murder is bad” since commands don’t have truth values, and are even less likely to be generally compelling) is compelling to all humans is a question for psychology. As it happens, given the existence of serial killers and sociopaths, probably the answer is no, it isn’t. Though I would hope it to be compelling to most.
I have shown you two true but non-universally-compelling arguments. Surely the difference must be clear now.
This is incorrect, in my experience. Although “moral realism” is a notoriously slippery phrase and gets used in many subtly different ways, I think most philosophers engaged in the moral realism vs. anti-realism debate aren’t merely debating whether moral statements have truth values. The position you’re describing is usually labeled “moral cognitivism”.
Anyway, I suspect you mis-spoke here, and intended to say that moral realists claim that (certain) moral statements are true, rather than just that they have truth values (“false” is a truth value, after all). But I don’t think that modification captures the tenor of the debate either. Moral realists are usually defending a whole suite of theses—not just that some moral statements are true, but that they are true objectively and that certain sorts of agents are under some sort of obligation to adhere to them.
I think you guys should taboo “moral realism”. I understand that it’s important to get the terminology right, but IMO debates about nothing but terminology have little value.
Anyway, I suspect you mis-spoke here, and intended to say that moral realists claim that (certain) moral statements are true, rather than just that they have truth values (“false” is a truth value, after all).
Err, right, yes, that’s what I meant. Error theorists do of course also claim that moral statements have truth values.
Moral realists are usually defending a whole suite of theses—not just that some moral statements are true, but that they are true objectively and that certain sorts of agents are under some sort of obligation to adhere to them.
True enough, though I guess I’d prefer to talk about a single well-specified claim rather than a “usually” cluster in philosopher-space.
So, a philosopher who says:
is not a moral realist? Because that philosopher does not seem to be a subjectivist, an error theorist, or non-cognitivist.
If that philosopher believes that statements like “murder is wrong” are true, then they are indeed a realist. Did I say something that looked like I would disagree?
You guys are talking past each other, because you mean something different by ‘compelling’. I think Tim means that X is compelling to all human beings if any human being will accept X under ideal epistemic circumstances. You seem to take ‘X is universally compelling’ to mean that all human beings already do accept X, or would on a first hearing.
Would you agree that all human beings would accept all true statements under ideal epistemic circumstances (i.e. having heard all the arguments, seen all the evidence, in the best state of mind)?
I guess I must clarify. When I say ‘compelling’ here I am really talking mainly about motivational compellingness. Saying “if you drink-drive, you could kill someone!” to a human is generally motivationally compelling as an argument for not drink-driving: because humans don’t like killing people, a human will decide not to drink-drive (one in a rational state of mind, anyway).
This is distinct from accepting statements as true or false! Any rational agent, give or take a few, will presumably believe you about the causal relationship between drink-driving and manslaughter once presented with sufficient evidence. But it is a tiny subset of these who will change their decisions on this basis. A mind that doesn’t care whether it kills people will see this information as an irrelevant curiosity.
Having looked over that sequence, I haven’t found any proof that moral realism (on either definition) or moral relativism is false. Could you point me more specifically to what you have in mind (or just put the argument in your own words, if you have the time)?
No Universally Compelling Arguments is the argument against universal compellingness, as the name suggests.
Inseparably Right; or Joy in the Merely Good gives part of the argument that humans should be able to agree on ethical values. Another substantial part is in Moral Error and Moral Disagreement.
Thanks!
Edit: (Sigh), I appreciate the link, but I can’t make heads or tails of ‘No Universally Compelling Arguments’. I speak from ignorance as to the meaning of the article, but I can’t seem to identify the premises of the argument.
The central point is a bit buried.
If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization “All minds m: X(m)” has two to the trillionth chances to be false, while each existential generalization “Exists mind m: X(m)” has two to the trillionth chances to be true.
This would seem to argue that for every argument A, howsoever convincing it may seem to us, there exists at least one possible mind that doesn’t buy it.
So, there’s some sort of assumption as to what minds are:
I also wish to establish the notion of a mind as a causal, lawful, physical system… [emphasis original]
and an assumption that a suitably diverse set of minds can be described in less than a trillion bits. Presumably that upper bound was chosen because a few Fermi estimates put the information content of a human brain in the neighborhood of one trillion bits.
Of course, if you restrict the set of minds to those with special properties (e.g., human minds), then you might find universally compelling arguments on that basis:
From which we get Coherent Extrapolated Volition and friends.
If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization “All minds m: X(m)” has two to the trillionth chances to be false, while each existential generalization “Exists mind m: X(m)” has two to the trillionth chances to be true.
This doesn’t seem true to me, at least not as a general rule. For example, given every terrestrial DNA sequence describable in a trillion bits or less, it is not the case that every generalization of the form ‘s:X(s)’ has two to the trillionth chances to be false (e.g. ‘have more than one base pair’, ‘involve hydrogen’ etc.). Given that this doesn’t hold true of many other things, is this supposed to be a special fact about minds? Even then, it would seem odd to say that while all generalizations of the form m:X(m) have two to the trillionth chances to be false, nevertheless the generalization ‘for all minds, a generalization of the form m:X(m) has two to the trillionth chances to be false’ (which does seem to be of the form m:X(m)) is somehow more likely.
Also, doesn’t this inference imply that ‘being convinced by an argument’ is a bit that can flip on or off independently of any others? Eliezer doesn’t think that’s true, and I can’t imagine why he would think his (hypothetical) interlocutor would accept it.
It’s not a proof, no, but it seems plausible.
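For what it’s worth, the counting intuition being debated here can be sketched in a toy model. This is an editorial illustration, not anything from the thread: the predicate `convinced`, the 10-bit mind size, and the independence of each mind’s verdict are all assumptions made purely for the sake of the example.

```python
from itertools import product

# Toy model: a "mind" is an n-bit string; n = 10 here instead of a trillion,
# purely so the space is small enough to enumerate.
n = 10
minds = list(product([0, 1], repeat=n))
assert len(minds) == 2 ** n  # 1024 toy minds

# A toy predicate standing in for "mind m is convinced by argument A".
def convinced(m):
    return m[0] == 0  # arbitrary: only minds whose first bit is 0 "buy it"

# The universal generalization is a conjunction over all 2^n minds:
# each mind is a separate chance for it to come out false.
universal = all(convinced(m) for m in minds)

# The existential generalization is a disjunction: each mind is a
# separate chance for it to come out true.
existential = any(convinced(m) for m in minds)

print(universal, existential)  # False True
```

The asymmetry is just that `all` fails on a single counterexample while `any` succeeds on a single witness; whether real minds’ verdicts really behave like independent bits is exactly what the surrounding replies dispute.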
I mean to say, I think the argument is something of a paradox:
The claim the argument purports to defeat is something like this: for all minds, A is convincing. Let’s call this m:A(m).
The argument goes like this: for all minds (at or under a trillion bits etc.), a generalization of the form m:X(m) has a one in two to the trillionth chance of being true for each mind. Call this m:U(m), if you grant me that this claim has the form m:X(m).
If we infer from m:U(m) that any claim of the form m:X(m) is unlikely to be true, then to whatever extent I am persuaded that m:A(m) is unlikely to be true, to that extent I ought to be persuaded that m:U(m) is unlikely to be true. You cannot accept the argument, because accepting it as decisive entails accepting decisive reasons for rejecting it.
The argument seems to be fixable at this stage, since there’s a lot of room to generate significant distinctions between m:A(m) and m:U(m). If you were pressed to defend it (presuming you still wish to be generous with your time) how would you fix this? Or am I getting something very wrong?
for all minds (at or under a trillion bits etc.), a generalization of the form m:X(m) has a one in two to the trillionth chance of being true for each mind.
That’s not what it says; compare the emphasis in both quotes.
If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization “All minds m: X(m)” has two to the trillionth chances to be false, while each existential generalization “Exists mind m: X(m)” has two to the trillionth chances to be true.
Sorry, I may have misunderstood and presumed that ‘two to the trillionth chances to be false’ meant ‘one in two to the trillionth chances to be true’. That may be wrong, but it doesn’t affect my argument at all: EY’s argument for the implausibility of m:A(m) is that claims of the form m:X(m) are all implausible. His argument to the effect that all claims of the form m:X(m) are implausible is itself a claim of the form m:X(m).
“Rational” is broader than “human” and narrower than “physically possible”.
Do you really mean to say that there are physically possible minds that are not rational? In virtue of what are they ‘minds’ then?
Yes. There are irrational people, and they still have minds.
Ah, I think I just misunderstood which sense of ‘rational’ you intended.
Haven’t you met another human?
Sorry, I was speaking ambiguously. I meant ‘rational’ not in the normative sense that distinguishes good agents from bad ones, but ‘rational’ in the broader, descriptive sense that distinguishes anything capable of responding to reasons (even terrible or false ones) from something that isn’t. I assumed that was the sense of ‘rational’ Prawn was using, but that may have been wrong.
Irrelevant. I am talking about rational minds, he is talking about physically possible ones.
As noted at the time
UFAI sounds like a counterexample, but I’m not interested in arguing with you about it. I only responded because someone asked for a shortcut in the metaethics sequence.
I have essentially been arguing against a strong likelihood of UFAI, so that would be more like gainsaying.
Congratulations on being able to discern an overall message to EY’s metaethical disquisitions. I never could.