It seems you are not even building and fighting a strawman; you are fighting straw-windmills. You are so sure you are right about contentious topics that it’s off-putting.
“Can you explain the difference between relativistic and constructivist foundations for morality?”
When you talk about moral relativism, what you’re saying is that morality is relative to the beliefs of some community or something like that, right? If a community decides that something is moral, then it’s moral. That’s what it means to be moral. And people who are outside the community have nothing to say about that, have no rights, no leverage, to critique it. Whereas constructivism is just saying that morality is constructed. In other words, it’s not out there in the world to be discovered. It’s something that human beings come together and individually and collectively construct on the basis of something.
And there’ll be a difference between human constructivists who think that different people might ultimately construct different, perfectly plausible, sensible versions of morality because their individual inclinations and their passions are different. Whereas a Kantian constructivist will say that there is one uniquely rational moral system that you could construct. But there’s nothing in any of that that says that morality is relative to some community or that anything could be a decent formulation of morality as long as some community believes it, or that some person outside the community is not allowed to critique it. It’s just admitting that morality is not objectively real out there in the world like scientific facts are out there in the world.
Once I have my morality, I’m going to feel perfectly free to criticize other people who don’t go along with it. The critique is not on the basis that those people are objectively making a mistake, that they’re making a mistake if they say 2 plus 2 equals 5 or the universe is contracting or something like that. It’s a different criticism. It’s saying that according to my version of morality, they’re doing something wrong. Here’s why I think my version of morality is good. That’s it. Okay? It’s not objective and foundational, but I have no reason to say I’m not allowed to make that critique.
Once I have my morality, I’m going to feel perfectly free to criticize other people who don’t go along with it.
And vice versa.
The critique is not on the basis that those people are objectively making a mistake, that they’re making a mistake if they say 2 plus 2 equals 5 or the universe is contracting or something like that. It’s a different criticism. It’s saying that according to my version of morality, they’re doing something wrong. Here’s why I think my version of morality is good. That’s it. Okay? It’s not objective and foundational, but I have no reason to say I’m not allowed to make that critique.
The symmetry problem, the fact that every relativist can equally criticise every other, is a bug, not a feature. If there is no reasoned way to resolve a dispute, force will take the place of reason. In fact, it’s a straw man to say that the realist objection to relativism is that relativists can’t criticise… The actual point is that it is in vain… No relativist has a motivation to change their mind.
I don’t have much in the way of any general opposition to Carroll’s remarks in the quote you provide, but I do think Carroll characterizes relativism in a way that may be inaccurate, or at least incomplete. According to Carroll:
“If a community decides that something is moral, then it’s moral. That’s what it means to be moral. And people who are outside the community have nothing to say about that, have no rights, no leverage, to critique it.”
This may be true of some forms of moral relativism, but not all or the most defensible forms. Nothing about moral relativism prohibits the relativist from judging the moral actions of other people, or the cultural standards of other cultures, nor does relativism entail that they have no right or leverage to criticize those cultures. After all, the latter appear to be moral or at least normative claims themselves, and if you’re a relativist, you could reasonably ask: no right or leverage relative to what moral standard? The standards of the people or cultures I am judging, or relative to my own standards? A relativist does not have to think they can only judge people according to those people’s standards; they can endorse appraiser relativism, and think that they can judge others relative to their own standards.
One shortcoming in descriptions of moral relativism is that they frequently fail to distinguish between agent and appraiser relativism. Agent relativism holds that moral judgments are true or false relative to the standards of the agent performing the action (or that agent’s culture). Appraiser relativism holds that moral judgments are true or false relative to the moral framework of the agent (or the culture of the agent) judging the action in question. Here’s how the SEP distinguishes them:
“Appraiser relativism suggests that we do or should make moral judgments on the basis of our own standards, while agent relativism implies that the relevant standards are those of the persons we are judging.”
Many common depictions of relativism focus on agent relativism. And this seems consistent with Carroll’s description. Yet I suspect this emphasis stems from a tendency to characterize relativism in ways that seem to have more straightforward normative implications: people often reject relativism because it purportedly encourages or mandates indifference towards people with different moral standards. But this would only be true of at best some forms of moral relativism. Incidentally, Gowans, the author of the SEP article on moral relativism, says:
“Appraiser relativism is the more common position, and it will usually be assumed in the discussion that follows.”
I don’t know if this is true. But if it is, there’s something odd about depictions of relativism that seem closer to agent relativism than appraiser relativism. Appraiser relativism can get you something pretty close to the kind of constructivism Carroll describes, so I don’t think the relativism/constructivism distinction was necessary here. Relativism itself has the resources to do what Carroll proposes.
My goal with the remark is to accurately characterize relativism. Not to defend it. If someone wants to object to relativism on the grounds that it doesn’t achieve anything, that’s orthogonal to the point I was making. I’m not really sure I understand the objection, though. When you say the judgments achieve nothing, can you clarify what you mean? If I judge others as doing something wrong, I’m not sure why it would be an objection to tell me that this doesn’t achieve anything. Would it avoid the objection by achieving something in particular? If so, what?
Ethics is supposed to do things, not be an ivory tower approach.
The symmetry problem, the fact that every relativist can equally criticise every other, is a bug not a feature.
Alice: stop that, it’s wrong-for-me!
Bob: It’s OK by me, so I’m going to carry on.
Etc., ad infinitum.
If there is no reasoned way to resolve a dispute, force will take the place of reason. In fact, it’s a straw man to say that the realist objection to relativism is that relativists can’t criticise… The actual point is that it is in vain… No relativist has a motivation to change their mind.
This is a common feature of moral disputes even when no relativism is involved. Compare:
“You shouldn’t do that.” “It’s fine according to my values, and that’s all that matters.”
“You shouldn’t do that.” “Yes I should; you’re wrong about morality.”
If there’s an important difference between these that makes 1 problematic and 2 not, I’m failing to see it. In practice, the way you convince someone to change their behaviour is some combination of (a) appealing to moral ideas they do agree with you about and (b) influencing them not-explicitly-rationally to change their values (e.g., by exposing them to people they currently condemn so that they can see for themselves that they’re decent human beings). And both of these work equally well (or badly) whether either or both of the parties are moral realists.
“You shouldn’t do that.” “It’s fine according to my values, and that’s all that matters.”
“You shouldn’t do that.” “Yes I should; you’re wrong about morality.”
If there’s an important difference between these that makes 1 problematic and 2 not, I’m failing to see it.
1 is necessarily subjective, and 2 isn’t.
In practice, the way you convince someone to change their behaviour is some combination of (a) appealing to moral ideas they do agree with you about and (b) influencing them not-explicitly-rationally to change their values
Maybe in normie-land, but in philosophy you can go up meta levels.
Yes, 1 is necessarily subjective and 2 isn’t. But since what you were trying to do is to show that subjectivism is bad, it’s not really on to take “it’s subjective!” as a criticism.
Philosophers and other intellectual sorts may indeed be more open than normies to rational persuasion in matters of ethics. (So probably more of (a) and less of (b).) They’re also not much given to resolving their disagreements by brute force, realist or not, relativist or not, so your concern that “force will take the place of reason” doesn’t seem very applicable to them. Is there any evidence that philosophers who are moral realists are more readily persuaded to change their ethical positions than philosophers who are moral nonrealists? For what it’s worth, my intuition expects not.
Your argument was that for subjectivists “such judgements achieve nothing” on the grounds that “every relativist can equally criticise every other” because when criticized someone can say “It’s OK by me, so I’m going to carry on”, so that “force will take the place of reason” since “no relativist has a motivation to change their mind”.
I objected that this argument actually applies just as much to moral realists, the only difference being that the response changes from “It’s OK by me” to “It’s OK objectively”. No one is going to be convinced just by being told “X is wrong”; you have to offer some sort of argument starting from premises they share, and that’s exactly as true whether the people involved are realists or not, subjectivists or not, relativists or not. (Or, in either case, you can try to persuade by not-explicitly-rational means like just showing them the consequences of their alleged principles, or making them personally acquainted with people they are inclined to condemn, or whatever; this, too, works or fails just the same whether anyone involved is objectivist or subjectivist.)
When I made this objection, your reply was that “It’s OK by me” is “necessarily subjective” and “It’s OK objectively” isn’t. But if your argument against subjectivism depends on it being bad for something to be subjective then it is a circular argument.
Maybe that’s not what you meant. Maybe you were just doubling down on the claim that being “necessarily subjective” means there’s no hope of convincing anyone to change their moral judgements. But that’s exactly the thing I’m disagreeing with, and you’re not offering any counterargument by merely reiterating the claim I’m disagreeing with.
No one is going to be convinced just by being told “X is wrong”;
Obviously they are not, and that was not my argument.
you have to offer some sort of argument starting from premises they share, and that’s exactly as true whether the people involved are realists or not, subjectivists or not, relativists or not
I know.
But if your argument against subjectivism depends on it being bad for something to be subjective then it is a circular argument.
My argument was:
that for subjectivists “such judgements achieve nothing” on the grounds that “every relativist can equally criticise every other”
Yeah, that was your argument originally. But when I explained why I didn’t buy it you switched to “1 is necessarily subjective, and 2 isn’t” as if being subjective is known to be a fatal problem—but the question at issue is precisely whether being subjective is a problem or not!
Anyway: Anyone can equally criticize anyone, relativist or not, subjectivist or not, realist or not. Can you give some actual, reasonably concrete examples of moral disagreements in which moral nonrealism makes useful discussion impossible or pointless or something, and where in an equivalent scenario involving moral realists progress would be possible?
If I try to imagine such an example, the sort of thing I come up with goes like this. X and Y are moral nonrealists. X is torturing kittens. Y says “Stop that! It’s wrong!” X says “Not according to my values.” And then, if I understand you aright, Y is supposed to give up in despair because “every relativist can equally criticise every other” or something. But in practice, (1) Y need not give up, because maybe there are things in X’s values that Y thinks actually lead to the conclusion that one shouldn’t torture kittens, and (2) in a parallel scenario involving moral realists, the only difference is that X just says “No it isn’t”, and if Y wants not to give up here then they have to do the same as in the nonrealist scenario: find things X agrees with from which one can get to “don’t torture kittens”. And all the arguments are just the same in the two cases, except that in one Y has to be explicit about where they’re explicitly appealing to some potentially controversial matter of values. This is, it seems to me, not a disadvantage. (Those controversial matters are just as controversial for moral realists.)
Perhaps this isn’t the kind of scenario you have in mind. Or perhaps there’s some specific kind of argument you think realist-Y can make that might actually convince realist-X, that doesn’t have a counterpart in the nonrealist version of the scenario. If so, I’m all ears: show me the details!
I can think of one kind of scenario where progress is easier for realists. Kinda. Suppose X and Y are “the same kind” of moral realist: e.g., they are both divine command theorists and they belong to the same religion, or they are both hedonistic act-utilitarians, or something. In this case, they should be able to reduce their argument about torturing kittens to a more straightforwardly factual argument about what their scriptures say or what gives who how much pleasure. But this isn’t really about realism versus nonrealism. If we imagine the nearest nonrealist equivalents of these guys, then we find e.g. that X and Y both say “What I choose to value is maximizing the net pleasure minus pain in the world”—and then, just as if they were realists, X and Y can in principle resolve their moral disagreement by arguing about matters of nonmoral fact. And if we let X and Y remain realists, but have them be “of different kinds”—maybe X is a divine command theorist and Y is a utilitarian—then they can be as utterly stuck as any nonrealists could be. Y says: but look, torturing kittens produces all this suffering! X says: so what? suffering has nothing to do with value; the gods have commanded that I torture kittens. And the difficulty they have in making progress from there is exactly the same sort of difficulty as their nonrealist equivalents would have.
(I remark that “It would be awful if X were true, therefore X is false” is not a valid form of argument, so even if you are correct about moral nonrealism making it impossible or futile to argue about morality that wouldn’t be any reason to disbelieve moral realism. But I don’t think you are in fact correct about it.)
Anyway: Anyone can equally criticize anyone, relativist or not, subjectivist or not, realist or not
Only in the ultimate clown universe where there are no facts or rules.
need not give up, because maybe there are things in X’s values that Y thinks actually lead to the conclusion that one shouldn’t torture kittens
But if those things are subjective, the same problem re-applies.
Perhaps this isn’t the kind of scenario you have in mind. Or perhaps there’s some specific kind of argument you think realist-Y can make that might actually convince realist-X, that doesn’t have a counterpart in the nonrealist version of the scenario. If so, I’m all ears: show me the details!
Any realist argument that could do that. So long as there is such a thing. I think your real objection is that there are no good realist arguments. But you can’t be completely sure of that. If there is a 1% chance of a successful realist argument, then rational debaters who want to converge on the truth should take that chance, rather than blocking it off by assuming subjectivism.
If you assume subjectivism, you are guaranteed not to get onto a realist argument. If you assume realism, there is a possibility, but not a guarantee, of getting onto a realist solution.
I remark that “It would be awful if X were true, therefore X is false” is not a valid form of argument
It’s entirely valid if you are constructing something. Bridges that fall down are awful, so don’t construct them that way.
I think that when you say “if those things are subjective, the same problem re-applies” you are either arguing in a circle, or claiming something that’s just false.
Suppose X is a moral nonrealist (but not a nihilist: he does have moral values, he just doesn’t think they’re built into the structure of the universe somehow), and he’s doing something that actually isn’t compatible with his moral values but he hasn’t noticed. Crudely simple toy example for clarity: he’s torturing kittens because he’s a utilitarian and enjoys torturing kittens, but he somehow hasn’t considered the kittens’ suffering at all in his moral reckoning. Y (who, let’s suppose, is also a moral nonrealist, though it doesn’t particularly matter) points out that the kittens are suffering terribly. X thinks about it for a while and agrees that indeed his values say he shouldn’t torture kittens, and reluctantly stops doing it.
This seems to me a perfectly satisfactory way for things to go, and in particular it is no less satisfactory than if X is a moral realist who believes that hedonistic utilitarianism is an objective truth and stops torturing kittens because Y convinces him that the objective truth of hedonistic utilitarianism implies the objective truth that one shouldn’t torture kittens, rather than “merely” that his own acceptance of hedonistic utilitarianism implies that he shouldn’t torture kittens.
“Oh, but instead of being convinced X could just say: meh, maybe you’re right but who cares? And then Y will have no good arguments.” Sure. But that’s an argument not against moral nonrealism but against moral nihilism: against not actually having any moral values of any sort at all.
“Oh, sure, X may be convinced, but that doesn’t count because it wasn’t a realist argument. Only realist arguments count.” Well, then your argument is perfectly circular: nonrealism is bad because nonrealists can’t make realist arguments. And, sure, I will gladly concede that if you take it as axiomatic that nonrealism is bad then you can conclude that nonrealism is bad, but so what?
No, my real objection is not that there are no good realist arguments. I’m not sure quite what you mean by that phrase, though.
If you mean arguments that start from only nonmoral premises and deduce moral truths then as it happens I don’t believe there are any; if there are then indeed moral realism is correct; but, also, if there are then they should have as much force for an intelligent and openminded nonrealist (who will, on understanding the arguments, stop being a nonrealist) as for a realist.
If you mean arguments that assume realism but not anything more specific then I rather doubt that that assumption buys you anything, though I’m willing to be shown the error of my ways. At any rate, I can’t see how that assumption is ever going to be any use in, say, arguing that X shouldn’t be torturing kittens.
If you mean arguments that assume some specific sort of realism (e.g., that every moral claim in the New Testament is true, or that the best thing to do is whatever gives the greatest expected excess of pleasure over pain) then (1) these will have no more force for a realist who doesn’t accept that particular kind of realism than for a nonrealist and (2) they will have as much force for a nonrealist who embraces the same moral system (not very common for divine-command theories, I guess, but there are definitely nonrealist utilitarians).
Again: I would like to see a concrete example of how this is supposed to work. You say “any realist argument” but it seems to me that that’s obviously wrong for the reason I’ve already given above: “you shouldn’t torture kittens because hedonistic utilitarianism is objectively right and torturing kittens produces net excess suffering” is a realist argument, but it is exactly paralleled by “you shouldn’t torture kittens because you are a hedonistic utilitarian, and torturing kittens produces net excess suffering” which is a perfectly respectable argument to make to a nonrealist hedonistic utilitarian.
Of course I agree that I can’t be completely sure that there are no good realist arguments (whatever exactly you mean by that), or indeed of anything else. If a genuinely strong argument for moral realism comes along, I hope I’ll see its merits and be convinced. I’m not sure what I’ve said to make you think otherwise.
It seems to me that your last paragraph amounts to a wholehearted embrace of moral nonrealism. If moral realism versus nonrealism is something we are constructing, something we could choose to be one way or the other according to what gives the better outcomes—why, then, in fact moral realism is false. (Because if it is true, then we don’t have the freedom to choose to believe something else in pursuit of better outcomes, at least not if we first and foremost want our beliefs to be true rather than false.)
I sense that there may have been a bit of a miscommunication. I don’t think that constructivism per se is crazy—I think it’s wrong, but it’s held by smart respectable people. It’s cultural relativism that’s held by no-one reasonable—the idea that, if society approves of vicious torture, it’s okay to torture people is crazy. This is one reason why there are virtually no contemporary defenders of cultural relativism. Also, I’m not so sure that I’m right—I’m 85% confident in moral realism and 70% confident in non-physicalism!
Moral relativism does not necessarily entail that if society approves of torture, then torture is “okay.” It only entails that it’s okay relative to that culture’s moral standards. But it does not follow that other individuals or cultures must also think it’s okay. They can think it’s not okay.
Relativism holds that moral claims are true or false relative to the standards of individuals or groups. So a claim like “torture is not wrong” would mean something like “torture is not inconsistent with our culture’s moral standards.” If it isn’t inconsistent with a culture’s moral standards, the statement would be trivially true. Furthermore, an appraiser relativist does not have to tolerate another individual or culture with different moral standards acting in accordance with those moral standards. At best, that follows only from certain forms of agent relativism, which hold that an action is morally right or wrong relative to the standards of the agent performing the act (or that agent’s culture). As Gowans notes in the SEP entry on agent and appraiser relativism:
”[...] that to which truth or justification is relative may be the persons making the moral judgments or the persons about whom the judgments are made. These are sometimes called appraiser and agent relativism respectively. Appraiser relativism suggests that we do or should make moral judgments on the basis of our own standards, while agent relativism implies that the relevant standards are those of the persons we are judging (of course, in some cases these may coincide). Appraiser relativism is the more common position, and it will usually be assumed in the discussion that follows.”
Are you rejecting agent relativism, appraiser relativism, or both with your example of torture?
As far as most philosophers not being relativists: this isn’t to say you’re mistaken (since that’s also my impression) but what are you basing that conclusion off of?
I agree relativism doesn’t entail that—cultural relativism does, however. Cultural relativism holds that right means approved of by my culture. This applies to both appraiser and agent relativism—as long as someone thinks something is right just because it’s supported by society, it will have a similar reductio.
Ethics teachers report that their classes consist almost entirely of relativists, and they have to start the course by putting a preliminary case for realism, just to get the students to realise there is more than one option.
Yes, and, in addition to that, the best current studies on how nonphilosophers think about these issues find that across a variety of paradigms, respondents in the US tended to favor antirealism at a ratio of about 3:1, with most endorsing some type of relativism. See Pölzler and Wright (2020). In other words, when given the option to endorse a variety of metaethical positions, about 75% of the respondents in this study favored some type of antirealism.
Note that P&W’s studies relied on online samples from a population that is disproportionately nonreligious, and student samples, which are disproportionately more inclined towards relativism (see Beebe & Sackris, 2016), so they are probably not representative of the United States population as a whole.
References
Beebe, J. R., & Sackris, D. (2016). Moral objectivism across the lifespan. Philosophical Psychology, 29(6), 912-929.
Pölzler, T., & Wright, J. C. (2020). Anti-realist pluralism: A new approach to folk metaethics. Review of Philosophy and Psychology, 11(1), 53-82.
The problem isn’t that he’s overly sure about “contentious topics.” These are easy questions that people should be sure about. The problem is that he’s sure in the wrong direction.
They are not easy questions, and if you think they are, you don’t understand the subject. If a subject has five counterarguments for every argument, as philosophy does, the less you know, the more any individual claim seems plausible.
Incidentally, I am unable to guess what you think the one true ethics is.
Can you clarify which questions you take to be easy? I’m not necessarily disagreeing. I’m trying to get clear on what you take to be easy questions, and what you take the answer to be.
On the question of morality, objective morality is not a coherent idea. When people say “X is morally good,” it can mean a few things:
Doing X will lead to human happiness
I want you to do X
Most people want you to do X
Creatures evolving under similar conditions as us will typically develop a preference for X
If you don’t do X, you’ll be made to regret it
etc...
But believers in objective morality will say that goodness means more than all of these. It quickly becomes clear that they want their own preferences to be some kind of cosmic law, but they can’t explain why that’s the case, or what it would even mean if it were.
On the question of consciousness, our subjective experiences are fully explained by physics.
The best argument for this is that our speech is fully explained by physics. Therefore physics explains why people say all of the things they say about consciousness. For example, it can explain why someone looks at a sunset and says, “This experience of color seems to be occurring on some non-physical movie screen.” If physics can give us a satisfying explanation for statements like that, it’s safe to say that it can dissolve any mysteries about consciousness.
Same here. Yet what I’ve found is that philosophers often make claims about other people’s experiences, but don’t bother to ask anyone or gather data on what other people report about their experiences. Hence the importance of experimental philosophy.
You’ll get no disagreement from me. I’m a proponent of the view that standard accounts of moral realism are typically either unintelligible (non-naturalist accounts usually, or any accounts that maintain that there are irreducibly normative facts, or categorical reasons, or external reasons, etc.), or trivial (naturalist realist accounts that reduce moral facts to descriptive claims that have normative authority).
Surprisingly, the claim that moral realism isn’t coherent is not popular in contemporary metaethics and I almost never see anyone arguing for it, aside from myself, so it’s nice to see someone make a similar claim.
Here is what a reasonable take on moral relativism might look like: the example from Sean Carroll quoted above, from https://www.preposterousuniverse.com/podcast/2022/12/05/ama-december-2022/
Consider learning from the masters.
That should probably read Humean.
You use the logic “A->B, B is unpleasant, hence A is false”.
No, I use the logic “thing needs additional component to work”. My approach is based on replacing is-true with is-useful.
Again, that isn’t the objection. The objection is that such judgements achieve nothing.
My goal with the remark is to accurately characterize relativism, not to defend it. If someone wants to object to relativism on the grounds that it doesn’t achieve anything, that’s orthogonal to the point I was making. I’m not really sure I understand the objection, though. When you say the judgments achieve nothing, can you clarify what you mean? If I judge others as doing something wrong, I’m not sure why it would be an objection to tell me that this doesn’t achieve anything. Would it avoid the objection by achieving something in particular? If so, what?
Ethics is supposed to do things, not be an ivory tower approach.
The symmetry problem, the fact that every relativist can equally criticise every other, is a bug, not a feature.
Alice: “Stop that, it’s wrong-for-me!” Bob: “It’s OK by me, so I’m going to carry on.”
Etc., ad infinitum.
If there is no reasoned way to resolve a dispute, force will take the place of reason. In fact, it’s a straw man to say that the realist objection to relativism is that relativists can’t criticise. The actual point is that it is in vain: no relativist has a motivation to change their mind.
This is a common feature of moral disputes even when no relativism is involved. Compare:
1. “You shouldn’t do that.” “It’s fine according to my values, and that’s all that matters.”
2. “You shouldn’t do that.” “Yes I should; you’re wrong about morality.”
If there’s an important difference between these that makes 1 problematic and 2 not, I’m failing to see it. In practice, the way you convince someone to change their behaviour is some combination of (a) appealing to moral ideas they do agree with you about and (b) influencing them not-explicitly-rationally to change their values (e.g., by exposing them to people they currently condemn so that they can see for themselves that they’re decent human beings). And both of these work equally well (or badly) whether either or both of the parties are moral realists.
1 is necessarily subjective, and 2 isn’t.
Maybe in normie-land, but in philosophy you can go up meta levels.
Yes, 1 is necessarily subjective and 2 isn’t. But since what you were trying to do is to show that subjectivism is bad, it’s not really on to take “it’s subjective!” as a criticism.
Philosophers and other intellectual sorts may indeed be more open than normies to rational persuasion in matters of ethics. (So probably more of (a) and less of (b).) They’re also not much given to resolving their disagreements by brute force, realist or not, relativist or not, so your concern that “force will take the place of reason” doesn’t seem very applicable to them. Is there any evidence that philosophers who are moral realists are more readily persuaded to change their ethical positions than philosophers who are moral nonrealists? For what it’s worth, my intuition expects not.
I’ve already given the argument against subjectivism.
Your argument was that for subjectivists “such judgements achieve nothing” on the grounds that “every relativist can equally criticise every other” because when criticized someone can say “It’s OK by me, so I’m going to carry on”, so that “force will take the place of reason” since “no relativist has a motivation to change their mind”.
I objected that this argument actually applies just as much to moral realists, the only difference being that the response changes from “It’s OK by me” to “It’s OK objectively”. No one is going to be convinced just by being told “X is wrong”; you have to offer some sort of argument starting from premises they share, and that’s exactly as true whether the people involved are realists or not, subjectivists or not, relativists or not. (Or, in either case, you can try to persuade by not-explicitly-rational means like just showing them the consequences of their alleged principles, or making them personally acquainted with people they are inclined to condemn, or whatever; this, too, works or fails just the same whether anyone involved is objectivist or subjectivist.)
When I made this objection, your reply was that “It’s OK by me” is “necessarily subjective” and “It’s OK objectively” isn’t. But if your argument against subjectivism depends on its being bad for something to be subjective, then it is a circular argument.
Maybe that’s not what you meant. Maybe you were just doubling down on the claim that being “necessarily subjective” means there’s no hope of convincing anyone to change their moral judgements. But that’s exactly the thing I’m disagreeing with, and you’re not offering any counterargument by merely reiterating the claim I’m disagreeing with.
Obviously they are not, and that was not my argument.
I know.
My argument was:-
Yeah, that was your argument originally. But when I explained why I didn’t buy it you switched to “1 is necessarily subjective, and 2 isn’t” as if being subjective is known to be a fatal problem—but the question at issue is precisely whether being subjective is a problem or not!
Anyway: Anyone can equally criticize anyone, relativist or not, subjectivist or not, realist or not. Can you give some actual, reasonably concrete examples of moral disagreements in which moral nonrealism makes useful discussion impossible or pointless or something, and where in an equivalent scenario involving moral realists progress would be possible?
If I try to imagine such an example, the sort of thing I come up with goes like this. X and Y are moral nonrealists. X is torturing kittens. Y says “Stop that! It’s wrong!” X says “Not according to my values.” And then, if I understand you aright, Y is supposed to give up in despair because “every relativist can equally criticise every other” or something. But in practice, (1) Y need not give up, because maybe there are things in X’s values that Y thinks actually lead to the conclusion that one shouldn’t torture kittens, and (2) in a parallel scenario involving moral realists, the only difference is that X just says “No it isn’t”, and if Y wants not to give up here then they have to do the same as in the nonrealist scenario: find things X agrees with from which one can get to “don’t torture kittens”. And all the arguments are just the same in the two cases, except that in one Y has to be explicit about where they’re explicitly appealing to some potentially controversial matter of values. This is, it seems to me, not a disadvantage. (Those controversial matters are just as controversial for moral realists.)
Perhaps this isn’t the kind of scenario you have in mind. Or perhaps there’s some specific kind of argument you think realist-Y can make that might actually convince realist-X, that doesn’t have a counterpart in the nonrealist version of the scenario. If so, I’m all ears: show me the details!
I can think of one kind of scenario where progress is easier for realists. Kinda. Suppose X and Y are “the same kind” of moral realist: e.g., they are both divine command theorists and they belong to the same religion, or they are both hedonistic act-utilitarians, or something. In this case, they should be able to reduce their argument about torturing kittens to a more straightforwardly factual argument about what their scriptures say or what gives who how much pleasure. But this isn’t really about realism versus nonrealism. If we imagine the nearest nonrealist equivalents of these guys, then we find e.g. that X and Y both say “What I choose to value is maximizing the net pleasure minus pain in the world”—and then, just as if they were realists, X and Y can in principle resolve their moral disagreement by arguing about matters of nonmoral fact. And if we let X and Y remain realists, but have them be “of different kinds”—maybe X is a divine command theorist and Y is a utilitarian—then they can be as utterly stuck as any nonrealists could be. Y says: but look, torturing kittens produces all this suffering! X says: so what? suffering has nothing to do with value; the gods have commanded that I torture kittens. And the difficulty they have in making progress from there is exactly the same sort of difficulty as their nonrealist equivalents would have.
(I remark that “It would be awful if X were true, therefore X is false” is not a valid form of argument, so even if you are correct about moral nonrealism making it impossible or futile to argue about morality that wouldn’t be any reason to disbelieve moral realism. But I don’t think you are in fact correct about it.)
Only in the ultimate clown universe where there are no facts or rules.
But if those things are subjective, the same problem re-applies.
Any realist argument that could do that, so long as there is such a thing. I think your real objection is that there are no good realist arguments. But you can’t be completely sure of that. If there is a 1% chance of a successful realist argument, then rational debaters who want to converge on the truth should take that chance, rather than blocking it off by assuming subjectivism.
If you assume subjectivism, you are guaranteed not to get onto a realist argument. If you assume realism, there is a possibility, but not a guarantee, of getting onto a realist solution.
It’s entirely valid if you are constructing something. Bridges that fall down are awful, so don’t construct them that way.
I think that when you say “if those things are subjective, the same problem re-applies” you are either arguing in a circle, or claiming something that’s just false.
Suppose X is a moral nonrealist (but not a nihilist: he does have moral values, he just doesn’t think they’re built into the structure of the universe somehow), and he’s doing something that actually isn’t compatible with his moral values but he hasn’t noticed. Crudely simple toy example for clarity: he’s torturing kittens because he’s a utilitarian and enjoys torturing kittens, but he somehow hasn’t considered the kittens’ suffering at all in his moral reckoning. Y (who, let’s suppose, is also a moral nonrealist, though it doesn’t particularly matter) points out that the kittens are suffering terribly. X thinks about it for a while and agrees that indeed his values say he shouldn’t torture kittens, and reluctantly stops doing it.
This seems to me a perfectly satisfactory way for things to go, and in particular it is no less satisfactory than if X is a moral realist who believes that hedonistic utilitarianism is an objective truth and stops torturing kittens because Y convinces him that the objective truth of hedonistic utilitarianism implies the objective truth that one shouldn’t torture kittens, rather than “merely” that his own acceptance of hedonistic utilitarianism implies that he shouldn’t torture kittens.
“Oh, but instead of being convinced X could just say: meh, maybe you’re right but who cares? And then Y will have no good arguments.” Sure. But that’s an argument not against moral nonrealism but against moral nihilism: against not actually having any moral values of any sort at all.
“Oh, sure, X may be convinced, but that doesn’t count because it wasn’t a realist argument. Only realist arguments count.” Well, then your argument is perfectly circular: nonrealism is bad because nonrealists can’t make realist arguments. And, sure, I will gladly concede that if you take it as axiomatic that nonrealism is bad then you can conclude that nonrealism is bad, but so what?
No, my real objection is not that there are no good realist arguments. I’m not sure quite what you mean by that phrase, though.
If you mean arguments that start from only nonmoral premises and deduce moral truths then as it happens I don’t believe there are any; if there are then indeed moral realism is correct; but, also, if there are then they should have as much force for an intelligent and openminded nonrealist (who will, on understanding the arguments, stop being a nonrealist) as for a realist.
If you mean arguments that assume realism but not anything more specific then I rather doubt that that assumption buys you anything, though I’m willing to be shown the error of my ways. At any rate, I can’t see how that assumption is ever going to be any use in, say, arguing that X shouldn’t be torturing kittens.
If you mean arguments that assume some specific sort of realism (e.g., that every moral claim in the New Testament is true, or that the best thing to do is whatever gives the greatest expected excess of pleasure over pain) then (1) these will have no more force for a realist who doesn’t accept that particular kind of realism than for a nonrealist and (2) they will have as much force for a nonrealist who embraces the same moral system (not very common for divine-command theories, I guess, but there are definitely nonrealist utilitarians).
Again: I would like to see a concrete example of how this is supposed to work. You say “any realist argument” but it seems to me that that’s obviously wrong for the reason I’ve already given above: “you shouldn’t torture kittens because hedonistic utilitarianism is objectively right and torturing kittens produces net excess suffering” is a realist argument, but it is exactly paralleled by “you shouldn’t torture kittens because you are a hedonistic utilitarian, and torturing kittens produces net excess suffering” which is a perfectly respectable argument to make to a nonrealist hedonistic utilitarian.
Of course I agree that I can’t be completely sure that there are no good realist arguments (whatever exactly you mean by that), or indeed of anything else. If a genuinely strong argument for moral realism comes along, I hope I’ll see its merits and be convinced. I’m not sure what I’ve said to make you think otherwise.
It seems to me that your last paragraph amounts to a wholehearted embrace of moral nonrealism. If moral realism versus nonrealism is something we are constructing, something we could choose to be one way or the other according to what gives the better outcomes—why, then, in fact moral realism is false. (Because if it is true, then we don’t have the freedom to choose to believe something else in pursuit of better outcomes, at least not if we first and foremost want our beliefs to be true rather than false.)
I sense that there may have been a bit of a miscommunication. I don’t think that constructivism per se is crazy—I think it’s wrong, but it’s held by smart respectable people. It’s cultural relativism that’s held by no-one reasonable—the idea that, if society approves of vicious torture, it’s okay to torture people is crazy. This is one reason why there are virtually no contemporary defenders of cultural relativism. Also, I’m not so sure that I’m right—I’m 85% confident in moral realism and 70% confident in non-physicalism!
Moral relativism does not necessarily entail that if society approves of torture, then torture is “okay.” It only entails that it’s okay relative to that culture’s moral standards. But it does not follow that other individuals or cultures must also think it’s okay. They can think it’s not okay.
Relativism holds that moral claims are true or false relative to the standards of individuals or groups. So a claim like “torture is not wrong” would mean something like “torture is not inconsistent with our culture’s moral standards.” If it isn’t inconsistent with a culture’s moral standards, the statement would be trivially true. Furthermore, an appraiser relativist does not have to tolerate another individual or culture with different moral standards acting in accordance with those standards. At best, such tolerance is entailed only by certain forms of agent relativism, which hold that an action is morally right or wrong relative to the standards of the agent performing the act (or that agent’s culture). As Gowans notes in the SEP entry on agent and appraiser relativism:
“[...] that to which truth or justification is relative may be the persons making the moral judgments or the persons about whom the judgments are made. These are sometimes called appraiser and agent relativism respectively. Appraiser relativism suggests that we do or should make moral judgments on the basis of our own standards, while agent relativism implies that the relevant standards are those of the persons we are judging (of course, in some cases these may coincide). Appraiser relativism is the more common position, and it will usually be assumed in the discussion that follows.”
Are you rejecting agent relativism, appraiser relativism, or both with your example of torture?
As far as most philosophers not being relativists: this isn’t to say you’re mistaken (since that’s also my impression) but what are you basing that conclusion off of?
I agree relativism doesn’t entail that—cultural relativism does, however. Cultural relativism holds that right means approved of by my culture. This applies to both appraiser and agent relativism—as long as someone thinks something is right just because it’s supported by society, it will have a similar reductio.
What’s the reductio, exactly?
Ethics teachers report that their classes consist almost entirely of relativists, and they have to start the course by putting a preliminary case for realism, just to get the students to realise there is more than one option.
Yes, and, in addition to that, the best current studies on how nonphilosophers think about these issues find that across a variety of paradigms, respondents in the US tended to favor antirealism at a ratio of about 3:1, with most endorsing some type of relativism. See Pölzler and Wright (2020). In other words, when given the option to endorse a variety of metaethical positions, about 75% of the respondents in this study favored some type of antirealism.
Note that P&W’s studies relied on online samples from a population that is disproportionately nonreligious, and student samples, which are disproportionately more inclined towards relativism (see Beebe & Sackris, 2016), so they are probably not representative of the United States population as a whole.
References
Beebe, J. R., & Sackris, D. (2016). Moral objectivism across the lifespan. Philosophical Psychology, 29(6), 912-929.
Pölzler, T., & Wright, J. C. (2020). Anti-realist pluralism: A new approach to folk metaethics. Review of Philosophy and Psychology, 11(1), 53-82.
The problem isn’t that he’s overly sure about “contentious topics.” These are easy questions that people should be sure about. The problem is that he’s sure in the wrong direction.
They are not easy questions, and if you think they are, you don’t understand the subject. If a subject has five counterarguments for every argument, as philosophy does, then the less you know, the more plausible any individual claim seems.
Incidentally, I am unable to guess what you think the one true ethics is.
Can you clarify which questions you take to be easy? I’m not necessarily disagreeing. I’m trying to get clear on what you take to be easy questions, and what you take the answer to be.
On the question of morality, objective morality is not a coherent idea. When people say “X is morally good,” it can mean a few things:
Doing X will lead to human happiness
I want you to do X
Most people want you to do X
Creatures evolving under similar conditions as us will typically develop a preference for X
If you don’t do X, you’ll be made to regret it
etc...
But believers in objective morality will say that goodness means more than all of these. It quickly becomes clear that they want their own preferences to be some kind of cosmic law, but they can’t explain why that’s the case, or what it would even mean if it were.
On the question of consciousness, our subjective experiences are fully explained by physics.
The best argument for this is that our speech is fully explained by physics. Therefore physics explains why people say all of the things they say about consciousness. For example, it can explain why someone looks at a sunset and says, “This experience of color seems to be occurring on some non-physical movie screen.” If physics can give us a satisfying explanation for statements like that, it’s safe to say that it can dissolve any mysteries about consciousness.
I’m not trying to explain other people’s reports, I’m trying to explain my own experience.
Same here. Yet what I’ve found is that philosophers often make claims about other people’s experiences, but don’t bother to ask anyone or gather data on what other people report about their experiences. Hence why experimental philosophy is important.
Thanks for clarifying.
You’ll get no disagreement from me. I’m a proponent of the view that standard accounts of moral realism are typically either unintelligible (non-naturalist accounts usually, or any accounts that maintain that there are irreducibly normative facts, or categorical reasons, or external reasons, etc.), or trivial (naturalist realist accounts that reduce moral facts to descriptive claims that have normative authority).
Surprisingly, the claim that moral realism isn’t coherent is not popular in contemporary metaethics and I almost never see anyone arguing for it, aside from myself, so it’s nice to see someone make a similar claim.