I reject the idea that I’m confused at all.

Tons of people have said “Ethical realism is false”, for a very long time, without needing to invent the term “meta-ethics” to describe what they were doing. They just called it ethics. Often they went beyond that and offered systems they thought it was a good idea to adopt even so, and they called that ethics, too. None of that was because anybody was confused in any way.
“Meta-ethics” lies within the traditional scope of ethics, and it’s intertwined enough with the fundamental concerns of ethics that it’s not really worth separating it out… at least not often enough to call it a separate subject. Maybe occasionally enough to use the term once in a great while.
Ethics (in philosophy as opposed to social sciences) is, roughly, “the study of what one Should Do(TM) (or maybe how one Should Be) (and why)”. It’s considered part of that problem to determine what meanings of “Should”, what kinds of Doing or Being, and what kinds of whys, are in scope. Narrowing any of those without acknowledging what you’re doing is considered cheating. It’s not less cheating if you claim to have done it under some separate magisterium that you’ve named “meta-ethics”. You’re still narrowing what the rest of the world has always called ethical problems.
When you say “ethical realism is false”, you’re making a meta-ethical statement. You believe this statement is true, hence you perforce must believe in meta-ethical realism.
The phrase “Ethical realism”, as normally used, refers to an idea about actual, object-level prescriptions: specifically the idea that you can get to them by pointing to some objective “Right stuff” floating around in a shared external reality. I’m actually using it kind of loosely, in that I really should not only deny that there’s any objective external standard, but also separately deny that you can arrive at such prescriptions in a purely analytic way. I don’t think that second one is usually considered to be part of ethical realism, technically. Not only that, but I’m using the phrase to allude to other similar things that also aren’t technically ethical realism (like the one described below).
But none of the things I’m talking about or alluding to refers to itself. In practice nobody gets confused about that, even without resorting to the term “meta-ethics”, and definitely without talking about it like it’s a really separate field.
To go ahead and use the term without accepting the idea that meta-ethics qualifies as a subject: the meta-ethical statement (technically I guess a degree 2 meta-ethical statement) that “ethical realism is false” is pretty close to analytic, in that even if you point to some actual thing in the world that you claim implies the Right ways to Be or Do, I can always deny that whatever you’re pointing to matters… because there’s no predefined standard for standards either. God can come down from heaven and say “This is the Way”, and you can simultaneously prove that it leads to infinite universal flourishing, and also provide polls proving within epsilon that it’s also a universal human intuition… and somebody can always deny that any of those makes it Right(TM).
But even if we were talking about a more ordinary sort of matter of fact, even if what you were looking for was not “official” ethical realism of the form “look here, this is Obviously Right as a brute part of reality”, but “here’s a proof that any even approximately rational agent[1] would adopt this code in practice”, then (a) that’s not what ethical realism means, (b) there’s a bunch of empirical evidence against it, and essentially no evidence that it’s true, and (c) if it is true, we obviously have a whole lot of not-approximately-rational agents running around, which sharply limits the utility of the fact. Close enough to false for any practical purpose.
[1] … under whatever formal definition of rationality you happened to be trying to get people to accept, perhaps under the claim that that definition was itself Obviously Right, which is exactly the kind of cheating I’m complaining about…
I’m using the term “meta-ethics” in the standard sense of analytic philosophy. Not sure what bothers you so greatly about it.
I find your manner of argumentation quite biased: you preemptively defend yourself by radical skepticism against any claim you might oppose, but when it comes to a claim you support (in this case “ethical realism is false”), suddenly this claim is “pretty close to analytic”. The latter maneuver seems to me the same thing as the “Obviously Right” you criticize later.
Also, this brand of radical skepticism is an example of the Charybdis I was warning against. Of course you can always deny that anything matters. You can also deny Occam’s razor or the evidence of your own eyes or even that 2+2=4. After all, “there’s no predefined standard for standards”. (I guess you might object that your reasoning only applies to value-related claims, not to anything strictly value-neutral: but why not?)
Under the premises of radical skepticism, why are we having this debate? Why did you decide to reply to my comment? If anyone can deny anything, why would any of us accept the other’s arguments?
To have any sort of productive conversation, we need to be at least open to the possibility that some new idea, if you delve deeply and honestly into understanding it, might become persuasive through the combined force of the intuitions it engenders and its inner logical coherence. To deny that possibility preemptively is to close off the path to any progress.
As to your “(b) there’s a bunch of empirical evidence against it” I honestly don’t know what you’re talking about there.
P.S.
I wish to also clarify my positions on a slightly lower level of meta.
First, “ethics” is a confusing term because, on my view, the colloquial meaning of “ethics” is inescapably intertwined with how human societies negotiate over norms. On the other hand, I want to talk purely about individual preferences, since I view them as more fundamental.
We can still distinguish between “theories of human preferences” and “metatheories of preferences”, similarly to the distinction between “ethics” and “meta-ethics”. Namely, “theories of human preferences” would have to describe actual human preferences, whereas “metatheories of preferences” would only have to describe what it even means to talk about someone’s preferences at all (whether this someone is human or not: among other things, such a metatheory would have to establish what kinds of entities have preferences in a meaningful sense).
The relevant difference between the theory and the metatheory is that Occam’s razor is only fully applicable to the latter. In general, we should expect simple answers to simple questions. “What are human preferences?” is not a simple question, because it references the complex object “human”. On the other hand, “what does it mean to talk about preferences?” does seem to me to be a simple question. As an analogy, “what is the shape of Africa?” is not a simple question because it references the specific continent of Africa on the specific planet Earth, whereas “what are the general laws of continent formation?” is at least a simpler question (perhaps not quite as simple, since the notion of “continent” is not so fundamental).
Therefore, I expect there to be a (relatively) simple metatheory of preferences, but I do not expect there to be anything like a simple theory of human preferences. This is why this distinction is quite important.
Confining myself to actual questions...
I guess you might object that your reasoning only applies to value-related claims, not to anything strictly value-neutral: but why not?
Mostly because I don’t (or didn’t) see this as a discussion about epistemology.
In that context, I tend to accept in principle that I Can’t Know Anything… but then to fall back on the observation that I’m going to have to act like my reasoning works regardless of whether it really does; I’m going to have to act on my sensory input as if it reflected some kind of objective reality regardless of whether it really does; and, not only that, but I’m going to have to act as though that reality were relatively lawful and understandable regardless of whether it really is. I’m stuck with all of that and there’s not a lot of point in worrying about any of it.
That’s actually what I also tend to do when I actually have to make ethical decisions: I rely mostly on my own intuitions or “ethical perceptions” or whatever, seasoned with a preference not to be too inconsistent.
BUT.
I perceive others to be acting as though their own reasoning and sensory input looked a lot like mine, almost all the time. We may occasionally reach different conclusions, but if we spend enough time on it, we can generally either come to agreement, or at least nail down the source of our disagreement in a pretty tractable way. There’s not a lot of live controversy about what’s going to happen if we drop that rock.
On the other hand, I don’t perceive others to be acting nearly so much as though their ethical intuitions looked like mine, and if you distinguish “meta-intuitions” about how to reconcile different degree zero intuitions about how to act, the commonality is still less.
Yes, sure, we share a lot of things, but there’s also enough difference to have a major practical effect. There truly are lots of people who’ll say that God turning up and saying something was Right wouldn’t (or would) make it Right, or that the effects of an action aren’t dispositive about its Rightness, or that some kinds of ethical intuitions should be ignored (usually in favor of others), or whatever. They’ll mean those things. They’re not just saying them for the sake of argument; they’re trying to live by them. The same sorts of differences exist for other kinds of values, but disputes about the ones people tend to call “ethical” seem to have the most practical impact.
Radical or not, skepticism that you’re actually going to encounter, and that matters to people, seems a lot more salient than skepticism that never really comes up outside of academic exercises. Especially if you’re starting from a context where you’re trying to actually design some technology that you believe may affect everybody in ways that they care about, and especially if you think you might actually find yourself having disagreements with the technology itself.
As to your “(b) there’s a bunch of empirical evidence against it” I honestly don’t know what you’re talking about there.
Nothing complicated. I was talking about the particular hypothetical statement I’d just described, not about any actual claim you might be making[1].
I’m just saying that if there were some actual code of ethics[2] that every “approximately rational” agent would adopt[3], and we in fact have such agents, then we should be seeing all of them adopting it. Our best candidates for existing approximately rational agents are humans, and they don’t seem to have overwhelmingly adopted any particular code. That’s a lot of empirical evidence against the existence of such a code[4].
The alternative, where you reject the idea that humans are approximately rational, thus rendering them irrelevant as evidence, is the other case I was talking about where “we have a lot of not-approximately-rational agents”.
[1] I understand, and originally understood, that you did not say there was any stance that every approximately rational agent would adopt, and also that you did not say you were looking for such a stance. It was just an example of the sort of thing one might be looking for, meant to illustrate a fine distinction about what qualified as ethical realism.
[2] In the loose sense of some set of principles about how to act, how to be, how to encourage others to act or be, etc., blah blah blah.
[3] For some definition of “adopt”… to follow it, to try to follow it, to claim that it should be followed, whatever. But not “adopt” in the sense that we’re all following a code that says “it’s unethical to travel faster than light”, or even in the sense that we’re all following a particular code when we act as large numbers of other codes would also prescribe. If you’re looking at actions, then I think you can only sanely count actions done at least partially because of the code.
[4] As per footnote 3[5], I don’t think, for example, the fact that most people don’t regularly go on murder sprees is significant evidence of them having adopted a particular shared code. Whatever codes they have may share that particular prescription, but that doesn’t make them the same code.
[5] I’m sorry. I love footnotes. I love having a discussion system that does footnotes well. I try to be better, but my adherence to that code is imperfect…