A moral non-realist can have moral theories in the "If, then" form. If you value A, B, C, then you value D.
If you’re a paper clip maximizer, then …
Except that since those are simply hypothetical imperatives, the Moral Non-Realist won’t see the need to call these theories ‘moral’ in nature. The Error Theorist agrees that if you want A then you should do B, but he wouldn’t call that a theory of morality.
There are all kinds of preferences, and distinguishing moral preferences from other types of preferences is still useful, even if you don’t believe that those preferences are commands from existence.
The Error Theorist might not call that a theory of morality. My reply to him is that what others call moral preferences differ in practice from other kinds of preferences. Treating them all the same is throwing out the conceptual baby with the bathwater.
And others, perhaps you, might not want to call these theories “moral” either, because you seem to want “imperatives”, and my account of morality doesn’t include imperatives from the universe, or anything else.
The problem is that the line between what has felt like a “moral” preference and what has felt like some other kind of preference has been different in different social contexts. There may not even be agreement in a particular culture.
For example, some folks think an individual’s sexual preferences are “moral preferences,” such that a particular preference can be immoral. Other folks think a sexual preference is more like a gastric preference. Some people like broccoli, some don’t. Good and evil don’t enter into that discussion at all.
If the error theory were false, I would expect the line dividing different types of preferences to be more stable over time, even if value drift caused moral preferences to change over time. In other words, the Aztecs thought human sacrifice was good; we now think it is evil. But the question has always been understood as a moral question. I’m asserting that some questions have not always been seen as “moral” questions, and the movement of that line is evidence for the error theory.
The line between “truth” and “belief” is also not stable across cultures.
The line between “true” and “not true” is different in different cultures? I wasn’t aware that airplanes don’t work in China.
I meant in the same sense that you meant the statement about cultures, i.e., if you ask an average member of the culture, you’ll get different answers for what is true depending on the culture.
I was talking about community consensus, not whatever nonsense is being spouted by the man-on-the-street.
As you noted, the belief of the average person is seldom a reliable indicator (or even all that coherent). That’s why we don’t measure a society’s scientific knowledge that way.
Ok, my point still stands.
That’s still a moral theory.
Which was the point I was making.
“A moral non-realist can have moral theories …” So I presented the form of the moral theory a moral non-realist could have.
Sorry, I was in a hurry when I posted the grandparent and was unclear:
Specifically my point was that the form of extreme be-yourself-ism implicit in your statement is still a moral theory, one that would make statements like:
“If you’re a paper clip maximizer, then maximize paperclips.”
“If you’re a Nazi, kill Jews.”
“If you’re a liberal, try to stop the Nazis.”
Those aren’t accurate statements of the kinds of moral theories I was speaking of.
I gave the example:
That’s not an imperative, it’s an identification of the relationship between different values, in this case that A, B, C imply D.
Ok, that’s not a moral theory unless you’re sneaking in the statements I made in the parent as connotations.
To me, a theory that identifies a moral value implied by other moral values would count as a moral theory.
What kind of theory do you want to call it?
I think I agree with Eugine_Nier that it isn’t a moral theory to be able to draw conclusions. One doesn’t need to commit to any ethical or meta-ethical principles to notice that Clippy’s preferences will be met better if Clippy creates some paperclips.
At the level of abstraction we are talking in now, moral theories exist to tell us what preferences to have, and meta-ethical theories tell us what kinds of moral theories are worth considering.
Does one need to commit to a theory to have one?
It sounds to me like you think a person only has a moral theory when the moral theory has them.
For you, under your moral theories. Not for me. I’m happy to have theories that tell me what moral values I do have, and what moral values other people have.
What do you want to call those kinds of theories?
Obviously not—but it isn’t your moral theory that tells you how Clippy will maximize its preferences.
Alice the consequentialist and Bob the deontologist disagree about moral reasoning. But Bob does not need to become a consequentialist to predict what Alice will maximize, and vice versa.
Reasoning? More generally, thinking (and caring about) the consequences of actions is not limited to consequentialists. A competent deontologist knows that pointing guns at people and pulling the trigger tends to cause murder—that’s why she tends not to do that.
I should be working now, but I don’t want to. So I’m here, relaxing and discussing philosophy. But I am committing a minor wrong in that I am acting on a preference that is inconsistent with my moral obligation to support my family (as I see my obligations). Does that type of inconsistency between preference and right action never happen to you?