Clearly there’s a group of people who dislike what I’ve said in this thread, as I’ve been downvoted quite a bit.
I’m not perfectly clear on why. My only position at any point has been this:
I see a universe which contains intelligent agents trying to fulfill their preferences. Then I see conversations about morality and ethics talking about actions being “right” or “wrong”. From the context and explanations, “right” seems to mean very different things. Like:
“Those actions which I prefer” or “Those actions which most agents in a particular place prefer” or “Those actions which fulfill arbitrary metric X”
Likewise, “wrong” inherits its meaning from whatever definition is given for “right”. It makes sense to me to talk about preferences. They’re important. If that’s what people are talking about when they discuss morality, then that makes perfect sense. What I do not understand is when people use the words “right” or “wrong” independently of any agent’s preferences. I don’t see what they are referring to, or what those words even mean in that context.
Does anyone care to explain what I’m missing, or if there’s something specific I did to elicit downvotes?
What I do not understand is when people use the words “right” or “wrong” independently of any agent’s preferences. I don’t see what they are referring to, or what those words even mean in that context.
Does anyone care to explain what I’m missing, or if there’s something specific I did to elicit downvotes?
I don’t know anything about downvotes, but I do think that there is a way of understanding ‘right’ and ‘wrong’ independently of preferences. But it takes a conceptual shift.
Don’t think of morality as a doctrine guiding you as to how to behave. Instead, imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
I do think that there is a way of understanding ‘right’ and ‘wrong’ independently of preferences...Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
Sociology? Psychology? Game theory? Mathematics? What does moral philosophy add to the sciences that is useful, that helps us to dissolve confusion and understand the nature of reality?
What does moral philosophy add to the sciences that is useful, that helps us to dissolve confusion and understand the nature of reality?
Moral philosophy, like all philosophy, does nothing directly to illuminate the nature of reality. What it does is to illuminate the nature of confusion.
How does someone who thinks that ‘morality’ is meaningless discuss the subject with someone who attaches meaning to the word? Answer: They talk to each other carefully and respectfully.
What do you call the subject matter of that discussion? Answer: Metaethics.
What do you call success in this endeavor? Answer: “Dissolving the confusion”.
Moral philosophy, like all philosophy, does nothing directly to illuminate the nature of reality. What it does is to illuminate the nature of confusion.
Moral philosophy does not illuminate the nature of confusion; it is the confusion. I am asking: what is missing, and what confusion is left, if you disregard moral philosophy and talk about right and wrong in terms of preferences?
I’m tempted to reply that what is missing is the ability to communicate with anyone who believes in virtue ethics or deontological ethics, and therefore doesn’t see how preferences are even involved. But maybe I am not understanding your point.
Perhaps an example would help. Suppose I say, “It is morally wrong for Alice to lie to Bob.” How would you analyze that moral intuition in terms of preferences? Whose preferences are we talking about here? Alice’s, Bob’s, mine, everybody else’s? For comparison purposes, also analyze the claim “It is morally wrong for Bob to strangle Alice.”
Due to your genetically hard-coded intuitions about appropriate behavior within groups of primates, your upbringing, cultural influences, rational knowledge about the virtues of truth-telling, and preferences involving the well-being of other people, you feel obliged to influence the interaction between Alice and Bob in a way that persuades Alice to do what you want, without her feeling inappropriately influenced by you, by framing your objection to certain behaviors as an appeal to a higher order of authority.
“It is morally wrong for Bob to strangle Alice.”
If you say, “I don’t want you to strangle Alice.”, Bob might reply, “I don’t care what you want!”.
If you say, “Strangling Alice might have detrimental effects on your other preferences.”, Bob might reply, “I assign infinite utility to the death of Alice!” (which might very well be the case for humans in a temporary rage).
But if you say, “It is morally wrong to strangle Alice.”, Bob might get confused and reply, “You are right, I don’t want to be immoral!” This is really a form of coercive persuasion, because when you say, “It is morally wrong to strangle Alice.”, you actually signal, “If you strangle Alice you will feel guilty.” It is a manipulative method that might make Bob say, “You are right, I don’t want to be immoral!”, when what he actually means is, “I don’t want to feel guilty!”
Primates don’t like to be readily controlled by other primates. To get them to do what you want you have to make them believe that, for some non-obvious reason, they actually want to do it themselves.
This sounds like you are trying to explain-away the phenomenon, rather than explain it. At the very least, I would think, such a theory of morality needs to make some predictions or explain some distinctions. For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
Complex influences, like your culture and upbringing. That’s also why some people don’t say that it is morally wrong to burn a paperback book while others are outraged by the thought. And those differences and similarities can be studied, among other fields, in terms of cultural anthropology and evolutionary psychology.
Tackling such questions needs a multidisciplinary approach. But moral philosophy shouldn’t be part of the solution because it is largely mistaken about cause and effect. Morality is an effect of our societal and cultural evolution, shaped by our genetic predisposition as primates living in groups. In this sense moral philosophy is a meme that is part of a larger effect and therefore can’t be part of a reductionist explanation of itself. The underlying causes of cultural norms and our use of language can be explained by the social and behavioural sciences, applied mathematics like game theory, computer science, and linguistics.
But rationality shouldn’t be part of the solution because it is largely mistaken about cause and effect. Rationality is an effect of our societal and cultural evolution, shaped by our genetic predisposition as primates living in groups. In this sense rationality is a meme that is part of a larger effect and therefore can’t be part of a reductionist explanation of itself.
However, these claims are false, so you have to make a different argument.
I’ve seen this sort of substitution-argument a few times recently, so I’ll take this opportunity to point out that arguments have contexts, and if it seems that an argument does not contain all the information necessary to support its conclusions (because directly substituting in other words produces falsehood), this is because words have meanings, steps are elided, and there are things true and false in the world. This does not invalidate those arguments! These elisions are in fact necessary to prevent each argument from being a re-derivation of human society from mathematical axioms. Arguers should try to be sensitive to the way in which the context of an argument may or may not change how that argument applies to other subjects. (A simple example: “You should not enter that tunnel because your truck is taller than the ceiling’s clearance” is a good argument only if the truck in question is actually taller than the ceiling’s clearance.) This especially applies when arguments are not meant to be formal, or in fact when they are not intended to be arguments.
These substitution arguments are quite a shortcut. The perpetrator doesn’t actually have to construct something that supports a specific point; instead, they can take an argument they disagree with, swap some words around, leave out any words that are inconvenient, post it, and if the result doesn’t make sense, the perpetrator wins!
Making a valid argument about why the substitution argument doesn’t make sense requires more effort than creating the substitution argument, so if we regard discussions here as a war of attrition, the perpetrator wins even if you create a well-reasoned reply to him.
Substitution arguments are garbage. I wish I knew a clean way to get rid of them. Thanks for identifying them as a thing to be confronted.
Cool, glad I’m not just imagining things! I think that sometimes this sort of argument can be valuable (“That person also has a subjective experience of divine inspiration, but came to a different conclusion”, frex), but I’ve become more suspicious of them recently—especially when I’m tempted to use one myself.
if it seems that an argument does not contain all the information necessary to support its conclusions (because directly substituting in other words produces falsehood), this is because words have meanings, steps are elided, and there are things true and false in the world. This does not invalidate those arguments!
Thing is, this is a general response to virtually any criticism whatsoever. And it’s often true! But it’s not always a terribly useful response. Sometimes it’s better to make explicit that bit of context, or that elided step.
Moreover it’s also a good thing to remember about the other guy’s argument next time you think his conclusions obviously do not follow from his (explicitly stated) premises—that is, next time you see what looks to you to be an invalid argument, it may not be invalid even if, strictly on a formal level, it is, precisely because you are not necessarily seeing everything the other guy is seeing.
So, it’s not just about substitutions. It’s a general point.
Thing is, this is a general response to virtually any criticism whatsoever. And it’s often true! But it’s not always a terribly useful response. Sometimes it’s better to make explicit that bit of context, or that elided step.
True! This observation does not absolve us of our eternal vigilance.
Moreover it’s also a good thing to remember about the other guy’s argument next time you think his conclusions obviously do not follow from his (explicitly stated) premises—that is, next time you see what looks to you to be an invalid argument, it may not be invalid even if, strictly on a formal level, it is, precisely because you are not necessarily seeing everything the other guy is seeing.
So, it’s not just about substitutions. It’s a general point.
Emphatically agreed.
For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
Guilt works here, for example. (But XiXiDu covered that.) Social pressure also. Veiled threat and warning, too. Signaling your virtue to others as well. Moral arguments are so handy that they accomplish all of these in one blow.
ETA: I’m not suggesting that you in particular are trying to guilt trip people, pressure them, threaten them, or signal. I’m saying that those are all possible explanations as to why someone might prefer to couch their arguments in moral terms: it is more persuasive (as Dark Arts) in certain cases. Though I reject moralist language if we are trying to have a clear discussion and get at the truth, I am not against using Dark Arts to convince Bob not to strangle Alice.
Perplexed wrote earlier:
Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
Sometimes you’ll want to explain why your punishment of others is justified. If you don’t want to engage Perplexed’s “moral realism”, then either you don’t think there’s anything universal enough (for humans, or in general) in it to be of explanatory use in the judgments people actually make, or you don’t think it’s a productive system for manufacturing (disingenuous yet generally persuasive) explanations that will sometimes excuse you.
Assuming I haven’t totally lost track of context here, I think I am saying that moral language works for persuasion (partially as Dark Arts), but is not really suitable for intellectual discourse.
Okay. Whatever he hopes is real (but you think is only confused), will allow you to form persuasive arguments to similar people. So it’s still worth talking about.
Virtue ethicists and deontologists merely express a preference for certain codes of conduct because they believe adhering to these codes will maximize their utility, usually via the mechanism of lowering their time preference.
ETA: And also, as XiXiDu points out, to signal virtue.
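To make the time-preference mechanism concrete, here is a minimal sketch in Python, with invented payoffs, of how a lower time preference (modeled as a higher discount factor) can make sticking to a code of conduct the utility-maximizing choice:

```python
# Invented payoffs: defecting from the code pays 10 once but ends cooperation;
# honoring it pays 3 per round indefinitely.
def present_value(payoff_per_round, discount):
    # Infinite discounted stream: p + p*d + p*d^2 + ... = p / (1 - d), for 0 <= d < 1.
    return payoff_per_round / (1.0 - discount)

defect_once = 10.0
cooperate_per_round = 3.0

for discount in (0.5, 0.9):  # higher discount factor = lower time preference
    coop = present_value(cooperate_per_round, discount)
    print(f"discount={discount}: cooperate={coop:.1f} vs defect={defect_once}")
# discount=0.5: cooperate=6.0  -> the impatient agent defects
# discount=0.9: cooperate=30.0 -> the patient agent keeps the code
```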
Upvoted because I strongly agree with the spirit of this post, but I don’t think moral philosophy succeeds in dissolving the confusion. So far it has failed miserably, and I suspect that it is entirely unnecessary. That is, I think this is one field that can be dissolved away.
imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Which metrics do I use to judge others?
There has been some confusion over the word “preference” in the thread, so perhaps I should use “subjective value”. Would you agree that the only tools I have for judging others are subjective values? (This includes me placing value on other people reaching a state of subjective high value.)
Or do you think there’s a set of metrics for judging people which has some spooky, metaphysical property that makes it “better”?
Or do you think there’s a set of metrics for judging people which has some spooky, metaphysical property that makes it “better”?
And why would that even matter as long as I am able to realize what I want without being instantly struck by thunder if I desire or do something that violates the laws of morality? If I live a happy and satisfied life of fulfilled preferences but constantly do what is objectively wrong, why exactly would that matter, to whom would it matter and why would I care if I am happy and my preferences are satisfied? Is it some sort of game that I am losing, where those who are the most right win? What if I don’t want to play that game, what if I don’t care who wins?
If I live a happy and satisfied life of fulfilled preferences but constantly do what is objectively wrong, why exactly would that matter,
Because it harms other people directly or indirectly. Most immoral actions have that property.
to whom would it matter
To the person you harm. To the victim’s friends and relatives. To everyone in the society which is kept smoothly running by the moral code which you flout.
and why would I care if I am happy and my preferences are satisfied?
Because you will probably be punished, and that tends to not satisfy your preferences.
Is it some sort of game that I am losing, where those who are the most right win?
If the moral code is correctly designed, yes.
What if I don’t want to play that game, what if I don’t care who wins?
Then you are, by definition, irrational, and a sane society will eventually lock you up as being a danger to yourself and everyone else.
Because it harms other people directly or indirectly. Most immoral actions have that property.
Begging the question.
To the person you harm. To the victim’s friends and relatives.
Either that is part of my preferences or it isn’t.
To everyone in the society which is kept smoothly running by the moral code which you flout.
Either society is instrumental to my goals or it isn’t.
Because you will probably be punished, and that tends to not satisfy your preferences.
Game theory? Instrumental rationality? Cultural anthropology?
If the moral code is correctly designed, yes.
If I am able to realize my goals, satisfy my preferences, don’t want to play some sort of morality game with agreed upon goals and am not struck by thunder once I violate those rules, why would I care?
Then you are, by definition, irrational...
What is your definition of irrationality? I wrote that if I am happy, able to reach all of my goals and satisfy all of my preferences while constantly violating the laws of morality, how am I irrational?
Also, what did you mean by … in response to “Because you will probably be punished, and that tends to not satisfy your preferences.”?
I think you mean that you should correctly predict the odds and disutility (over your life) of potential punishments, and then act rationally selfishly. I think this may be too computationally expensive in practice, and you may not have considered the severity of the (unlikely) event that you end up severely punished via a reputation for being an effectively amoral person.
Yes, we see lots of examples of successful and happy unscrupulous people in the news. But consider selection effects (that contradiction of conventional moral wisdom excites people and sells advertisements).
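Spelled out, the expected-value check being described looks something like this (all numbers invented):

```python
# Toy expected-value check of "cheat or don't cheat".
gain_from_cheating = 5.0    # immediate utility of the amoral act
p_exposed = 0.02            # small probability the act becomes known
reputation_cost = 400.0     # lifetime disutility of an amoral reputation

expected_gain = gain_from_cheating - p_exposed * reputation_cost
print(expected_gain)  # 5.0 - 8.0 = -3.0: the unlikely severe outcome dominates
```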
I meant that we already do have a field of applied mathematics and science that talks about those things, why do we need moral philosophy?
I am not saying that it is a clear-cut issue that we, as computationally bounded agents, should abandon moral language, or that we even would want to do that. I am not advocating reducing the complexity of natural language. But this community seems to be committed to reductionism, to minimizing vagueness, and to describing human nature in terms of causal chains. I don’t think that moral philosophy fits this community.
This community doesn’t talk about theology either, it talks about probability and Occam’s razor. Why would it talk about moral philosophy when all of it can be described in terms of cultural anthropology, sociology, evolutionary psychology and game theory?
This community doesn’t talk about theology either [...] Why would it talk about moral philosophy when all of it can be described in terms of cultural anthropology, sociology, evolutionary psychology and game theory?
It is a useful umbrella term—rather like “advertising”.
There’s nothing to dispute. You have a defensible position.
However, I think most humans have, as part of what satisfies them (they may not know it until they try it), the desire to feel righteous, which can most fully be realized with a hard-to-shake belief. For a rational person, moral realism may offer this without requiring tremendous self-delusion. (disclaimer: I haven’t tried this)
Is it worth the cost? Probably you can experiment. It’s true that if you formerly felt guilty and afraid of punishment, then deleting the desire to be virtuous (as much as possible) will feel liberating. In most cases, our instinctual fears are overblown in the context of a relatively anonymous urban society.
Still, reputation matters, and you can maintain it more surely by actually being what you present yourself as, rather than carefully (and eventually sloppily and over-optimistically) weighing each case in terms of odds of discovery and punishment. You could work on not feeling bad about your departures from moral perfection more directly, and then enjoy the real positive feeling-of-virtue (if I’m right about our nature), as well as the practical security. The only cost then would be lost opportunities to cheat.
It’s hard to know who to trust as having honest thoughts and communication on the issue, rather than presenting an advantageous image, when so much is at stake. Most people seem to prefer tasteful hypocrisy and tasteful hypocrites. Only those trying to impress you with their honesty, or those with whom you’ve established deep loyalties, will advertise their amorality.
What is your definition of irrationality? I wrote that if I am happy, able to reach all of my goals and satisfy all of my preferences while constantly violating the laws of morality, how am I irrational?
It’s irrational to think that the evaluative buck stops with your own preferences.
I’m claiming that there is a particular moral code which has the spooky game-theoretical property that it produces the most utility for you and for others. That is, it is the metric which is Pareto optimal and which is also a ‘fair’ bargain.
So you’re saying that there’s one single set of behaviors, which, even though different agents will assign drastically different values to the same potential outcomes, balances their conflicting interests to provide the most net utility across the group. That could be true, although I’m not convinced.
Even if it is, though, what the optimal strategy is will change if the net values across the group changes. The only point I have ever tried to make in these threads is that the origin of any applicable moral value must be the subjective preferences of the agents involved.
The reason any agent would agree to follow such a rule set is if you could demonstrate convincingly that such behaviors maximize that agent’s utility. It all comes down to subjective values. There exists no other motivating force.
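For concreteness, here is a minimal sketch of the property Perplexed is claiming, assuming a toy two-agent game with invented payoffs: an outcome that is Pareto optimal and also ‘fair’ in the Nash-bargaining sense of maximizing the product of gains over the disagreement point.

```python
# Invented payoffs (utility_A, utility_B) for four joint behaviors.
outcomes = {
    "both cooperate": (3, 3),
    "A exploits B": (5, 0),
    "B exploits A": (0, 5),
    "both defect": (1, 1),
}

def pareto_optimal(outs):
    # Keep outcomes that no other outcome weakly dominates.
    def dominated(u):
        return any(v != u and v[0] >= u[0] and v[1] >= u[1] for v in outs.values())
    return {name: u for name, u in outs.items() if not dominated(u)}

candidates = pareto_optimal(outcomes)   # drops "both defect"
disagreement = outcomes["both defect"]  # the no-bargain fallback point
fair = max(candidates.items(),
           key=lambda kv: (kv[1][0] - disagreement[0]) * (kv[1][1] - disagreement[1]))
print(fair)  # ('both cooperate', (3, 3)): Pareto optimal and a 'fair' bargain
```

Note that the sketch also illustrates the reply’s point: change the payoff numbers and both the Pareto set and the ‘fair’ outcome move with them.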
… what the optimal strategy is will change if the net values across the group changes.
True, but that may not be as telling an objection as you seem to think. For example, you might run into someone (not me!) who claims that the entire moral code is based on the ‘Golden Rule’ of “Do unto others as you would have others do unto you.” Tell that guy that moral behavior changes if preferences change. He will respond, “Well, duh! What is your point?”
Not to me. I didn’t downvote, and in any case I was the first to use the rude “duh!”, so if you were rude back I probably deserved it. Unfortunately, I’m afraid I still don’t understand your point.
Perhaps you were rude to those unnamed people who you suggest “do not recognize this”.
It’s easy to bristle when someone in response to you points out something you thought it was obvious that you knew. This happens all the time when people think they’re smart :)
I’m fond of including clarification like, “subjective values (values defined in the broadest possible sense, to include even things like your desire to get right with your god, to see other people happy, to not feel guilty, or even to “be good”).”
Some ways I’ve found to dissolve people’s language back to subjective utility:
If someone says something is good, right, bad, or wrong, ask, “For what purpose?”
If someone declares something immoral, unjust, unethical, ask, “So what unhappiness will I suffer as a result?”
But use sparingly, because there is a big reason many people resist dissolving this confusion.
Don’t think of morality as a doctrine guiding you as to how to behave. Instead, imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Yes! That’s a point that I’ve repeated so often to so many different people [not on LW, though] that I’d more-or-less “given up”—it began to seem as futile as swatting flies in summer. Maybe I’ll resume swatting now I know I’m not alone.
Don’t think of morality as a doctrine guiding you as to how to behave.
This is mainly how I use morality. I control my own actions, not the actions of other people, so for me it makes sense to judge my own actions as good or bad, right or wrong. I can change them. Judging someone else changes nothing about the state of the world unless I can persuade them to act differently.
Judging someone else changes nothing about the state of the world unless I can persuade them to act differently.
Avoiding a person (a) does not (necessarily) persuade them to act differently, but (b) definitely changes the state of the world. This is not a minor nitpicking point. Avoiding people is also called social ostracism, and it’s a major way that people react to misbehavior. It has the primary effect of protecting themselves. It often has the secondary effect of convincing the ostracized person to improve their behavior.
Then I would consider that a case where I could change their behaviour. There are instances where avoiding someone would bother them enough to have an effect, and other cases where it wouldn’t.
Avoiding people who misbehave will change the state of the world even if that does not affect their behavior. It changes the world by protecting you. You are part of the world.
it makes sense to judge my own actions as good or bad, right or wrong. I can change them.
Yes, but if you judge a particular action of your own to be ‘wrong’, then why should you avoid that action? The definition of wrong that I supply solves that problem. By definition if an action is wrong, then it is likely to elicit punishment. So you have a practical reason for doing right rather than doing wrong.
Furthermore, if you do your duty and reward and/or punish other people for their behavior, then they too will have a practical reason to do right rather than wrong.
Before you object “But that is not morality!”, ask yourself how you learned the difference between right and wrong.
ask yourself how you learned the difference between right and wrong.
It’s a valid point that I probably learned morality this way. I think that’s actually the definition of ‘preconventional’ morality: it’s based on reward/punishment. Maybe all my current moral ideas have roots in that childhood experience, but they aren’t covered by it anymore. There are actions that would be rewarded by most of the people around me, but which I avoid because I consider there to be a “better” alternative. (I should be able to think of more examples of this, but I guess one is laziness at work. I feel guilty if I don’t do the cleaning and maintenance that needs doing even though everyone else does almost nothing. I also try to follow a “golden rule”: if I don’t want something to happen to me, I won’t do it to someone else, even if the action is socially acceptable amidst my friends and wouldn’t be punished.)
I think that’s actually the definition of ‘preconventional’ morality: it’s based on reward/punishment.
Ah. Thanks for bringing up the Kohlberg stages—I hadn’t been thinking in those terms.
The view of morality I am promoting here is a kind of meta-pre-conventional viewpoint. That is, morality is not ‘that which receives reward and punishment’, it is instead ‘that which (consequentially) ought to receive reward and punishment, given that many people are stuck at the pre-conventional level’.
‘that which (consequentially) ought to receive reward and punishment, given that many people are stuck at the pre-conventional level’.
How many people? I think (I remember reading in my first-year psych textbook) that most adults functioning at a “normal” level in society are at the conventional level: they have internalized whatever moral standards surround them and obey them as rules, rather than thinking directly of punishment or reward. (They may still be thinking indirectly of punishment and reward; a conventionally moral person obeys the law because it’s the law and it’s wrong to break the law, implicitly because they would be punished if they did.) I’m not really sure how to separate how people actually reason on moral issues, versus how they think they do, and whether the two are often (or ever???) the same thing.
How many people are stuck at that level? I don’t know.
How many people must be stuck there to justify the use of punishment as deterrent? My gut feeling is that we are not punishing too much unless the good done (to society) by deterrence is outweighed by the evil done (to the ‘criminal’) by the punishment.
And also remember that we can use carrots as well as sticks. A smile and a “Thank you” provide a powerful carrot to many people. How many? Again, I don’t know, but I suspect that it is only fair to add these carrot-loving pre-conventionalists in with the ones who respond only to sticks.
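That gut feeling, written as a toy inequality with invented numbers:

```python
# Punish only while the deterrence benefit to society exceeds the harm of punishing.
deterred_harm_per_person = 2.0   # harm each deterred person would otherwise do
people_deterred = 50
harm_of_punishment = 60.0        # disutility inflicted on the 'criminal'

deterrence_good = deterred_harm_per_person * people_deterred   # 100.0
print(deterrence_good > harm_of_punishment)  # True: justified on this metric
```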
What I do not understand is when people use the words “right” or “wrong” independently of any agent’s preferences
Assuming Amanojack explained your position correctly, then there aren’t just people fulfilling their preferences. There are people doing all kinds of things that fulfill or fail to fulfill their preferences—and, not entirely coincidentally, which bring happiness and grief to themselves or others. So then a common reasonable definition of morality (that doesn’t involve the word preferences) is that set of habits that are most likely to bring long-term happiness to oneself and those around one.
there aren’t just people fulfilling their preferences.
You missed a word in my original. I said that there were agents trying to fulfill their preferences. Now, per my comment at the end of your subthread with Amanojack, I realize that the word “preferences” may be unhelpful. Let me try to taboo it:
There are intelligent agents who assign higher values to some futures than others. I observe them generally making an effort to actualize those futures, but sometimes failing due to various immediate circumstances, which we could call cognitive overrides. What I mean by that is that these agents have biases and heuristics which lead them to poorly evaluate the consequences of actions.
Even if a human sleeping on the edge of a cliff knows that the cliff edge is right next to him, he will jolt if startled by noise or movement. He may not want to fall off the cliff, but the jolt reaction occurs before he is able to analyze it. Similarly, under conditions of sufficient hunger, thirst, fear, or pain, the analytical parts of the agent’s mind give way to evolved heuristics.
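Here is a minimal sketch of that two-layer picture (all names and thresholds invented): a fast reflex layer that can pre-empt the deliberative layer’s evaluation of futures.

```python
def act(stimulus_intensity, valued_futures):
    REFLEX_THRESHOLD = 0.8  # invented: startle, hunger, thirst, fear, pain
    if stimulus_intensity > REFLEX_THRESHOLD:
        return "jolt"  # evolved heuristic fires before analysis can run
    # Deliberative layer: try to actualize the highest-valued future.
    return max(valued_futures, key=valued_futures.get)

futures = {"roll toward cliff edge": -100.0, "lie still": 5.0}
print(act(0.9, futures))  # 'jolt', despite the agent's valuation of the futures
print(act(0.1, futures))  # 'lie still'
```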
definition of morality (that doesn’t involve the word preferences) is that set of habits that are most likely to bring long-term happiness to oneself and those around one.
If that’s how you would like to define it, that’s fine. Would you agree then, that the contents of that set of habits is contingent upon what makes you and those around you happy?
He may not want to fall off the cliff, but the jolt reaction occurs before he is able to analyze it
I suspect it’s a matter of degree rather than either-or. People sleeping on the edges of cliffs are much less likely to jolt when startled than people sleeping on soft beds, but not 0% likely. The interplay between your biases and your reason is highly complex.
Would you agree then, that the contents of that set of habits is contingent upon what makes you and those around you happy?
Yes; absolutely. I suspect that a coherent definition of morality that isn’t contingent on those will have to reference a deity.
I don’t understand what you mean by preferences when you say “intelligent agents trying to fulfill their preferences”. I have met plenty of people who were trying to do things contrary to their preferences. Perhaps before you try (or someone tries for you) to distinguish morality from preferences, it might be helpful to distinguish precisely how preferences and behavior can differ?
Example? I prefer not to stay up late, but here I am doing it. It’s not that I’m acting against my preferences, because my current preference is to continue typing this sentence. It’s simply that English doesn’t differentiate very well between “current preferences”= “my preferences right this moment” and “current preferences”= “preferences I have generally these days.”
But I want an example of people acting contrary to their preferences, you’re giving one of yourself acting according to your current preferences. Hopefully, NMJablonski has an example of a common action that is genuinely contrary to the actor’s preferences. Otherwise, the word “preference” simply means “behavior” to him and shouldn’t be used by him. He would be able to simplify “the actions I prefer are the actions I perform,” or “morality is just behavior”, which isn’t very interesting to talk about.
“This-moment preferences” are synonymous with “behavior,” or more precisely, “(attempted/wished-for) action.” In other words, in this moment, my current preferences = what I am currently striving for.
Jablonski seems to be using “morality” to mean something more like the general preferences that one exhibits on a recurring basis, not this-moment preferences. And this is a recurring theme: that morality is questions like, “What general preferences should I cultivate?” (to get more enjoyment out of life)
Ok, so if I understand you correctly:
It is actually meaningful to ask “what general preferences should I cultivate to get more enjoyment out of life?” If so, you describe two types of preference: the higher-order preference (which I’ll call a Preference) to get enjoyment out of life, and the lower-order “preference” (which I’ll call a Habit or Current Behavior rather than a preference, to conform to more standard usage) of eating soggy bland french fries if they are sitting in front of you regardless of the likelihood of delicious pizza arriving. So because you prefer to save room for delicious pizza yet have the Habit of eating whatever is nearby and convenient, you can decide to change that Habit. You may do so by changing your behavior today and tomorrow and the day after, eventually forming a new Habit that conforms better to your preference for delicious foods.
Am I describing this appropriately?
If so, by the above usage, is morality a matter of Behavior, Habit, or Preference?
Sounds fairly close to what I think Jablonski is saying, yes.
Preference isn’t the best word choice. Ultimately it comes down to realizing that I want different things at different times, but in English future wanting is sometimes hard to distinguish from present wanting, which can easily result in a subtle equivocation. This semantic slippage is injecting confusion into the discussion.
Perhaps we have all had the experience of thinking something like, “When 11pm rolls around, I want to want to go to sleep.” And it makes sense to ask, “How can I make it so that I want to go to sleep when 11pm rolls around?” Sure, I presently want to go to sleep early tonight, but will I want to then? How can I make sure I will want to? Such questions of pure personal long-term utility seem to exemplify Jablonksi’s definition of morality.
Amanojack has, I think, explained my meaning well. It may be useful to reduce down to physical brains and talk about actual computational facts (i.e. utility function) that lead to behavior rather than use the slippery words “want” or “preference”.
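As a minimal sketch of that reduction (hypothetical values throughout), ‘preference’ dissolves into a utility function plus an action-selection rule, and the late-night equivocation becomes a utility function that changes with the hour:

```python
def behavior(utility, actions):
    # Behavior is just whichever available action the utility function ranks highest.
    return max(actions, key=utility)

def utility_at(hour):
    # Hypothetical late-night agent whose ranking shifts with the hour.
    table = ({"keep typing": 3.0, "sleep": 1.0} if hour < 23
             else {"keep typing": 1.0, "sleep": 3.0})
    return lambda action: table[action]

actions = ["keep typing", "sleep"]
print(behavior(utility_at(22), actions))  # 'keep typing'
print(behavior(utility_at(23), actions))  # 'sleep': no contradiction, just two moments
```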
Clearly there’s a group of people who dislike what I’ve said in this thread, as I’ve been downvoted quite a bit.
Same here.
“Those actions which I prefer” or “Those actions which most agents in a particular place prefer” or “Those actions which fulfill arbitrary metric X”
It doesn’t mean any of those things, since any of them can be judged wrong.
Likewise, “wrong” inherits its meaning from whatever definition is given for “right”. It makes sense to me to talk about preferences. They’re important. If that’s what people are talking about when they discuss morality, then that makes perfect sense.
Morality is about having the right preferences, as rationality is about having true beliefs.
What I do not understand is when people use the words “right” or “wrong” independently of any agent’s preferences. I don’t see what they are referring to, or what those words even mean in that context.
Do you think the sentence “there are truths no-one knows” is meaningful?
Morality is about having the right preferences, as rationality is about having true beliefs.
I understand what it would mean to have a true belief, as truth is noticeably independent of belief. I can be surprised, and I can anticipate. I have an understanding of a physical world of which I am part, and which generates my experiences.
It does not make any sense for there to be some “correct” preferences. Unlike belief, where there is an actual territory to map, preferences are merely a byproduct of the physical processes of intelligence. They have no higher or divine purpose which demands certain preferences be held. Evolution selects for those which aid survival, and it doesn’t matter if survival means aggression or cooperation. The universe doesn’t care.
I think you and other objective moralists in this thread suffer from extremely anthropocentric thinking. If you rewind the universe to a time before there are humans, in a time of early expansion and the first formation of galaxies, do the “correct” preferences that any agent must strive to discover already exist? Do they exist independent of what kinds of life evolve in what conditions?
If you are able to zoom out of your skull, and view yourself and the world around you as interesting molecules going about their business, you’ll see how absurd this is. Play through the evolution of life on a planetary scale in your mind. Be aware of the molecular forces at work. Run it on fast forward. Stop and notice the points where intelligence is selected for. Watch social animals survive or die based on certain behaviors. See the origin of your own preferences, and why they are so different from some other humans.
Objective morality is a fantasy of self-importance, and a hold-over from ignorant quasi-religious philosophy which has now cloaked itself in scientific terms and hides in university philosophy departments. Physics is going to continue to play out. The only agents who can ever possibly care what you do are other physical intelligences in your light cone.
Do you think mathematical statements are true and false? Do you think mathematics has an actual territory?
It is plainly the case that people can have morally wrong preferences, and therefore it is no argument against ethics that ethics are not forced on people. People will suffer if they hold incorrect or irrational factual beliefs, and they will suffer if they have evil preferences. In both cases there is a distinction between right and wrong, and in both cases there is an option.
I think you and others on this thread suffer from a confusion between ontology and epistemology. There can be objective truths in mathematics without having the number 23 floating around in space. Moral objectivity likewise does not demand the physical existence of moral objects.
There are things I don’t want done to me. I should not therefore do them to others. I can reason my way to that conclusion without the need for moral objects, and without denying that I am made of atoms.
Wait. So you don’t believe in an objective notion of morality, in the sense of a morality that would be true even if there were no people? Instead, you think of morality as, like, a set of reasonable principles a person can figure out that prevent their immediate desires from stomping on their well-being, and/or that includes in their “selfishness” a desire for the well-being of others?
Everything is non-objective for some value of “objective”. It is doubtful that there are mathematical truths without mathematicians. But that does not make math as subjective as art.
Okay. The distinction I am drawing is: are moral facts something “out there” to be discovered, self-justifying, etc., or are they facts about people, their minds, their situations, and their relationships.
Could you answer the question for that value of objective? Or, if not, could you answer the question by ignoring the word “objective” or providing a particular value for it?
I translate that as: it’s better to talk about “moral values” than “moral facts” (moral facts being facts about what moral values are, I guess), and moral values are (approximately) reasonable principles a person can figure out that prevent their immediate desires from stomping on their well-being, and/or that includes in their “selfishness” a desire for the well-being of others.
Something like that? If not, could you translate for me instead?
I take this to mean that, other than that, you agree.
(This is the charitable reading, however. You seem to be sending strong signals that you do not wish to have a productive discussion. If this is not your intent, be careful—I expect that it is easy to interpret posts like this as sending such signals.)
If this is true, then I think the vast majority of the disagreements you’ve been having in this thread have been due to unnecessary miscommunication.
Do you think mathematical statements are true and false? Do you think mathematics has an actual territory?
Mathematics is not Platonically real. If it is, we get Tegmark IV, and then every instant of sensible ordered universe is evidence against it, unless we are Boltzmann brains. So, no, mathematics does not have an actual territory. It is an abstraction of physical behaviors that intelligences can use because intelligences are also physical. Mathematics works because we can perform isomorphic physical operations inside our brains.
It is plainly the case that people can have morally wrong preferences
You can say that as many times as you like, but that won’t make it true.
ETA: You also still haven’t explained how a person can know that.
Mathematics is not Platonically real. If it is, we get Tegmark IV, and then every instant of sensible ordered universe is evidence against it, unless we are Boltzmann brains.
Only if is-real is a boolean. If it’s a number, then mathematics can be “platonically real” without us being Boltzmann brains.
As opposed to what? Subjective? What are the options? Because that helps to clarify what you mean by “objective”. Prices are created indirectly by subjective preferences and they fluctuate, but if I had to pick between calling them “subjective” or calling them “objective” I would pick “objective”, for a variety of reasons.
No; morality reduces to values that can only be defined with respect to an agent, or a set of agents plus an aggregation process. However, almost all of the optimizing agents (humans) that we know about share some values in common, which creates a limited sort of objectivity in that most of the contexts we would define morality with respect to agree qualitatively with each other, which usually allows people to get away with failing to specify the context.
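A minimal sketch of ‘a set of agents plus an aggregation process’, with invented valuations: the same act can come out ‘right’ under one aggregation rule and ‘wrong’ under another, which is why leaving the context unspecified goes unnoticed only while agents mostly agree.

```python
# Invented valuations that three agents assign to one act.
agent_values = {"alice": -2.0, "bob": 4.0, "carol": 3.0}

aggregations = {
    "total utility": sum,
    "worst-off agent": min,  # a maximin-flavored rule
}

for name, rule in aggregations.items():
    verdict = "right" if rule(agent_values.values()) > 0 else "wrong"
    print(f"{name}: {verdict}")
# total utility: right   (sum = 5.0)
# worst-off agent: wrong (min = -2.0)
```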
It still isn’t clear what it means for a preference for murder to be “wrong”!
So far I can only infer your definition of “wrong” to be:
“Not among the correct preferences”
… but you still haven’t explained to us why you think there are correct preferences, besides to stamp your foot and say over and over again “There are obviously correct preferences” even when many people do not agree.
I see no reason to believe that there is a set of “correct” preferences to check against.
Even if there’s no such thing as objective right and wrong, they might easily be able to reason that being bloodthirsty is not in their best selfish interest.
Can people reason that bloodthirst is not a good preference to have...?
For me, now, it isn’t practical. In other circumstances it would be. It need not ever be a terminal goal but it could be an instrumental goal built in deeply.
Funny how you never quite answer the question as stated. Can you even say it is subjectively wrong?
It isn’t ‘funny’ at all. You were trying to force someone into a lose-lose morality-signalling position. It is appropriate to ignore such attempts and instead state what your actual position is.
In keeping with my analogy let’s translate your position into the corresponding position on physics:
I see a universe which contains intelligent agents with opinions and/or beliefs. Then I see conversations about physics and reality talking about beliefs being “true” or “false”. From the context and explanations, “true” seems to mean very different things. Like:
“My beliefs” or “The beliefs of most agents in a particular place” or “Those beliefs which fulfill arbitrary metric X”
Likewise, “false” inherits its meaning from whatever definition is given for “true”. It makes sense to me to talk about opinions and/or beliefs . They’re important. If that’s what people are talking about when they discuss truth, then that makes perfect sense. What I do not understand is when people use the words “true” or “false” independently of any agent’s opinion. I don’t see what they are referring to, or what those words even mean in that context.
Do you still agree with the changed version? If not, why not?
(I never realized how much fun it could be to play a chronophone.)
Based upon my experiences, physical truths appear to be concrete and independent of beliefs and opinions. I see no cases where “right” has a meaning outside of an agent’s preferences. I don’t know how one would go about discovering the “rightness” of something, as one would a physical truth.
It is a poor analogy.
Edit: Seriously? I’m not trying to be obstinate here. Would people prefer I go away?
Seriously? I’m not trying to be obstinate here. Would people prefer I go away?
You’re not being obstinate. You’re more or less right, at least in the parent. There are a few nuances left to pick up but you are not likely to find them by arguing with Eugine.
Does anyone care to explain what I’m missing, or if there’s something specific I did to elicit downvotes?
You signaled disagreement with someone about morality. What did you expect? :)
Your explanation is simple and fits the facts!
I like it :)
Guilt works here, for example. (But XiXiDu covered that.) Social pressure also. Veiled threat and warning, too. Signaling your virtue to others as well. Moral arguments are so handy that they accomplish all of these in one blow.
ETA: I’m not suggesting that you in particular are trying to guilt trip people, pressure them, threaten them, or signal. I’m saying that those are all possible explanations as to why someone might prefer to couch their arguments in moral terms: it is more persuasive (as Dark Arts) in certain cases. Though I reject moralist language if we are trying to have a clear discussion and get at the truth, I am not against using Dark Arts to convince Bob not to strangle Alice.
Perplexed wrote earlier:
Sometimes you’ll want to explain why your punishment of others is justified. If you don’t want to engage Perplexed’s “moral realism”, then either you don’t think there’s anything universal enough (for humans, or in general) in it to be of explanatory use in the judgments people actually make, or you don’t think it’s a productive system for manufacturing (disingenuous yet generally persuasive) explanations that will sometimes excuse you.
Assuming I haven’t totally lost track of context here, I think I am saying that moral language works for persuasion (partially as Dark Arts), but is not really suitable for intellectual discourse.
Okay. Whatever he hopes is real (but you think is only confused), will allow you to form persuasive arguments to similar people. So it’s still worth talking about.
Virtue ethicists and deontologists merely express a preference for certain codes of conduct because they believe adhering to these codes will maximize their utility, usually via the mechanism of lowering their time preference.
ETA: And also, as XiXiDu points out, to signal virtue.
Upvoted because I strongly agree with the spirit of this post, but I don’t think moral philosophy succeeds in dissolving the confusion. So far it has failed miserably, and I suspect that it is entirely unnecessary. That is, I think this is one field that can be dissolved away.
Like if an atheist is talking to a religious person then the subject matter is metatheology?
Which metrics do I use to judge others?
There has been some confusion over the word “preference” in the thread, so perhaps I should use “subjective value”. Would you agree that the only tools I have for judging others are subjective values? (This includes me placing value on other people reaching a state of subjective high value)
Or do you think there’s a set of metrics for judging people which has some spooky, metaphysical property that makes it “better”?
And why would that even matter, as long as I am able to realize what I want without being instantly struck by lightning if I desire or do something that violates the laws of morality? If I live a happy and satisfied life of fulfilled preferences but constantly do what is objectively wrong, why exactly would that matter? To whom would it matter, and why would I care, if I am happy and my preferences are satisfied? Is it some sort of game that I am losing, where those who are the most right win? What if I don’t want to play that game? What if I don’t care who wins?
Because it harms other people directly or indirectly. Most immoral actions have that property.
To the person you harm. To the victim’s friends and relatives. To everyone in the society which is kept smoothly running by the moral code which you flout.
Because you will probably be punished, and that tends to not satisfy your preferences.
If the moral code is correctly designed, yes.
Then you are, by definition, irrational, and a sane society will eventually lock you up as being a danger to yourself and everyone else.
Begging the question.
Either that is part of my preferences or it isn’t.
Either society is instrumental to my goals or it isn’t.
Game theory? Instrumental rationality? Cultural anthropology?
If I am able to realize my goals, satisfy my preferences, don’t want to play some sort of morality game with agreed-upon goals, and am not struck by lightning once I violate those rules, why would I care?
What is your definition of irrationality? I asked how I am irrational if I am happy, able to reach all of my goals, and able to satisfy all of my preferences while constantly violating the laws of morality.
Also, what did you mean by
… in response to “Because you will probably be punished, and that tends to not satisfy your preferences.” ?
I think you mean that you should correctly predict the odds and disutility (over your life) of potential punishments, and then act rationally selfishly. I think this may be too computationally expensive in practice, and you may not have considered the severity of the (unlikely) event that you end up severely punished via a reputation for being an effectively amoral person.
Yes, we see lots of examples of successful and happy unscrupulous people in the news. But consider selection effects (that contradiction of conventional moral wisdom excites people and sells advertisements).
I meant that we already do have a field of applied mathematics and science that talks about those things, why do we need moral philosophy?
I am not saying that it is a clear-cut issue that we, as computationally bounded agents, should abandon moral language, or that we would even want to do that. I am not advocating reducing the complexity of natural language. But this community seems to be committed to reductionism, to minimizing vagueness, and to describing human nature in terms of causal chains. I don’t think that moral philosophy fits this community.
This community doesn’t talk about theology either, it talks about probability and Occam’s razor. Why would it talk about moral philosophy when all of it can be described in terms of cultural anthropology, sociology, evolutionary psychology and game theory?
It is a useful umbrella term—rather like “advertising”.
Can all of it be described in those terms? Isn’t that a philosophical claim?
There’s nothing to dispute. You have a defensible position.
However, I think most humans have as part of what satisfies them (they may not know it until they try it), the desire to feel righteous, which can most fully be realized with a hard-to-shake belief. For a rational person, moral realism may offer this without requiring tremendous self-delusion. (disclaimer: I haven’t tried this).
Is it worth the cost? Probably you can experiment. It’s true that if you formerly felt guilty and afraid of punishment, then deleting the desire to be virtuous (as much as possible) will feel liberating. In most cases, our instinctual fears are overblown in the context of a relatively anonymous urban society.
Still, reputation matters, and you can maintain it more surely by actually being what you present yourself as, rather than carefully (and eventually sloppily and over-optimistically) weighing each case in terms of odds of discovery and punishment. You could work on not feeling bad about your departures from moral perfection more directly, and then enjoy the real positive feeling-of-virtue (if I’m right about our nature), as well as the practical security. The only cost then would be lost opportunities to cheat.
It’s hard to know who to trust as having honest thoughts and communication on the issue, rather than presenting an advantageous image, when so much is at stake. Most people seem to prefer tasteful hypocrisy and tasteful hypocrites. Only those trying to impress you with their honesty, or those with whom you’ve established deep loyalties, will advertise their amorality.
It’s irrational to think that the evaluative buck stops with your own preferences.
Maybe he doesn’t care about the “evaluative buck”, which while rather unfortunate, is certainly possible.
If he doesn’t care about rationality, he is still being irrational.
This.
I’m claiming that there is a particular moral code which has the spooky game-theoretical property that it produces the most utility for you and for others. That is, it is the metric which is Pareto optimal and which is also a ‘fair’ bargain.
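To make that concrete, here is a minimal sketch in Python of what “Pareto optimal plus a fair bargain” could mean for two agents. The payoff numbers, the two-strategy game, and the use of the Nash bargaining product as the fairness criterion are all my own illustrative assumptions, not anything established in this thread.

```python
# Hypothetical payoffs: utility to (you, other) for each joint choice.
# These numbers are made up purely for illustration.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

disagreement = (1, 1)  # what each agent gets if no bargain is struck

def pareto_optimal(outcomes):
    """Keep outcomes that no other outcome weakly dominates."""
    return [
        o for o in outcomes
        if not any(p[0] >= o[0] and p[1] >= o[1] and p != o
                   for p in outcomes)
    ]

outcomes = list(payoffs.values())
frontier = pareto_optimal(outcomes)

# Nash's 'fair' bargain: among outcomes that improve on the
# disagreement point, maximize the product of the gains.
candidates = [o for o in frontier
              if o[0] >= disagreement[0] and o[1] >= disagreement[1]]
fair = max(candidates,
           key=lambda o: (o[0] - disagreement[0]) * (o[1] - disagreement[1]))

print(frontier)  # [(3, 3), (0, 5), (5, 0)]
print(fair)      # (3, 3): mutual cooperation is the Nash bargain here
```

On these made-up numbers, mutual cooperation is both on the Pareto frontier and the Nash bargain, which is the flavor of the claim: a code no one can improve on without hurting someone else, and which splits the gains fairly.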
So you’re saying that there’s one single set of behaviors, which, even though different agents will assign drastically different values to the same potential outcomes, balances their conflicting interests to provide the most net utility across the group. That could be true, although I’m not convinced.
Even if it is, though, what the optimal strategy is will change if the net values across the group changes. The only point I have ever tried to make in these threads is that the origin of any applicable moral value must be the subjective preferences of the agents involved.
The reason any agent would agree to follow such a rule set is if you could demonstrate convincingly that such behaviors maximize that agent’s utility. It all comes down to subjective values. There exists no other motivating force.
True, but that may not be as telling an objection as you seem to think. For example, suppose you run into someone (not me!) who claims that the entire moral code is based on the ‘Golden Rule’ of “Do unto others as you would have others do unto you.” Tell that guy that moral behavior changes if preferences change, and he will respond “Well, duh! What is your point?”.
There are people who do not recognize this. It was, in fact, my point.
Edit: Hmm, did I say something rude Perplexed?
Not to me. I didn’t downvote, and in any case I was the first to use the rude “duh!”, so if you were rude back I probably deserved it. Unfortunately, I’m afraid I still don’t understand your point.
Perhaps you were rude to those unnamed people who you suggest “do not recognize this”.
I think we may have reached the somewhat common on LW point where we’re arguing even though we have no disagreement.
It’s easy to bristle when someone, in response to you, points out something you thought it was obvious you knew. This happens all the time when people think they’re smart :)
I’m fond of including clarification like, “subjective values (values defined in the broadest possible sense, to include even things like your desire to get right with your god, to see other people happy, to not feel guilty, or even to “be good”).”
Some ways I’ve found to dissolve people’s language back to subjective utility:
If someone says something is good, right, bad, or wrong, ask, “For what purpose?”
If someone declares something immoral, unjust, unethical, ask, “So what unhappiness will I suffer as a result?”
But use sparingly, because there is a big reason many people resist dissolving this confusion.
Yes! That’s a point that I’ve repeated so often to so many different people [not on LW, though] that I’d more-or-less “given up”—it began to seem as futile as swatting flies in summer. Maybe I’ll resume swatting now I know I’m not alone.
This is mainly how I use morality. I control my own actions, not the actions of other people, so for me it makes sense to judge my own actions as good or bad, right or wrong. I can change them. Judging someone else changes nothing about the state of the world unless I can persuade them to act differently.
Avoiding a person (a) does not (necessarily) persuade them to act differently, but (b) definitely changes the state of the world. This is not a minor nitpicking point. Avoiding people is also called social ostracism, and it’s a major way that people react to misbehavior. It has the primary effect of protecting themselves. It often has the secondary effect of convincing the ostracized person to improve their behavior.
Then I would consider that a case where I could change their behaviour. There are instances where avoiding someone would bother them enough to have an effect, and other cases where it wouldn’t.
Avoiding people who misbehave will change the state of the world even if that does not affect their behavior. It changes the world by protecting you. You are part of the world.
Yes, but if you judge a particular action of your own to be ‘wrong’, then why should you avoid that action? The definition of wrong that I supply solves that problem. By definition if an action is wrong, then it is likely to elicit punishment. So you have a practical reason for doing right rather than doing wrong.
Furthermore, if you do your duty and reward and/or punish other people for their behavior, then they too will have a practical reason to do right rather than wrong.
Before you object “But that is not morality!”, ask yourself how you learned the difference between right and wrong.
It’s a valid point that I probably learned morality this way. I think that’s actually the definition of ‘preconventional’ morality: it’s based on reward/punishment. Maybe all my current moral ideas have roots in that childhood experience, but they aren’t covered by it anymore. There are actions that would be rewarded by most of the people around me, but which I avoid because I consider there to be a “better” alternative. (I should be able to think of more examples of this, but I guess one is laziness at work. I feel guilty if I don’t do the cleaning and maintenance that needs doing, even though everyone else does almost nothing. I also try to follow a “golden rule”: if I don’t want something to happen to me, I won’t do it to someone else, even if the action is socially acceptable among my friends and wouldn’t be punished.)
Ah. Thanks for bringing up the Kohlberg stages—I hadn’t been thinking in those terms.
The view of morality I am promoting here is a kind of meta-pre-conventional viewpoint. That is, morality is not ‘that which receives reward and punishment’, it is instead ‘that which (consequentially) ought to receive reward and punishment, given that many people are stuck at the pre-conventional level’.
How many people? I think (I remember reading in my first-year psych textbook) that most adults functioning at a “normal” level in society are at the conventional level: they have internalized whatever moral standards surround them and obey them as rules, rather than thinking directly of punishment or reward. (They may still be thinking indirectly of punishment and reward; a conventionally moral person obeys the law because it’s the law and it’s wrong to break the law, implicitly because they would be punished if they did.) I’m not really sure how to separate how people actually reason on moral issues from how they think they do, and whether the two are often (or ever?) the same thing.
How many people are stuck at that level? I don’t know.
How many people must be stuck there to justify the use of punishment as deterrent? My gut feeling is that we are not punishing too much unless the good done (to society) by deterrence is outweighed by the evil done (to the ‘criminal’) by the punishment.
And also remember that we can use carrots as well as sticks. A smile and a “Thank you” provide a powerful carrot to many people. How many? Again, I don’t know, but I suspect that it is only fair to add these carrot-loving pre-conventionalists in with the ones who respond only to sticks.
Cool! Swat away. Though I’m not particularly happy with the metaphor.
Assuming Amanojack explained your position correctly, then there aren’t just people fulfilling their preferences. There are people doing all kinds of things that fulfill or fail to fulfill their preferences—and, not entirely coincidentally, which bring happiness and grief to themselves or others. So then a common reasonable definition of morality (that doesn’t involve the word preferences) is that set of habits that are most likely to bring long-term happiness to oneself and those around one.
You missed a word in my original. I said that there were agents trying to fulfill their preferences. Now, per my comment at the end of your subthread with Amanojack, I realize that the word “preferences” may be unhelpful. Let me try to taboo it:
There are intelligent agents who assign higher values to some futures than others. I observe them generally making an effort to actualize those futures, but sometimes failing due to various immediate circumstances, which we could call cognitive overrides. What I mean by that is that these agents have biases and heuristics which lead them to poorly evaluate the consequences of actions.
Even if a human sleeping on the edge of a cliff knows that the cliff edge is right next to him, he will jolt if startled by noise or movement. He may not want to fall off the cliff, but the jolt reaction occurs before he is able to analyze it. Similarly, under conditions of sufficient hunger, thirst, fear, or pain, the analytical parts of the agent’s mind give way to evolved heuristics.
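A toy rendering of that taboo’d description, just to keep the two mechanisms separate (valuation of futures versus heuristics that preempt it). The names and numbers are hypothetical:

```python
import random

# Futures this agent can try to actualize, and the values it assigns
# to them. The names and numbers are hypothetical.
future_values = {"stay_on_ledge": 10, "fall_off_cliff": -100}

def deliberate():
    """Slow, analytical choice: actualize the highest-valued future."""
    return max(future_values, key=future_values.get)

def startled_reflex():
    """Fast evolved heuristic that fires before analysis can run;
    it can jolt the sleeper toward a low-valued future."""
    return random.choice(list(future_values))

def act(startled=False):
    # The override preempts evaluation entirely; this is a 'cognitive
    # override', not a change in what the agent values.
    return startled_reflex() if startled else deliberate()

print(act())                # stay_on_ledge
print(act(startled=True))   # sometimes fall_off_cliff
```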
If that’s how you would like to define it, that’s fine. Would you agree then, that the contents of that set of habits is contingent upon what makes you and those around you happy?
I suspect it’s a matter of degree rather than either-or. People sleeping on the edges of cliffs are much less likely to jolt when startled than people sleeping on soft beds, but not 0% likely. The interplay between your biases and your reason is highly complex.
Yes; absolutely. I suspect that a coherent definition of morality that isn’t contingent on those will have to reference a deity.
We are, near as I can tell, in perfect agreement on the substance of this issue. Aumann would be proud. :)
I don’t understand what you mean by preferences when you say “intelligent agents trying to fulfill their preferences”. I have met plenty of people who were trying to do things contrary to their preferences. Perhaps before you try (or someone tries for you) to distinguish morality from preferences, it might be helpful to distinguish precisely how preferences and behavior can differ?
Example? I prefer not to stay up late, but here I am doing it. It’s not that I’m acting against my preferences, because my current preference is to continue typing this sentence. It’s simply that English doesn’t differentiate very well between “current preferences”= “my preferences right this moment” and “current preferences”= “preferences I have generally these days.”
Seinfeld said it best.
But I want an example of people acting contrary to their preferences; you’re giving one of yourself acting according to your current preferences. Hopefully, NMJablonski has an example of a common action that is genuinely contrary to the actor’s preferences. Otherwise, the word “preference” simply means “behavior” to him and shouldn’t be used by him. He would be able to simplify to “the actions I prefer are the actions I perform,” or “morality is just behavior,” which isn’t very interesting to talk about.
“This-moment preferences” are synonymous with “behavior,” or more precisely, “(attempted/wished-for) action.” In other words, in this moment, my current preferences = what I am currently striving for.
Jablonski seems to be using “morality” to mean something more like the general preferences that one exhibits on a recurring basis, not this-moment preferences. And this is a recurring theme: that morality is questions like, “What general preferences should I cultivate?” (to get more enjoyment out of life)
Ok, so if I understand you correctly: It is actually meaningful to ask “what general preferences should I cultivate to get more enjoyment out of life?” If so, you describe two types of preference: the higher-order preference (which I’ll call a Preference) to get enjoyment out of life, and the lower-order “preference” (which I’ll call a Habit or Current Behavior rather than a preference, to conform to more standard usage) of eating soggy bland french fries if they are sitting in front of you regardless of the likelihood of delicious pizza arriving. So because you prefer to save room for delicious pizza yet have the Habit of eating whatever is nearby and convenient, you can decide to change that Habit. You may do so by changing your behavior today and tomorrow and the day after, eventually forming a new Habit that conforms better to your preference for delicious foods.
Am I describing this appropriately? If so, by the above usage, is morality a matter of Behavior, Habit, or Preference?
Sounds fairly close to what I think Jablonski is saying, yes.
Preference isn’t the best word choice. Ultimately it comes down to realizing that I want different things at different times, but in English future wanting is sometimes hard to distinguish from present wanting, which can easily result in a subtle equivocation. This semantic slippage is injecting confusion into the discussion.
Perhaps we have all had the experience of thinking something like, “When 11pm rolls around, I want to want to go to sleep.” And it makes sense to ask, “How can I make it so that I want to go to sleep when 11pm rolls around?” Sure, I presently want to go to sleep early tonight, but will I want to then? How can I make sure I will want to? Such questions of pure personal long-term utility seem to exemplify Jablonski’s definition of morality.
ok cool, replying to the original post then.
Oops, I totally missed this subthread.
Amanojack has, I think, explained my meaning well. It may be useful to reduce down to physical brains and talk about actual computational facts (i.e. utility function) that lead to behavior rather than use the slippery words “want” or “preference”.
Good idea. Like, “My present utility function calls for my future utility function to be such and such”?
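For instance, a toy sketch of that structure (the outcomes and numbers are invented purely for illustration):

```python
# A toy model of wanting-now versus wanting-later.

def utility_now(outcome):
    # At 9pm I value a full night's sleep highly.
    return {"sleep_early": 10, "stay_up": 4}[outcome]

def utility_at_11pm(outcome):
    # By 11pm, absorbed in typing, my weights have shifted.
    return {"sleep_early": 3, "stay_up": 8}[outcome]

# Left alone, the 11pm self maximizes its own function:
late_choice = max(["sleep_early", "stay_up"], key=utility_at_11pm)
print(late_choice)               # stay_up

# The present self scores that predicted choice with *its* function
# and sees a loss (4 instead of 10), so it prefers to act now to
# change the future function: an alarm, no laptop in the bedroom.
# That is the "want to want" structure in the comment above.
print(utility_now(late_choice))  # 4
```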
I replied to Marius higher up in the thread with my efforts at preference-taboo.
Same here.
It doesn’t mean any of those things, since any of them can be judged wrong.
Morality is about having the right preferences, as rationality is about having true beliefs.
Do you think the sentence “there are truths no-one knows” is meaningful?
I understand what it would mean to have a true belief, as truth is noticeably independent of belief. I can be surprised, and I can anticipate. I have an understanding of a physical world of which I am part, and which generates my experiences.
It does not make any sense for there to be some “correct” preferences. Unlike belief, where there is an actual territory to map, preferences are merely a byproduct of the physical processes of intelligence. They have no higher or divine purpose which demands certain preferences be held. Evolution selects for those which aid survival, and it doesn’t matter if survival means aggression or cooperation. The universe doesn’t care.
I think you and other objective moralists in this thread suffer from extremely anthropocentric thinking. If you rewind the universe to a time before there are humans, in a time of early expansion and the first formation of galaxies, does there exist then the “correct” preferences that any agent must strive to discover? Do they exist independent of what kinds of life evolve in what conditions?
If you are able to zoom out of your skull, and view yourself and the world around you as interesting molecules going about their business, you’ll see how absurd this is. Play through the evolution of life on a planetary scale in your mind. Be aware of the molecular forces at work. Run it on fast forward. Stop and notice the points where intelligence is selected for. Watch social animals survive or die based on certain behaviors. See the origin of your own preferences, and why they are so different from some other humans.
Objective morality is a fantasy of self-importance, and a hold-over from ignorant quasi-religious philosophy which has now cloaked itself in scientific terms and hides in university philosophy departments. Physics is going to continue to play out. The only agents who can ever possibly care what you do are other physical intelligences in your light cone.
Do you think mathematical statements are true and false? Do you think mathematics has an actual territory?
It is plainly the case that people can have morally wrong preferences, so it is no argument against ethics that ethics are not forced on people. People will suffer if they hold incorrect or irrational factual beliefs, and they will suffer if they have evil preferences. In both cases there is a distinction between right and wrong, and in both cases there is an option.
I think you and others on this thread suffer from a confusion between ontology and epistemology. There can be objective truths in mathematics without having the number 23 floating around in space. Moral objectivity likewise does not demand the physical existence of moral objects.
There are things I don’t want done to me. I should not therefore do them to others. I can reason my way to that conclusion without the need for moral objects, and without denying that I am made of atoms.
Wait. So you don’t believe in an objective notion of morality, in the sense of a morality that would be true even if there were no people? Instead, you think of morality as, like, a set of reasonable principles a person can figure out that prevent their immediate desires from stomping on their well-being, and/or that includes in their “selfishness” a desire for the well-being of others?
Everything is non objective for some value of objective. It is doubtful that there are mathematical truths without mathematicians. But that does not make math as subjective as art.
Okay. The distinction I am drawing is: are moral facts something “out there” to be discovered, self-justifying, etc., or are they facts about people, their minds, their situations, and their relationships?
Could you answer the question for that value of objective? Or, if not, could you answer the question by ignoring the word “objective” or providing a particular value for it?
The second is closer, but there is still the issue of the fact-value divide.
ETA: I have a substantive pre-written article on this, but where am I going to post it with my karma...?
I translate that as: it’s better to talk about “moral values” than “moral facts” (moral facts being facts about what moral values are, I guess), and moral values are (approximately) reasonable principles a person can figure out that prevent their immediate desires from stomping on their well-being, and/or that includes in their “selfishness” a desire for the well-being of others.
Something like that? If not, could you translate for me instead?
I think the fact that moral values apply to groups is important.
I take this to mean that, other than that, you agree.
(This is the charitable reading, however. You seem to be sending strong signals that you do not wish to have a productive discussion. If this is not your intent, be careful—I expect that it is easy to interpret posts like this as sending such signals.)
If this is true, then I think the vast majority of the disagreements you’ve been having in this thread have been due to unnecessary miscommunication.
Mathematics is not Platonically real. If it were, we would get Tegmark IV, and then every instant of sensible, ordered universe is evidence against it, unless we are Boltzmann brains. So, no, mathematics does not have an actual territory. It is an abstraction of physical behaviors that intelligences can use because intelligences are also physical. Mathematics works because we can perform isomorphic physical operations inside our brains.
You can say that as many times as you like, but that won’t make it true.
ETA: You also still haven’t explained how a person can know that.
Only if is-real is a boolean. If it’s a number, then mathematics can be “platonically real” without us being Boltzmann brains.
Upvoted. That’s a good point, but also a whole other rabbit hole. Do you think morality is objective?
As opposed to what? Subjective? What are the options? Because that helps to clarify what you mean by “objective”. Prices are created indirectly by subjective preferences and they fluctuate, but if I had to pick between calling them “subjective” or calling them “objective” I would pick “objective”, for a variety of reasons.
No; morality reduces to values that can only be defined with respect to an agent, or a set of agents plus an aggregation process. However, almost all of the optimizing agents (humans) that we know about share some values in common, which creates a limited sort of objectivity in that most of the contexts we would define morality with respect to agree qualitatively with each other, which usually allows people to get away with failing to specify the context.
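As a minimal sketch of “a set of agents plus an aggregation process” (the agents, their values, and the choice of a weighted sum as the aggregator are all illustrative assumptions, not a claim about how the aggregation must work):

```python
# Hypothetical agents assigning subjective values to the same outcomes.
agent_values = {
    "alice": {"help_stranger": 5, "ignore_stranger": 1},
    "bob":   {"help_stranger": 4, "ignore_stranger": 2},
    "carol": {"help_stranger": 3, "ignore_stranger": 3},
}

def aggregate(values_by_agent, weights=None):
    """One possible aggregation process: a weighted sum of each
    agent's subjective values. Other choices (median, veto rules,
    bargaining) would define a different 'morality'."""
    agents = list(values_by_agent)
    if weights is None:
        weights = {a: 1.0 for a in agents}
    outcomes = next(iter(values_by_agent.values())).keys()
    return {
        o: sum(weights[a] * values_by_agent[a][o] for a in agents)
        for o in outcomes
    }

scores = aggregate(agent_values)
print(max(scores, key=scores.get))  # help_stranger (12 vs 6)
```

The shared-values point above is then the observation that, for most human sets of agents, different reasonable aggregators end up agreeing qualitatively, which is why the context can usually go unspecified.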
Upvoted. I think you could get a decent definition of the word “morality” along these lines.
A person can know that by reasoning about it.
If you think there is nothing wrong with having a preference for murder, it is about time you said so. It changes a lot.
It still isn’t clear what it means for a preference for murder to be “wrong”!
So far I can only infer your definition of “wrong” to be:
“Not among the correct preferences”
… but you still haven’t explained to us why you think there are correct preferences, besides to stamp your foot and say over and over again “There are obviously correct preferences” even when many people do not agree.
I see no reason to believe that there is a set of “correct” preferences to check against.
So you think there is nothing wrong in having a preference for murder? Yes or no?
I need to find out whether I should be arguing to specific cases from general principles or vice versa.
I do not believe there is a set of correct preferences. There is no objective right or wrong.
Funny how you never quite answer the question as stated. Can you even say it is subjectively wrong?
“Wrong” meaning what?
Would I prefer the people around me not be bloodthirsty? Yes, I would prefer that.
Can people reason that bloodthirst is not a good preference to have...?
Even if there’s no such thing as objective right and wrong, they might easily be able to reason that being bloodthirsty is not in their best selfish interest.
If there’s no right or wrong, why does that matter?
I don’t understand the question, nor why you singled out that fragment.
When you say “Even if there’s no such thing as objective right and wrong” you’re still implicitly presuming a default morality, namely ethical egoism.
Yes. Even subjective morality refutes NMJ’s nihilism.
I agree with Sewing-Machine
Being bloodthirsty would lead to results I do not prefer.
ETA: Therefore I would not choose to become bloodthirsty. This is based on existing preference.
For me, now, it isn’t practical. In other circumstances it would be. It need not ever be a terminal goal but it could be an instrumental goal built in deeply.
It isn’t ‘funny’ at all. You were trying to force someone into a lose-lose morality-signalling position. It is appropriate to ignore such attempts and instead state what your actual position is.
Your gambit here verges on logically rude.
In keeping with my analogy let’s translate your position into the corresponding position on physics:
Do you still agree with the changed version? If not, why not?
(I never realized how much fun it could be to play a chronophone.)
Based upon my experiences, physical truths appear to be concrete and independent of beliefs and opinions. I see no cases where “right” has a meaning outside of an agent’s preferences. I don’t know how one would go about discovering the “rightness” of something, as one would a physical truth.
It is a poor analogy.
Edit: Seriously? I’m not trying to be obstinate here. Would people prefer I go away?
New edit: Thanks wedrifid. I was very confused.
You’re not being obstinate. You’re more or less right, at least in the parent. There are a few nuances left to pick up but you are not likely to find them by arguing with Eugine.
Please explain what the word “concrete” means independent of anyone’s beliefs and opinions.
How about this. You stop down-voting the comments in this thread you disagree with and I’ll do the same.
… I’m not down-voting the comments I disagree with.
I down-voted a couple of snide comments from Peter earlier.
Well, somebody is.
If it’s not you I’m sorry.