Why do people seem to mean different things by “I want the pie” and “It is right that I should get the pie”? Why are the two propositions argued in different ways?
“I want the pie” is something that nobody else is affected by and thus nobody else has an interest in. “I should get the pie” is something that anybody else interested in the pie has an interest in. In this sense, the moral preferences are those that other moral beings have a stake in, those that affect other moral beings. I think some kind of a distinction like this explains the different ways we talk about and argue these two kinds of preferences. Additionally, evolution has most likely given us a pre-configured and optimized module for dealing with classes of problems involving other beings that were especially important in the environment of evolutionary adaptedness, which subjectively “feels” like an objective morality that is written into the fabric of the universe.
When and why do people change their terminal values? Do the concepts of “moral error” and “moral progress” have referents? Why would anyone want to change what they want?
I think of preferences and values as being part of something like a complex system (in the sense of http://en.wikipedia.org/wiki/Complex_system) in which all the various preferences are inter-related and in constant interaction. There may be something like a messy, tangled hierarchy where we have terminal preferences that are initially hardwired at a very low-level, on top of which are higher-level non-terminal preferences, with something akin to back-propagation allowing for non-terminal preferences to affect the low-level terminal preferences. Some preferences are so general that they are in constant interaction with a very large subset of all the preferences; these are experienced as things that are “core to our being”, and we are much more likely to call these “values” rather than “preferences”, although preferences and values are not different in kind.
I think of moral error as actions that go against the terminal (and closely associated non-terminal (which feedback to terminal)) and most general values (involving other moral beings) of a large class of human beings (either directly via this particular instance of the error affecting me or indirectly via contemplation of this type of moral error becoming widespread and affecting me in the future). I think of moral progress as changes to core values that result in more human beings having their fundamental values (like fairness, purpose, social harmony) flourish more frequently and more completely rather than be thwarted.
Why and how does anyone ever “do something they know they shouldn’t”, or “want something they know is wrong”?
Because the system of interdependent values is not a static system and it is not a consistent system either. We have some fundamental values that are in conflict with each other at certain times and in certain circumstances, like self-interest and social harmony. Depending on all the other values and their interdependencies, sometimes one will win out, and sometimes the other will win out. Guilt is a function of recognizing that something we have done has thwarted one of our own fundamental values (but satisfied the others that won out in this instance) and thwarted some fundamental values of other beings too (not thwarting the fundamental values of others is another of our fundamental values). The messiness of the system (and the fact that it is not consistent) dooms any attempt by philosophers to come up with a moral system that is logical and always “says what we want it to say”.
Does the notion of morality-as-preference really add up to moral normality?
I think it does add up to moral normality in the sense that our actions and interactions will generally be in accordance with what we think of as moral normality, even if the (ultimate) justifications and the bedrock that underlies the system as a whole are wildly different. Fundamental to what I think of as “moral normality” is the idea that something other than human beings supplies the moral criterion, whereas under the morality-as-preference view as I described it above, all we can say is that IF you desire to have your most fundamental values flourish (and you are a statistically average human in terms of your fundamental values including things like social harmony), THEN a system that provides for the simultaneous flourishing of other beings’ fundamental values is the most effective way of accomplishing that. It is a fact that most people DO have these similar fundamental values, but there is no objective criterion from the side of reality itself that says all beings MUST have the desire to have their most fundamental values flourish (or that the fundamental values we do have are the “officially sanctioned” ones). It’s just an empirical fact of the way that human beings are (and probably many other classes of beings that were subject to similar pressures).
I’ve voiced my annoyance with the commenting system in the past, in particular that it is non-threaded and so often very difficult to figure out what someone is responding to if they don’t include context (which they often don’t), so I won’t give details again.
On the topic of the 2 of 10 rule, if it’s to prevent one person dominating a thread, shouldn’t the rule be “no more than 2 of last 10 should be by the same person in the same thread” (so 3 posts by the same person would be fine as long as they are in 3 different threads)?
Optimistically, I would say that if the murderer perfectly knew all the relevant facts, including the victim’s experience, ve wouldn’t do it.
The murderer may have all the facts, understand exactly what ve is doing and what the experience of the other will be, and just decide that ve doesn’t care. Which fact is ve not aware of? Ve may understand all the pain and suffering it will cause, ve may understand that ve is wiping out a future for the other person and doing something that ve would prefer not to be on the receiving end of, may realize that it is behavior that if universalized would destroy society, may realize that it lessens the sum total of happiness or whatever else, may even know that “ve should feel compelled not to murder” etc. But at the end of the day, ve still might say, “regardless of all that, I don’t care, and this is what I want to do and what I will do”.
There is a conflict of desire (and of values) here, not a difference of fact. Having all the facts is one thing. Caring about the facts is something altogether different.
On the question of the bedrock of fairness, at the end of the day it seems to me that one of the two scenarios will occur:
(1) all parties happen to agree on what the bedrock is, or they are able to come to an agreement.
(2) all parties cannot agree on what the bedrock is. The matter is resolved by force with some party or coalition of parties saying “this is our bedrock, and we will punish you if you do not obey it”.
And the universe itself doesn’t care one way or the other.
But if I understand you, you are saying that human morality is human and does not apply to all sentient beings. However, as long as all we are talking about and all we really deal with is humans, then there is no difference in practice between a morality that is specific to humans and a universal morality applicable to all sentient beings, and so the argument about universality seems academic, of no import at least until First Contact is achieved.
What I am really saying is that the notion of “morality” is so hopelessly contaminated with notions of objective standards and criteria of morality above and beyond humanity that we would do well to find other ways to think and talk about it. But to answer you directly in terms of what I think about the two ways of thinking about morality, I think there is a key difference between (1) “our particular ‘morality’ is purely a function of our evolutionary history (as it expresses in culture)” and (2) “there is a universal morality applicable to all sentients (and we don’t know of other similarly intelligent sentients yet)”.
With 1, there is no justification for a particular moral system: “this is just the way we are” is as good as it gets (no matter how you try to build on it, that is the bedrock). With 2, there is something outside of humanity that justifies some moralities and forbids others; there is something like an objective criterion that we can apply, rather than the criterion being relative to human beings and the (not inevitable) events that have brought us to this point. In 1 the rules are in some sense arbitrary; in 2 they are not. I think that is a huge difference. In the course of making decisions in day-to-day existence (should I steal this book? should I cheat on my partner?), I agree with you that the difference is academic.
In particular, a lot of moral non-realists are wrong.
Yes, they’re wrong, but I think the important point is “what are they wrong about”? Under 1, the claim that “it is merely a matter of [arbitrary] personal opinion” is wrong as an empirical matter because personal opinions in “moral” matters are not arbitrary: they are derived from hardwired tendencies to interpret certain things in a moralistic manner. Under 2, it is not so much an empirical matter of studying human beings and experimenting and determining what the basis for personal opinions about “moral” matters is; it is a matter of determining whether “it’s merely a matter of personal opinion” is what the universal moral law says (and it does not, of course).
I concede that I was sloppy in speaking of “traditional notions”, although I did not say that there were no philosophical traditions such that...; I was talking about the traditions that were most influential over historical times in western culture (given my meager knowledge of ethics, based on a university course and a little other reading). I had in mind thousands of years of Judeo-Christian morality that is rooted in what the Deity Said or Did, and deontological understandings of morality such as Kant’s (in which species-independent reason compels us to recognize that …), as well as utilitarianism (in the sense that the justification for believing that the moral worth of an action is strictly determined by the outcome is not based on our evolutionary quirks: it is supposed to be a rationally compelling system on its own, though perhaps a modern utilitarian might appeal to our evolutionary history as justification).
On the topic of natural law tradition, is it your understanding that it is compatible with the idea that moral judgments are just a subset of preferences that we are hardwired to have tendencies regarding, no different in kind to any other preference (like for sweet things)? That is the point I’m trying to make, and it’s certainly not something I heard presented in my ethics class in university. The fact that we have a system that is optimized and pre-configured for making judgments about certain important matters is a far cry from saying that there is an objective moral law. It also doesn’t support the notion that there are moral facts that are different in kind from any other type of fact.
It seems from skimming that natural law article you mentioned that Aquinas is central to understanding the tradition. The article quotes Aquinas as saying that ‘the natural law is the way that the human being “participates” in the eternal law’ [of God]. It seems to me that again, we are talking about a system that sees an objective criterion for morality that is outside of humanity, and I think saying that “the way human beings happened to evolve to think about certain actions constitutes an objective natural law for human morality” is a rather tenuous position. Do you hold that position?
Laura ABJ: To expand on the text you quoted, I think that killing babies is ugly, and therefore would not do it without sufficient reason, which I don’t think the scenario provides. The ugliness of killing babies doesn’t need a moral explanation, and the moral explanation just builds on (and adds nothing but a more convenient way of speaking about) the foundation of aversion, no matter how it’s dressed up and made to look like something else.
The idea is not compelling to me and so would not haunt me forever, because like I said, I’m not yet convinced that some X number of refreshing breezes on a hot day is strictly equivalent in some non-arbitrary sense to murdering a baby, and X+1 breezes is “better” in some non-arbitrary sense.
However, the idea of being haunted forever would bother me now if I thought it likely that my future self would think I made the wrong decision, but that implies that I have more knowledge and perspective now than I actually have (in order to know enough to think it likely that I’ll be haunted). All I can do is make what I think is the best decision given what I know and understand now, so I don’t see that I could think it likely that I would be haunted by what I did. Of course, I could make a terrible mistake, not having understood something I will later think I should have understood, and I might regret that forever, but I wouldn’t realize that at the time and I wouldn’t think it likely.
Hal: as an amoralist, I wouldn’t do it. If there is not enough time to explain to me why it is necessary and convince me that it is necessary, no deal. Even if I thought it probably would substantially increase the future happiness of humanity, I still wouldn’t do it without a complete explanation. Not because I think there is a moral fabric to the universe that says killing babies is wrong, but because I am hardwired to have an extremely strong aversion to killing babies. Even if I actually was convinced that it would increase happiness, I still might not do it, because I’m still undecided on the idea that some number of people experiencing a refreshing breeze on a hot day is worth more than some person being tortured -- ditto for killing babies.
It seems to me that if you want to find people who are willing to torture and kill babies because “it will increase happiness”, you need to find some extremely moral utilitarians. I think you’d have much better luck in that community than among amoralists ;-).
Traditional notions of morality are confused, and observation of the way people act does show that they are poor explanations, so I think we are in perfect agreement there. (I do mean “notion” among thinkers, not among average people who haven’t given much thought to such things.) Your second paragraph isn’t in conflict with my statement that morality is traditionally understood to be in some sense objectively true and objectively binding on us, and that it would be just as true and just as binding if we had evolved very differently.
It’s a different topic altogether to consider to whom we have moral obligations (or who should be treated in ways constrained by our morality). And it’s another topic again to consider what types of beings are able to participate in (or are obligated to participate in) the moral system. I wasn’t touching on either of these last two topics.
All I’m saying is that I believe that what morality actually is for each of us in our daily lives is a result of what worked for our ancestors, and that is all it is. I.e., there is no objective morality and there is no ONE TRUE WAY. You can never say “reason demands that you must do …” or “you are morally obligated by reality itself to …” without first making some assumptions that are themselves not justifiable (the axioms that we have as a result of evolution). Anything you build on that foundational bedrock is contingent and not necessary.
Constant: I basically agree with the gist of your rephrasing it in terms of being relative to the species rather than independent of the species, but I would emphasize that what you end up with is not a “moral system” in anything like the traditional sense, since it is fundamental to traditional notions of morality that THE ONE TRUE WAY does not depend on human beings and the quirks of our evolutionary history and that it is privileged from the point of view of reality (because its edicts were written in stone by God or because the one true species-independent reason proves it must be so).
btw, you mean partial application rather than currying.
Currying is converting a function like the following, which takes a single n-tuple arg (n > 1) [“::” means “has type”]
-- f takes a 2-tuple consisting of a value of type ‘x’ and a value of type ‘y’ and returns a value of type ‘z’.
f :: (x, y) → z
into a function like the following, which effectively takes the arguments separately (by returning a function that takes a single argument)
-- f takes a single argument of type ‘x’, and returns a function that accepts a single argument of type ‘y’ and returns a value of type ‘z’.
f :: x → y → z
What you meant is going from
f :: x → y → z
g :: y → z
g = f foo
where the ‘foo’ argument of type ‘x’ is “hardwired” into function g.
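The same distinction can be sketched in Python for concreteness (the original signatures are Haskell-style; the names `f`, `f2`, `curry`, and `g` here are illustrative, not from any particular library apart from the standard `functools.partial`):

```python
from functools import partial

# An uncurried function: takes a single 2-tuple argument,
# like f :: (x, y) -> z above.
def f(pair):
    x, y = pair
    return x + y

# Currying: convert f into a chain of single-argument functions,
# like f :: x -> y -> z.
def curry(func):
    return lambda x: lambda y: func((x, y))

curried_f = curry(f)
print(curried_f(2)(3))  # prints 5

# Partial application: "hardwire" one argument of a multi-argument
# function, yielding something like g :: y -> z.
def f2(x, y):
    return x + y

g = partial(f2, 2)  # the 'foo' argument (here 2) is baked into g
print(g(3))  # prints 5
```

Currying changes the *shape* of the function without fixing any values; partial application fixes a value and is what `g = f foo` does above.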
I agree with mtraven’s last post that morality is an innate functionality of the human brain that can’t be “disproved”, and yet I have said again and again that I don’t believe in morality, so let me explain.
Morality is just a certain innate functionality in our brains as it expresses itself based on our life experiences. This is entirely consistent with the assertion that what most people mean by morality—an objective standard of conduct that is written into the fabric of reality itself—does not exist: there is no such thing!
A lot of confusion in this thread is due to some people taking “there is no morality” to mean there is nothing in the brain that corresponds to morality (and nothing like a moral system that almost all of us intuitively know) -- which I believe is obviously false, i.e., that there is such a system—and others taking it to mean there is no objective morality that exists independently of thinking beings with morality systems built in to their brains—which I believe is obviously true, i.e., that there is no objective morality. And of course, others have taken “there is no morality” to mean other things, perhaps following on some of Eliezer’s rather bizarre statements (which I hope he will clarify) in the post that conflated morality with motivation and implied that morality is what gets us out of bed in the morning or causes us to prefer tasty food to boring food.
Morality exists as something hardwired into us due to our evolutionary history, and there are sound reasons why we are better off having it. But that doesn’t imply that there is some morality that is sanctioned from the side of reality itself or that our particular moral beliefs are in any way privileged.
As a matter of practice, we all privilege the system that is hardwired into us, but that is just a brute fact about how human beings happen to be. It could easily have turned out radically different. We have no objective basis for ranking and distinguishing between alternate possible moralities. Of course, we have strong feelings nevertheless.
mtraven: many of the posters in this thread—myself included—have said that they don’t believe in morality (meaning morality and not “values” or “motivation”), and yet I very highly doubt that many of us are clinical psychopaths.
Not believing in morality does not mean doing what those who believe in morality consider to be immoral. Psychopathy is not “not believing in morality”: it entails certain kinds of behaviors, which naive analyses attribute to “lack of morality”, but which I would argue are a result of aberrant preferences that manifest as aberrant behavior and can be explained without recourse to the concept of morality.
Unknown: of course it would make a difference, just as my behavior would be different if I had billions of dollars rather than next to nothing or if I were immortal rather than mortal. It doesn’t have anything to do with “morality” though.
For example, if I had the power of invisibility (and immateriality) and were able to plant a listening device in the oval office with no chance of getting caught, I would do it in order to publicly expose the lies and manipulations of the Bush administration and give proof of the willful stupidity and rampant dishonesty that many of his former administration have stated they witnessed daily—not because I think there is some objective code of morality that they violate but because I think the world would be a better place if their lies were exposed and such people did not have such power. (Note: I don’t think it would be a better place in anything like an objective sense: that is just my personal preference, and if I had the power to make it so, I would.)
(Hello, NSA: this is all purely fictional, of course.)
On the topic of vegetarianism, I originally became a vegetarian 15 years ago because I thought it was “wrong” to cause unnecessary pain and suffering of conscious beings, but I am still a vegetarian even though I no longer think it is “wrong” (in anything like the ordinary sense).
Now that I no longer think that the concept of “morality” makes much sense at all (except as a fancy and unnecessary name for certain evolved tendencies that are purely a result of what worked for my ancestors in their environments (as they have expressed themselves and changed over the course of my lifetime)), I remain a vegetarian for the reason that I still prefer there to be less unnecessary pain and suffering rather than more. I don’t think my preference is demanded or sanctioned by some objective moral law; it is merely my preference.
I recognize now that the reason I thought it was “wrong” is that I had the underlying preference all along and that I recognized that my behavior was inconsistent with my fundamental preferences (and that I desired to act more consistently with my fundamental beliefs).
Would I prefer that more people were vegetarians? Yes. Is it because I think unnecessary pain and suffering are “wrong”? No. I just don’t like unnecessary pain and suffering and would prefer for there to be less rather than more. If you take the person who says it is “wrong”, and keep probing them for more fundamental reasons that they have this feeling of “wrongness”, asking them “why do you believe that?” again and again, eventually you come to a point where they say “I just believe this”.
As Wittgenstein said:
If I have exhausted the justifications I have reached bedrock, and my spade is turned. Then I am inclined to say: “This is simply what I do.”
Believers in morality try to convince us that there is a bedrock that justifies everything else but needs no justification itself, but there is no uncaused cause and there can be no infinite regress. Our evolved tendencies as they express themselves as a result of our life experience are the bedrock, and nothing else is necessary. Morality is just a fairy tale that we build upon the bedrock in order to convince ourselves that reality or nature (or God) cares about what we do and that we are absolved of responsibility for our behavior as long as we were “trying to do the right thing” (which is a more subtle version of the “I was just following orders” defense).
One might argue that I believe in “morality” but have merely substituted “preferences” for “moral beliefs”, but the difference is that I don’t think any of my preferences are different in kind from any others, so there is no justification for picking a subset of them and calling that subset “the moral preferences” and arguing that they are fundamentally different from any other preference I have.
Ah, I’m rambling … Too much coffee.
Like many others here, I don’t believe that there is anything like a moral truth that exists independently of thinking beings (or even dependently on thinking beings in anything like an objective sense), so I already live in something like that hypothetical. Thus my behavior would not be altered in the slightest.
@Richard: I think that’s a valid reduction. It explains non-negative integers reductively in terms of an isomorphism between two groups of things without appealing to numbers or number concepts.
@constant: regardless of the label, you still have 2 sets of things: those which it is possible to label fizzbin (following the rules) and those which it is not. Possibility is still there. So what does it mean that it is possible to label a node fizzbin? Does that mean that in order to understand the algorithm, which relies on the possibility of labelling nodes “fizzbin” or not, we now must set up a different search space, make a preliminary assumption about which nodes in that space we think it is possible to label fizzbin (following the rules), and then start searching and changing labels? How does this process terminate? It terminates in something other than the search algorithm: a primitive sense of possibility that is more fundamental than the concept of “can label fizzbin”.
To be clear, there are two different but related points that I’ve tried to make here in the last few posts.
Point 1 is a minor point about the Rationalist’s Taboo game:
With regard to this point, as I’ve stated already, the task was to give a reductive explanation of the concept of possibility by explaining it in terms of more fundamental concepts (i.e., concepts which have nothing to do with possibility or associated concepts, even implicitly). I think that Eliezer failed that by sneaking the concept of “possible to be reached” (i.e., “reachable”) into his explanation (note that what we call that concept is irrelevant, reachable or fizzbin or whatever).
Point 2 is a related but slightly different point about whether state-space searching is a helpful way of thinking about possibility and whether it actually explains anything.
I think I should summarize what I think Eliezer’s thesis is first so that other people can correct me (Eliezer said he is “done explaining” anything to me, so perhaps others who think they understand him well would speak up if they understand his thesis differently).
Thesis: regarding some phenomenon as possible is nothing other than the inner perception a person experiences (and probably also the memory of such a perception in the past) after mentally running something like the search algorithm and determining that the phenomenon is reachable.
Is this substantially correct?
The problem I have with that position is that when we dig a little deeper and ask what it means for a state to be reachable, the answer circularly depends on the concept of possibility, which is what we are supposedly explaining. A reachable state is just a state that it is possible to reach. If you don’t want to talk about reaching, call it transitioning or changing or whatever. The key point is that the algorithm divides the states (or whatever) into two disjoint sets: the ones it’s possible to get to and the ones it is not possible to get to. What distinguishes these two sets except possibility?
You might say that there is no problem, this possibility is again to be explained in terms of a search (or the result of a search), recursively. But you can’t have an infinite regress, so there must be a base case. The base case might be something hardwired into us, but then that would mean that the thing that is hardwired into us is what possibility really is, and that the sensation of possibility that arises from the search algorithm is just something that depends on our primitive notion of possibility. If the base case isn’t something hardwired into us, it is still necessarily something other than the search algorithm, so again, the search algorithm is not what possibility is.
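To make the circularity concrete, here is a minimal sketch (the function name and the toy graph are hypothetical, not from the post under discussion) of the kind of state-space search being described: a breadth-first search that partitions states into those reachable from a start state and those not.

```python
from collections import deque

def reachable_states(start, transitions):
    """Breadth-first search: return the set of states reachable from
    `start` via the given transition relation (state -> list of states)."""
    seen = {start}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# A toy state graph: 'd' and 'e' cannot be reached from 'a'.
graph = {"a": ["b"], "b": ["c"], "d": ["e"]}
print(reachable_states("a", graph))  # prints {'a', 'b', 'c'} (in some order)
```

Note that the code only pushes the question back a level: the `transitions` relation encodes which moves are possible from each state, so a primitive notion of possibility is built into the input of the search rather than explained by the search itself.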
Cyan: I think your quibble misses the point. Eliezer’s directions were to play the rationality taboo game and talk about possibility without using any of the forbidden concepts. His explanation failed that task, regardless of whether either he or Brandon were referring to the map or the territory. (Note: this point is completely unrelated to the specifics of the planning algorithm.)
I’ll summarize my other points later. (But to reiterate the point that Eliezer doesn’t get and pre-empt anybody else telling me yet again that the label is irrelevant, I realize the label is irrelevant, and I am not talking about character strings or labels at all.)
Eliezer, my point was that you dedicated an entire follow-up post to chiding Brandon, in part for using “realizable” in his explanation since it implicitly refers to the same concept as “could”, and that you committed the same mistake in using “reachable”.
Anyway, I guess I misunderstood the purpose of this post. I thought you were trying to give a reductive explanation of possibility without using concepts such as “can”, “could”, and “able”. If I’ve understood you correctly now, that wasn’t the purpose at all: you were just trying to describe what people generally mean by possibility.
Can you talk about “could” without using synonyms like “can” and “possible”? …. Can you describe the corresponding state of the world without “could”, “possible”, “choose”, “free”, “will”, “decide”, “can”, “able”, or “alternative”?
My point being that you set out to explain “could” without “able” and you do it by way of elaboration on a state being “able to be reached”.
What you decide to label the concept does not change the fact that the concept you’ve decided upon is a composite concept that is made up of two more fundamental concepts: reaching (or transitioning to) and possibility.
You’ve provided one sense of possibility (in terms of reachability), but possibility is a more fundamental concept than “possible to be reached”.
I don’t think “reachable” is permissible in the game, since reach-able means able to be reached, or possible to be reached.
Possibility is synonymous with “to be able”, so any term suffixed with -able should be forbidden.
The reachability explanation of possibility is also just one instance of possibility among many. “To be able” (without specifying in what manner) is the general type, and “able to be reached” is one particular (sub-) type of possibility. The more traditional understanding of possibility is “able to exist”, but others have been used too, such as “able to be conceived”, “able to be done”, “able to be caused to exist”, etc.