First of all, I want to acknowledge my belief that Eliezer’s thought experiment is indeed useful, although it is “worse” than hypothetical. This is because it forces us either to face our psychological limitations when it comes to moral intuitions, or to succumb to them (by arguing that the thought experiment is fundamentally unsound, in order to preserve harmony among our contradictory intuitions).
Once we admit that our patchwork’o’rules’o’thumb moral intuitions are indeed contradictory, the question remains whether he is actually right. In another comment I implied that one must either be a utilitarian or strictly amoral (actually I forgot the third option: one can be neither, by being irrational). If this assertion is true then, in my book, Eliezer wins.
3: As I believe 1 to be sound, I’d really like to hear voices about 2. =)
Eisegetes: I admit your fourth option did not even enter my mind. I’ll try (in a rather ad-hoc way) to dispute this on the grounds of computationalism. To be able to impose an order on conflicting options, it must be possible to reduce the combined expected outcomes (pleasure, displeasure, whatever else) into a single scalar value. Even if they are in some way lexically ordered, we can do this by projecting the lexical options onto non-intersecting intervals. Everything that is morally significant does, by virtue of the definition, enter into this calculus. Everything that isn’t, doesn’t.
If you feel this does not apply, please help me by elaborating your objection.
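(To make the interval idea concrete, here is a minimal sketch in Python, assuming each tier of concerns can be graded on a discrete scale; the names and numbers are mine, purely illustrative:)

```python
# Minimal sketch: map a tuple of tier scores (each an integer in
# 0..levels-1, tier 0 the most important) onto one scalar, so that
# comparing scalars reproduces the lexical order. Each tier ends up
# occupying its own non-intersecting interval of the number line.

def scalarize(tiers, levels=10):
    scalar = 0
    for score in tiers:
        assert 0 <= score < levels
        scalar = scalar * levels + score  # positional encoding, like digits
    return scalar

option_a = (9, 1)  # (dominant concern, subordinate concern)
option_b = (8, 9)
assert scalarize(option_a) > scalarize(option_b)  # tier 0 dominates
```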
Eisegetes: This is my third posting now, and I hope I will be forgiven by the powers that be…
Your (a): I was not talking about a universal, but of a personal scalar ordering. Somewhere inside everybody’s brain there must be a mechanism that decides which of the considered options wins the competition for “most moral option of the moment”. Once the existence of this (personal) ordering is acknowledged (rationality), we can either disavow it (amorality) or try our best with what we have [always keeping in mind that the mechanisms at work are imperfect] - including math (utilitarianism).
Your (b): I view morality not as the set of rules postulated by creed X at time T, but as the result of a genetically biased social learning process. Morality is expressed through its influence on every (healthy) individual’s personal utility function.
“The statement that X is wrong can be taken to mean that X has bad consequences according to some metric. It can also mean (or be used to perform the functions of) the following variants:”
(1,2,4,6) X makes me feel bad because it triggers one of my morality circuits.
(3,5) X makes me nervous because [relevant group] might retaliate.
(7) I do not want X to occur.
(8) ? [Sorry, I don’t understand this one.]
Eisegetes (please excuse the delay):
That’s a common utilitarian assumption/axiom, but I’m not sure it’s true. I think for most people, analysis stops at “this action is not wrong,” and potential actions are not ranked much beyond that. [...] Thus, it is simply wrong to say that we have ordered preferences over all of those possible actions—in fact, it would be impossible to have a unique brain state correspond to all possibilities. And remember—we are dealing here not with all possible brain states, but with all possible states of the portion of the brain which involves itself in ethical judgments.
I don’t think so. Even if only a few options (or even just one) are actually entertained, a complete ranking of all of them is implicit in your brain. If I asked you whether table salt was green, you’d surely answer it wasn’t. Where in your brain did you store the information that table salt is not green?
I could make your brain’s implicit ordering of moral options explicit with a simple algorithm:
1. Ask for the most moral option.
2. Exclude it from the set of options.
3. While options left, goto 1.
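(For concreteness, a minimal sketch of that loop in Python; the `ask_most_moral` oracle is my stand-in for whatever mechanism reports the brain’s current top choice:)

```python
def explicit_ranking(options, ask_most_moral):
    """Build an explicit total ordering out of the implicit one by
    repeatedly asking for the best remaining option."""
    remaining = list(options)
    ranking = []
    while remaining:                       # step 3: while options left...
        best = ask_most_moral(remaining)   # step 1: ask for the most moral
        ranking.append(best)
        remaining.remove(best)             # step 2: exclude it
    return ranking

# Toy oracle: this "brain" simply prefers options with shorter names.
print(explicit_ranking(["donate", "lie", "murder"],
                       lambda opts: min(opts, key=len)))
# -> ['lie', 'donate', 'murder']
```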
Interesting, but I think also incomplete. To see why: ask yourself whether it makes sense for someone to ask you, following G.E. Moore, the following question:

“Yes, I understand that X is an action that I am disposed to prefer/regard favorably/etc. for reasons having to do with evolutionary imperatives. Nevertheless, is it right/proper/moral to do X?”
In other words, there may well be evolutionary imperatives that drive us to engage in infidelity, murder, and even rape. Does that make those actions necessarily moral? If not, your account fails to capture a significant amount of the meaning of moral language.

That’s a confusion. I was explicitly talking of “moral” circuits. Not making a distinction between moral and amoral circuits makes “moral” a non-concept. (Maybe it is one, but that’s also beside the point.) The question “is it moral to do X” just makes no sense without this distinction. (Btw., “right/proper” might just be different beasts than “moral”.)
Eisegetes:
Well I (or you?) really maneuvered me into a tight spot here.
About those options, you made a good point.
To the question “Which circuits are moral?”, I kind of saw that one coming. If you allow me to mirror it: How do you know which decisions involve moral judgements?
I don’t know of any satisfying definition of morality. It probably must involve actions that are tailored for neither personal nor inclusive fitness. I suppose the best I can come up with is “A moral action is one which you choose (== that makes you feel good) without being likely to benefit your genes.” Morality is the effect of some adaptation that’s so flexible/plastic that it can be turned against itself. I admit that sounds rather like some kind of accident.
Maybe I should just give up and go back to being a moral nihilist again… there, now! See what you’ve made me believe! =)
ZMD:
C’mon gimme a break, I said it’s not satisfying!
I get your point, but I dare you to come up with a meaningful but unassailable one-line definition of morality yourself!
BTW birth control certainly IS moral, and overeating is just overdoing a beneficial adaptation (i.e. eating).
Eisegetes:
”Moral” is a category of meaning whose content we determine through social negotiations, produced by some combination of each person’s inner shame/disgust/disapproval registers, and the views and attitudes expressed more generally throughout their society.

From a practical POV, without any ambitions to look under the hood, we can just draw this “ordinary language defense line”, as I’d call it. Where it gets interesting from an Evolutionary Psychology POV is exactly those “inner shame/disgust/disapproval registers”. The part about “social negotiations” is just so much noise mixed into the underlying signal.
Unfortunately, as I believe we have shown, there is a circularity trap here: When we try to partition our biases into categories (e.g. “moral” and “amoral”), the partitioning depends on the definition, which depends on the partitioning, etc. etc. ad nauseam. I’ll try a resolution further down.

Oh, I think a large subset of moral choices are moral precisely because they do benefit our genes—we say that someone who is a good parent is moral, not immoral, despite the genetic advantages conferred by being a good parent.
Well, this is where I used to prod people with my personal definition. I’d say that good parenting is just Evolutionary Good Sense (TM), so there’s no need to muddy the water by sticking the label “moral” to it. Ordinary language does, but I think it’s noise (or rather, in this case, a systematic error; more below).

I think some common denominators are altruism (favoring tribe over self, with tribe defined at various scales), virtuous motives, prudence, and compassion. Note that these are all features that relate to our role as social animals—you could say that morality is a conceptual outgrowth of survival strategies that rely on group action (and hence, become a way to avoid collective action problems and other examples of individual rationality that are suboptimal when viewed from the group’s perspective).
I think the ordinary language definition of moral is useless for Evolutionary Psychology and must either be radically redefined in this context or dropped altogether and replaced by something new (with the benefit of avoiding a mixup with the ordinary language sense of the word).
If we take for granted that we are the product of evolutionary processes fed by random variations, we can claim that (to a first approximation) everything about us is there because it furthers its own survival. Specifically, our genetic makeup is the way it is because it tends to produce successful survival machines.
1) Personal egoism exists because it is a useful and simple approximation of gene egoism.
2) For important instances of personal egoism going against gene egoism, we have built-in exceptions (e.g. altruism towards our own children and some other social adaptations).
3) But biasing behaviour using evolutionary adaptation is slow. Therefore it would be useful to provide a survival machine with a mechanism that is able to override personal egoism using culturally transmitted bias. This proclaimed mechanism is at the core of my definition of morality (and, incidentally, a reasonable source of group selection effects).
4) Traditional definitions of morality are flawed because they confuse/conflate 2 and 3 and oppose them to 1. This duality is deeply mistaken, and must be rooted out if we are to make any headway in understanding ourselves.

Btw, the fun thing about 3 is that it not only allows us to overcome personal egoism biases (1) but also inclusive fitness biases (2). So morality is exactly that thing that allows us to laugh in the face of our selfish genes and commit truly altruistic acts.
It is an adaptation to override adaptations.

Regards, Frank
Eliezer, I must admit I really don’t get your problem with definitions. Or, more precisely, I can’t get myself to share it. It seems to me you attack definitions mainly because they enable malignant (and/or confused) arguers to do a bait-and-switch. Without defining what is being talked about, there is no obvious switching anymore, so that seems to be your solution. But to me that is like leaving an important variable unbound, which makes the whole argument underdefined and therefore practically worthless. IMHO it is precisely because two people have a common conception of what they are talking about that they can communicate at all. Definitions help to make important key concepts sharply and clearly—uhm—defined. When someone uses a “definition” which makes little or no practical sense, just go and call ’em on that! When someone does a bait-and-switch, call ’em! But when people argue without defining what they’re arguing about, what are you gonna do?

Apart from that, both “I can define that thing any way I want.” and “It’s in the dictionary.” have a smell of straw-men. If someone goes “I can define that thing any way I want.”, then just insist on the exact same definition when they draw their conclusions—be a djinn! Don’t hold them to what they wish (or think) they had defined, but to what they actually did define, and trample their rickety would-be conclusions to shambles! If someone goes “It’s in the dictionary.”, ah well… find someone else to talk to… =)
Rolf: ,,What do you think of, say, philosophers’ endless arguments of what the word “knowledge” really means?″

I think: meh!

,,This seems to me one example where many philosophers don’t seem to understand that the word doesn’t have any intrinsic meaning apart from how people define it.″

Well, if they like to do so, let ’em. At least they’re off the streets. =) What’s worse is the kind of philosophers who flourish by sidestepping honest debate by complicating matters until nobody (including themselves) can possibly tell a left hand from a right foot anymore, and then go on to declare victory. Definitions belong to their toolset, too. But are we going to argue against knives because the malignant can hurt others with them, and the ignorant or plain unlucky even themselves? We need them to carve the turkey, so if we want turkey slices we’ll just have to operate carefully. I, for one, want to keep my knife!

,,Presumably Eliezer would ask, “for what purpose do we want to answer the question?” However, many philosophers would prefer to unconstructively argue what semantics are “correct”. So my personal experience is that I don’t think Eliezer’s attacking a straw man here.″

He is if he is going to throw out the baby with the bathwater. He’d have to write “Careless/malignant use of definitions is bad.”, not just “Definitions are bad.” (which is my perception).
Ben: I think you’re right, we are on the same page! =) How about “Useful definitions will still be distorted by our mental mechanisms. Malignant and careless definitions are bad no matter what.”?
Hi, am back from the city, and a bit sleepy. I’ll try my best with my comment. =)

Michael: I was not so much commenting on this specific post as on the whole series. Your example seems to me to boil down to a case of bait-and-switch.

Eliezer: ,,When people start violently arguing over their communication signals while they (a) understand what each other are trying to say″

Here the problem is already in full swing, and it’s the same as philosophers arguing about the “real” definition of X. As soon as you have managed to get your point across, any further insistence, or even “violent arguing”, only shows lack of insight or sincerity.

,,and (b) are trying to do an inference that they could theoretically do as single players, something has gone wrong″

I see no problem with inferences as long as it’s clear to everyone what the inference is about (and nobody tries to sneak in a switch later).
Just a small one, because I can’t hold it: You can’t judge the usefulness of a definition without specifying what you want it to be useful for. And now I’m off to bed… =)
Okay, now let’s code those factory objects!

1 bit for blue not red
1 bit for egg not cube
1 bit for furred not smooth
1 bit for flexible not hard
1 bit for opaque not translucent
1 bit for glows not dark
1 bit for vanadium not palladium
Nearly all objects we encounter code either 1111111 or 0000000. So we compress all objects into two categories and define: 1 bit for blegg (1111111), not rube (0000000). But, alas, the compression is not lossless, because there are objects which are neither perfect bleggs nor perfect rubes: A 1111110 object will be innocently accused of containing vanadium, because it is guilty by association with the bleggs, subjected to unfair kin liability! Still, in an environment where our survival depends on how faithfully we can predict unobserved features of those objects, we stand good chances:
Nature: “I have here an x1x1x1x object, what is at its core?” We suspect a blegg and guess vanadium—and with 98% probability we are right, and nature awards us a pizza and beer.

Now the evil supervillain, I-can-define-any-way-I-like-man (Icdawil-man, for short), comes by and says: “I will define my categories thus: 1 bit for regg (0101010), not blube (1010101).” While he will achieve the same compression ratio, he loses about half of the information in the process. He has failed to carve at the joint. So much the worse for Icdawil-man.

Nature: “I have here an x1x1x1x object, what is at its core?” Icdawil-man suspects a regg, guesses palladium, and with 98% probability starts coughing blood...
Next along comes the virtuous and humble I-refuse-to-compress-man:
Nature: “I have here an x1x1x1x object, what is at its core?” Irtc-man refuses to speculate and is awarded a speck in his eye.
Next along comes the brainy I-have-all-probabilities-stored-here-because-I-can-man:
Nature: “I have here an x1x1x1x object, what is at its core?” Ihapshbic-man also gets a pizza and beer, but will sooner be hungry again than we will. That’s because of all the energy he needs for his humongous brain, which comes in an extra handcart.
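(For the scoreboard-minded, a toy simulation of the game. The assumptions are mine, not given above: a 50/50 blegg/rube population where each of the 7 bits deviates from its prototype 2% of the time:)

```python
import random

# Toy simulation: sample noisy bleggs/rubes, condition on nature showing
# us the x1x1x1x pattern, and tally how often each contender's guess
# about the core (bit 7: vanadium vs palladium) comes out right.
random.seed(0)
FLIP = 0.02  # assumed per-bit noise rate

def sample_object():
    proto = random.choice([1, 0])  # 1 = blegg (1111111), 0 = rube (0000000)
    return [b if random.random() > FLIP else 1 - b for b in [proto] * 7]

def shows_x1x1x1x(obj):
    return obj[1] == obj[3] == obj[5] == 1  # the three bits nature reveals

objects = [o for o in (sample_object() for _ in range(200_000))
           if shows_x1x1x1x(o)]
p_vanadium = sum(o[6] for o in objects) / len(objects)

print(f"we / Ihapshbic-man guess vanadium:  right {p_vanadium:.1%} of the time")
print(f"Icdawil-man guesses palladium:      right {1 - p_vanadium:.1%} of the time")
print("Irtc-man refuses to guess:          no pizza, one speck")
```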
Any more contenders? =)
tcpkac: The important caveat is: ‘boundaries around where concentrations of unusually high probability density lie, to the best of our knowledge and belief’. All the imperfections in categorisation in existing languages come from that limitation.
This strikes me as a rather bold statement, but “to the best of our knowledge and belief” might be fuzzy enough to make it true. Some specific factors that distort our language (and consequently our thinking) might be:
Probability shifts in thingspace invalidating previously useful clusterings. Natural languages need time to adapt, and dictionary writers tend to be conservative.
Cognitive biases that distort our perception of thingspace. Very on topic here, I suppose. ^_^
Manipulation (intended and unintended). Humans treat articulations from other humans as evidence. That can go so far that authentic contrary evidence is explained away using confirmation bias.
Other problems in categorisation, [...] do not come from language problems in categorisation, [...] but from different types of cognitive compromise.
Well, lack of consistency in important matters seems to me to be a rather bad sign.
It would also lack words for the surprising but significant improbable phenomenon. Like genius, or albino. Then again, once you get around to saying you will have words for significant low hills of probability, the whole argument blows away.
I don’t think so. Once the most significant hills have been named, we go on and name the next significant hills. We just choose longer names.
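(In coding terms, a small sketch: an efficient language gives a cluster a name roughly -log2(p) symbols long, the Shannon code length, so rarer hills simply get longer names. The clusters and frequencies here are invented:)

```python
import math

# Ideal name length grows as cluster probability shrinks (Shannon code
# length, rounded up). Frequencies below are made up for illustration.
clusters = {"dog": 0.2, "genius": 0.002, "albino": 0.0001}
for word, p in clusters.items():
    print(f"{word:8s} p={p:<8} ideal name length ~ {math.ceil(-math.log2(p))} bits")
```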
[Without having read the comments]
WTF? You say: [...] I was actually advised to post something “fun”, but I’d rather not [...]
I think it was fun!
BTW could we increase the probability of people being honest by basing reward not on individual choices, but on the log-likelihood over a sample of similar choices? (For a given meaning of similar.)
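(A rough sketch of what I mean; the log score is a proper scoring rule, so honesty maximizes the expected reward. All numbers invented:)

```python
import math

def log_score(probs, outcomes):
    """Sum of log p(outcome) over a batch of similar choices; paying out
    on this total rewards honest probability reports."""
    return sum(math.log(p if o else 1 - p) for p, o in zip(probs, outcomes))

stated = [0.9, 0.8, 0.7, 0.6]         # forecaster's stated probabilities
happened = [True, True, False, True]  # what actually occurred
print(log_score(stated, happened))
```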
I think the trouble about “Have you stopped beating your wife?” is that it is not about a state but about a state transition. It asks “10?”, and the answer “no” really leaves three possibilities open (including that the questionee has recently started beating his wife). The sentence structure implies a false choice between answers 10 and 11, because we are used to asking (and answering) yes/no questions about 1-bit issues while here we deal with a 2-bit issue. But you probably knew all that… =)
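(The four states, enumerated:)

```python
# The four (past, now) wife-beating states; the loaded question
# presupposes past=1, so "yes" pins down exactly one state while
# "no" leaves three states open.
for past in (0, 1):
    for now in (0, 1):
        if (past, now) == (1, 0):
            answer = '"yes" fits (did beat, stopped)'
        else:
            answer = '"no" leaves this open'
        print(f"past={past} now={now}: {answer}")
```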
Oh, and the Liar Paradox makes much more sense once we overcome our obsession about recursion: If we take the equally valid stance of viewing it as an iteration, it is easy to see that the whole problem is that the proposition does not converge; that’s all there is to it.
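(As an iteration:)

```python
# Re-evaluating "this sentence is false" just negates the current guess,
# so the truth value oscillates and never converges to a fixed point.
value = True
for step in range(6):
    print(step, value)
    value = not value
```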
James Blair: I’ve read JH’s “On Intelligence” and find him overrated. He happens to be well known, but I have yet to see his results beating other people’s results. Pretty theories are fine with me, but ultimately results must count.
I think the argument is misguided. Why? The choice is not only hypothetical but impossible. There is not the remotest possibility of a googolplex of persons even existing.
So I’ll tone it down to a more realistic “equation”, then I’ll argue that it’s not an equation after all.
Then I’ll admit that I’m lost, but so are you… =)
Let’s assume 1e7 people experiencing pain of a certain intensity for one second vs. one person experiencing equal pain for 1e7 seconds (approx. 116 days).

Let’s assume that every person in question has an expectancy of, say, 63 years of painless life. Then my situation is equivalent to either extending the painless life expectancy of 1e7 people from (63y − 1s) to 63y, or extending it for one person from (63y − 116d) to 63y.
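(A quick unit check on that duration:)

```python
# Converting 1e7 seconds into days and years.
seconds = 1e7
print(seconds / 86_400)             # ~115.7 days
print(seconds / (365.25 * 86_400))  # ~0.317 years
```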
According to the law of diminishing returns, the former is definitely much less valuable than the latter.
But how much so? How to quantify this?
I have no idea, but I claim that neither do you… =)
regards, frank
p.s.
I have a hunch that you couldn’t fit enough people with specks in their eyes into the universe to make up for one 50-year torture.