# Disguised Queries

Imagine that you have a peculiar job in a peculiar factory: Your task is to take objects from a mysterious conveyor belt, and sort the objects into two bins. When you first arrive, Susan the Senior Sorter explains to you that blue egg-shaped objects are called “bleggs” and go in the “blegg bin”, while red cubes are called “rubes” and go in the “rube bin”.

Once you start working, you notice that bleggs and rubes differ in ways besides color and shape. Bleggs have fur on their surface, while rubes are smooth. Bleggs flex slightly to the touch; rubes are hard. Bleggs are opaque; the rube’s surface is slightly translucent.

Soon after you begin working, you encounter a blegg shaded an unusually dark blue—in fact, on closer examination, the color proves to be purple, halfway between red and blue.

Yet wait! Why are you calling this object a “blegg”? A “blegg” was originally defined as blue and egg-shaped—the qualification of blueness appears in the very name “blegg”, in fact. This object is not blue. One of the necessary qualifications is missing; you should call this a “purple egg-shaped object”, not a “blegg”.

But it so happens that, in addition to being purple and egg-shaped, the object is also furred, flexible, and opaque. So when you saw the object, you thought, “Oh, a strangely colored blegg.” It certainly isn’t a rube… right?

Still, you aren’t quite sure what to do next. So you call over Susan the Senior Sorter.

“Oh, yes, it’s a blegg,” Susan says, “you can put it in the blegg bin.”
You start to toss the purple blegg into the blegg bin, but pause for a moment. “Susan,” you say, “how do you know this is a blegg?”
Susan looks at you oddly. “Isn’t it obvious? This object may be purple, but it’s still egg-shaped, furred, flexible, and opaque, like all the other bleggs. You’ve got to expect a few color defects. Or is this one of those philosophical conundrums, like ‘How do you know the world wasn’t created five minutes ago complete with false memories?’ In a philosophical sense I’m not absolutely certain that this is a blegg, but it seems like a good guess.”
“No, I mean...” You pause, searching for words. “Why is there a blegg bin and a rube bin? What’s the difference between bleggs and rubes?”
“Bleggs are blue and egg-shaped, rubes are red and cube-shaped,” Susan says patiently. “You got the standard orientation lecture, right?”
“Why do bleggs and rubes need to be sorted?”
“Er… because otherwise they’d be all mixed up?” says Susan. “Because nobody will pay us to sit around all day and not sort bleggs and rubes?”
“Who originally determined that the first blue egg-shaped object was a ‘blegg’, and how did they determine that?”
Susan shrugs. “I suppose you could just as easily call the red cube-shaped objects ‘bleggs’ and the blue egg-shaped objects ‘rubes’, but it seems easier to remember this way.”
You think for a moment. “Suppose a completely mixed-up object came off the conveyor. Like, an orange sphere-shaped furred translucent object with writhing green tentacles. How could I tell whether it was a blegg or a rube?”
“Wow, no one’s ever found an object that mixed up,” says Susan, “but I guess we’d take it to the sorting scanner.”
“How does the sorting scanner work?” you inquire. “X-rays? Magnetic resonance imaging? Fast neutron transmission spectroscopy?”
“I’m told it works by Bayes’s Rule, but I don’t quite understand how,” says Susan. “I like to say it, though. Bayes Bayes Bayes Bayes Bayes.”
“What does the sorting scanner tell you?”
“It tells you whether to put the object into the blegg bin or the rube bin. That’s why it’s called a sorting scanner.”
At this point you fall silent.
“Incidentally,” Susan says casually, “it may interest you to know that bleggs contain small nuggets of vanadium ore, and rubes contain shreds of palladium, both of which are useful industrially.”
“Susan, you are pure evil.”
“Thank you.”

So now it seems we’ve discovered the heart and essence of bleggness: a blegg is an object that contains a nugget of vanadium ore. Surface characteristics, like blue color and furredness, do not determine whether an object is a blegg; surface characteristics only matter because they help you infer whether an object is a blegg, that is, whether the object contains vanadium.

Containing vanadium is a necessary and sufficient definition: all bleggs contain vanadium and everything that contains vanadium is a blegg: “blegg” is just a shorthand way of saying “vanadium-containing object.” Right?

Not so fast, says Susan: Around 98% of bleggs contain vanadium, but 2% contain palladium instead. To be precise (Susan continues) around 98% of blue egg-shaped furred flexible opaque objects contain vanadium. For unusual bleggs, it may be a different percentage: 95% of purple bleggs contain vanadium, 92% of hard bleggs contain vanadium, etc.
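Susan’s percentages can be read as conditional probabilities, and the sorting scanner’s “Bayes’s Rule” can be sketched directly. This is a hypothetical illustration only: the 98% figure comes from Susan, but the 50% prior and the exact likelihood split are assumptions introduced for the sake of the example.

```python
# A toy sketch of Bayes's Rule as the sorting scanner might apply it.
# Only the 98% figure comes from Susan; the prior is an assumption.

def posterior_vanadium(p_prior, p_feat_given_v, p_feat_given_pd):
    """P(vanadium | features), by Bayes's Rule."""
    numerator = p_feat_given_v * p_prior
    denominator = numerator + p_feat_given_pd * (1 - p_prior)
    return numerator / denominator

# Suppose half of all objects contain vanadium, and the ordinary blegg
# feature bundle (blue, egg-shaped, furred, flexible, opaque) is 49
# times as likely to appear on a vanadium object as on a palladium one:
p = posterior_vanadium(0.5, 0.98, 0.02)
print(round(p, 2))  # -> 0.98, matching Susan's "around 98% of bleggs"
```

Different feature bundles (purple, hard, etc.) would plug in different likelihood ratios, reproducing Susan’s 95% and 92% figures.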

Now suppose you find a blue egg-shaped furred flexible opaque object, an ordinary blegg in every visible way, and just for kicks you take it to the sorting scanner, and the scanner says “palladium”—this is one of the rare 2%. Is it a blegg?

At first you might answer that, since you intend to throw this object in the rube bin, you might as well call it a “rube”. However, it turns out that almost all bleggs, if you switch off the lights, glow faintly in the dark; while almost all rubes do not glow in the dark. And the percentage of bleggs that glow in the dark is not significantly different for blue egg-shaped furred flexible opaque objects that contain palladium, instead of vanadium. Thus, if you want to guess whether the object glows like a blegg, or remains dark like a rube, you should guess that it glows like a blegg.

So is the object really a blegg or a rube?

On one hand, you’ll throw the object in the rube bin no matter what else you learn. On the other hand, if there are any unknown characteristics of the object you need to infer, you’ll infer them as if the object were a blegg, not a rube—group it into the similarity cluster of blue egg-shaped furred flexible opaque things, and not the similarity cluster of red cube-shaped smooth hard translucent things.
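The two-queries point can be put in code. A minimal sketch, with hypothetical glow rates (the text says only “almost all”); the cluster statistics and function names here are assumptions for illustration:

```python
# Two different queries hiding behind "is it a blegg?", sketched with
# partly made-up cluster statistics (0.98 vanadium is from the text;
# the glow rates are assumed, since the text only says "almost all").
CLUSTERS = {
    "blegg": {"vanadium": 0.98, "glows": 0.97},
    "rube":  {"vanadium": 0.02, "glows": 0.03},
}

def bin_for(scanner_reading):
    """Query 1: which bin? Decided by the metal reading alone."""
    return "blegg bin" if scanner_reading == "vanadium" else "rube bin"

def expect_glow(feature_cluster):
    """Query 2: will it glow in the dark? Decided by the similarity
    cluster, since glow rates barely differ between the vanadium
    members and the rare palladium members of the blegg cluster."""
    return CLUSTERS[feature_cluster]["glows"] > 0.5

# The ordinary-looking blegg whose scanner reading came back "palladium":
print(bin_for("palladium"))   # rube bin: it goes with the rubes
print(expect_glow("blegg"))   # True: but you still bet it glows
```

The point of the sketch is just that neither answer is what the object “really is”; each query consults different evidence.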

The question “Is this object a blegg?” may stand in for different queries on different occasions.

If it weren’t standing in for some query, you’d have no reason to care.

Is atheism a “religion”? Is transhumanism a “cult”? People who argue that atheism is a religion “because it states beliefs about God” are really trying to argue (I think) that the reasoning methods used in atheism are on a par with the reasoning methods used in religion, or that atheism is no safer than religion in terms of the probability of causally engendering violence, etc… What’s really at stake is an atheist’s claim of substantial difference and superiority relative to religion, which the religious person is trying to reject by denying the difference rather than the superiority(!)

But that’s not the a priori irrational part: The a priori irrational part is where, in the course of the argument, someone pulls out a dictionary and looks up the definition of “atheism” or “religion”. (And yes, it’s just as silly whether an atheist or religionist does it.) How could a dictionary possibly decide whether an empirical cluster of atheists is really substantially different from an empirical cluster of theologians? How can reality vary with the meaning of a word? The points in thingspace don’t move around when we redraw a boundary.

But people often don’t realize that their argument about where to draw a definitional boundary is really a dispute over whether to infer a characteristic shared by most things inside an empirical cluster...

Hence the phrase, “disguised query”.

• While the advisory against using a dictionary to resolve such arguments is true, a lot of arguments stem from confusion or disagreement over the meaning of words. Based on the work I’ve done in philosophy, this type of disagreement probably covers 50% of philosophical debates, with about 2% of the participants in such debates admitting that that is what they disagree about.

For example, “Most atheists believe in the divinity of Christ” could be resolved easily without recourse to the empirical world. If I believe that it is possible for someone to be an atheist and believe in the divinity of Christ, then I am using “atheist” to mean something very different from its actual meaning.

As you wrote earlier, using words invokes connotations regardless of whether a newly assigned definition merits the same connotations. Some on the far left have defined “racism” to mean “is White and lives in the USA.” Appealing to a dictionary is useful in an argument with such a person because it prevents them from using a very charged word inappropriately. Similar tricks occur with “fascism,” “freedom,” “democracy,” and many other such words.

Basically, a dictionary doesn’t decide if an empirical cluster has a certain property, but it does ensure that the word you are using matches the empirical cluster you are referring to. It is irrational to try to prove an empirical fact with a definition. It is not at all irrational if there is disagreement over what group is picked out by the word, or over whether the group picked out by the word must (or must not) have a certain property in order for the word to pick them out. More disagreements center on poorly understood definitions than most people would like to admit.

On a related note, this recent series on definitions is quite brilliantly written, Eliezer, even more so than usual.

• In college, I led a book discussion group about ethics. Most participants had read the book.

Everyone in the group agreed that ethics and morals were different.

They even agreed on HOW they were different (internal/personal vs group/societal, arrived at vs prescribed, philosophical vs legal).

They REFUSED to agree, however, on what term referred to which distinction.

Sigh...

• This was a really clarifying post for me. I had gotten to the point of noticing that “What is X?” debates were really just debates over the definition of X, but I hadn’t yet taken the next step of asking why people care about how X is defined.

I think another great example of a disguised query is the recurring debate, “Is this art?” People have really widely varying definitions of “art” (e.g., some people’s definition includes “aesthetically interesting,” other people’s definition merely requires “conceptually interesting”) -- and in one sense, once both parties explain how they use the word “art,” the debate should resolve pretty quickly.

But of course, since it’s a disguised query, the question “Is this art?” should really be followed up with the question “Why does it matter?” As far as I can tell, the disguised query in this case is usually “does this deserve to be taken seriously?” which can be translated in practice into, “Is this the sort of thing that deserves to be exhibited in a gallery?” And that’s certainly a real, non-semantic debate. But we can have that debate without ever needing to decide whether to apply the label “art” to something—in fact, I think the debate would be much clearer if we left the word “art” out of it altogether.

I’ve elaborated on this topic on Rationally Speaking: http://rationallyspeaking.blogspot.com/2010/03/is-this-art-and-why-thats-wrong.html …and I cite this LW post. Thanks, Eliezer.

• People who argue that atheism is a religion “because it states beliefs about God” are really trying to argue (I think) that the reasoning methods used in atheism are on a par with the reasoning methods used in religion, or that atheism is no safer than religion in terms of the probability of causally engendering violence, etc...

Or they’re applying a Fully General Counterargument without actually trying to make any substantive point, or realizing that they should be?

• Indeed.
For example:

Eliezer: Religion sucks, because of this and Bayes...
Jesus: Ah, not so fast, chap. You see, atheism is also a religion, because of this and that...

I think that Jesus’ response is a non sequitur (a well designed one, using a technique similar to equivocation, which is why it makes for such a good “blocking” technique). So there’s no disguised query, since Jesus isn’t querying at all; he’s just trying to “win” the argument.

• There is a similarity between Christians and many atheists in their moral philosophy, however. Atheists may not believe in God, but I think they mostly adhere to the 10 commandments.

At least Christians can say they follow their moral philosophy because God told them so. What reason do atheists have?

• Atheists may not believe in God, but I think they mostly adhere to the 10 commandments.

I think you’re just trying to say that atheists follow moral expectations of modern Christian-influenced culture, but taken literally, the statement’s nonsense.

I mean, look at the Ten Commandments:

1. Thou shalt have no other gods before me.

2. Thou shalt not make unto thee any graven image (...).

3. Thou shalt not take the name of the Lord thy God in vain (...).

4. Remember the sabbath day, to keep it holy. (...)

5. Honour thy father and thy mother (...).

6. Thou shalt not kill.

7. Thou shalt not commit adultery.

8. Thou shalt not steal.

9. Thou shalt not bear false witness against thy neighbour.

10. Thou shalt not covet thy neighbour’s house, (...) nor his ass, nor any thing that is thy neighbour’s.

The first 4 are blatantly ignored, 6 is famously problematic, 9 and 10 are mostly ignored (via gossip, status seeking, greed and so on), and finally 7 and 8 might be typically obeyed, but minor theft (especially anonymous theft) is common and adultery has at least 10% base rates.

How is this “mostly adhered to”? (Obviously, Christians and atheists don’t really differ in their behavior here.)

• I’ll have to concede that atheists’ moral beliefs don’t mostly adhere to the 10 commandments.

The point I wished to make was that many of the moral philosophies of rationalists are very similar to their Christian counterparts. I believe the similarity is mostly due to the culture they were brought up in rather than whether they believe God exists or not. You might even consider God to be irrelevant to the issue.

• I certainly agree that many people’s moral beliefs are shaped and constrained by their culture, and that God is irrelevant to this, as is belief in God.

• Agreed. Obligatory Moldbug link (warning: long, and only the first in a series) for an interesting derivation of (some) modern morality as atheistic Christianity.

• In the interests of charitable reading, I took them to mean “atheists adhere to the ten commandments about as well as Christians do”.

• In the interests of charitable reading, I took them to mean “atheists adhere to the ten commandments about as well as Christians do”.

I looked through them and I was surprised at how little I break them. 4 is way off, of course, and I’ll honour my father and mother to the extent they damn well earn it (rather a lot, as it turns out). The thing is, going by the standards that I actually held for following all those commandments when I was Christian, I could have expected to be violating all over the place. I’m particularly disappointed with No. 7. I’ve been making a damn fine effort to be living in sin as much as conveniently possible, but since I have yet to sleep with a married woman I seem to be clean on that one. Going by the actual commandment I’m probably even OK with 3. The “swearing” thing seems to be totally blown out of proportion.

• Personally I break some of them more often than I’d like, but then again I did so when I identified as an Orthodox Jew as well.

Of course, if I were to take this seriously, I’d get bogged down in definitional issues pretty quickly. For example, I’ve slept with a married man (married to someone else, I mean), so I guess I’ve violated #7… or at least, he did. OTOH, given that everyone involved was aware of the situation and OK with it, I don’t consider that any of us were doing anything wrong in the process.

But a certain kind of religious person would say that my beliefs about what’s right and wrong don’t matter. Of course, I would disagree.

• Personally I break some of them more often than I’d like

It’s 6, isn’t it! (Dexter has that problem too—I recommend following his example and at least channelling it into vigilantism.)

• Well, I kill all the time… most people I know do.

But if we adopt the conventional practice of translating “lo tirtzoch” as “don’t murder”, and further adopt the conventional practice of not labeling killings we’re morally OK with as “murder”, then I squeak by here as well… I’m basically OK with all the killing I’ve done.

I’ve never actually watched Dexter, but I gather it’s about someone compelled to murder people who chooses to murder only people where the world is improved by their death? Hrm. I’m not sure I agree.

Certainly, if I’m going to murder someone, it should be the least valuable person I can find. Which might turn out to be myself. The question for me is how reliable my judgment is on the matter. If I’m not a reliable judge, I should recuse myself from judgment.

Perhaps I should assemble a committee to decide on my victims.

• But if we adopt the conventional practice of translating “lo tirtzoch” as “don’t murder”, and further adopt the conventional practice of not labeling killings we’re morally OK with as “murder”, then I squeak by here as well…

I think the general idea is that by “murder” the concept of ‘do not kill people without it being prescribed by the law’ is meant—with the rest of Mosaic law indicating in which cases it was okay to kill people nonetheless.

So killing insects doesn’t count (because they’re not people), nor does being a state executioner (because it’s prescribed by the law).

• Yeah, you’re right. I was being snarky in the general direction of my Yeshiva upbringing, at the expense of accuracy.

• I’ve never actually watched Dexter, but I gather it’s about someone compelled to murder people who chooses to murder only people where the world is improved by their death?

Slightly more specific and slightly less consequentialistic than that. He chooses to kill only other murderers, and usually only cold-blooded murderers who are unrepentant and likely to murder again (example: one time he stopped when he realized his selected victim had only murdered the person that had raped him in prison).

But it’s not about improving the world, really; sometimes he even sabotages the police investigation just so he can have these people to himself.

• I’ve never actually watched Dexter, but I gather it’s about someone compelled to murder people who chooses to murder only people where the world is improved by their death?

In what I’ve seen of Dexter the most ethically grey kill was of a pedophile who was stalking his step-daughter (and that’s a murder I’d be comfortable committing!). The rest were all murderers who were highly likely to kill again.

For my part I would prefer to live in a world in which other people don’t go around being vigilantes, and also don’t want to be a vigilante myself. Because frankly it isn’t my problem and it isn’t worth the risk or the effort it would take me.

• But if we adopt the conventional practice of translating “lo tirtzoch” as “don’t murder”, and further adopt the conventional practice of not labeling killings we’re morally OK with as “murder”

That doesn’t sound like a convention that quite fits with the culture or spirit of the holy law in question, or of the culture which would create such a law.

• That doesn’t sound like a convention that quite fits with the culture or spirit of the holy law in question, or of the culture which would create such a law.

Huh? The Israelites were for killing people during wartime, and the various cultures that interpreted that law all bent it to exclude the deaths they wanted to cause.

• Huh? The Israelites were for killing people during wartime, and the various cultures that interpreted that law all bent it to exclude the deaths they wanted to cause.

Oh, of course you take into account what the Israelites considered murder, and whatever meaning they would have embedded into whatever word it was that is translated as murder or kill. But what we cannot reasonably do is plug in our own moral values around killing, being as we are a culture of immoral infidels by the standards of the law in question! (Gentiles too, come to think of it.) What we consider moral killings is damn near irrelevant.

• But what we cannot reasonably do is plug in our moral values around killing.

It’s not clear to me what you mean here. I took TheOtherDave to be interpreting “lo tirtzoch” as “socially disapproved killing is socially disapproved,” which is vacuous on purpose. That is, a culture that would create such a law is a culture of homo hypocritus.

To put it another way, the convention of how you interpret a law is more important than the written content of the law, and so the relevant question is if the Israelites saw “lo tirtzoch” as absolutely opposed to killing or not. (I would imagine not, as there were several crimes which mandated that the community collectively kill the person who committed the crime!)

• To put it another way, the convention of how you interpret a law is more important than the written content of the law, and so the relevant question is if the Israelites saw “lo tirtzoch” as absolutely opposed to killing or not. (I would imagine not, as there were several crimes which mandated that the community collectively kill the person who committed the crime!)

I thought that was about what I said.

• I got the opposite impression from two sources:

First, I saw the culture and spirit of the drafters of such a law to be self-serving / relativist / hypocritical, and so thought the convention was the embodiment of that. Your claim that the convention didn’t fit with the culture suggested to me that you thought the Israelites saw the law as unchanging and unbendable.

Second, the comment that claimed what we consider moral was irrelevant struck me as evidence for the previous suggestion: that there is a moral standard set at one time and not changing, rather than us modeling the Israelites’ example by bending the definitions to suit our purposes.

It’s plausible we agree except are using different definitions for things like culture and spirit, but also plausible we don’t agree on key ideas here.

• (nods) As noted elsewhere, you’re of course right. I was being snarky in the general direction of my Yeshiva upbringing, at the expense of accuracy. Bad Dave. No biscuit.

• I suppose you do technically scrape through in adhering to No. 7 as it is presented in that Wikipedia passage, based on two technicalities: that it is only adultery if you sleep with a married woman, and that being the partner of the adulterer doesn’t qualify. (I’m a little skeptical of that passage, actually.) Come to think of it, you may get a reprieve for a third exception if it is the case that the other guy was married to a guy (ambiguous).

• The guy in question was married to a woman at the time.

• Upvoted for the 10th commandment link.

• You have that backwards.

Moral people follow their moral philosophy because they believe it’s the right thing to do, whether they are Christian or atheist or neither.

Some moral people also believe God has told them to do certain things, and use those beliefs to help them select a moral philosophy. Those people are moral and religious.
Other moral people don’t believe that, and select a moral philosophy without the aid of that belief. Those people are moral and atheist.

Some immoral people believe that God has told them to do certain things. Those people are immoral and religious.
Some immoral people don’t believe that. Those people are immoral and atheist.

Incidentally, I know no atheists (whether moral or not) who adhere to the Talmudic version of the first commandment. But then, since you are talking about the ten commandments in a Christian rather than Jewish context, I suppose you don’t subscribe to the Talmudic version anyway. (cf. http://en.wikipedia.org/wiki/Ten_Commandments#Two_texts_with_numbering_schemes)

EDIT: I should probably also say explicitly that I don’t mean to assert here that nobody follows the ten commandments simply because they believe God told them to… perhaps some people do. But someone who doesn’t think the ten commandments are the right thing to do and does them anyway simply because God told them to is not a moral person, but rather a devout or God-fearing person. (e.g., Abraham setting out to sacrifice his son).

• Incidentally, I know no atheists (whether moral or not) who adhere to the Talmudic version of the first commandment.

I also know no atheists who adhere to the second commandment (make no graven image), the fourth (no “work” on Shabbath), or the tenth (do not covet).

• Moral people follow their moral philosophy because they believe it’s the right thing to do, whether they are Christian or atheist or neither.

My point is that Christians believe their moral philosophy is correct because God told them so. Atheists don’t have such an authority to rely on.

So what rational justification can an atheist provide for his moral philosophy? There is no justification because there is no way to determine the validity of any justification they may provide.

There is no rational foundation for moral beliefs because they are arbitrarily invented. They are built on blind faith.

• Some

I agree that religion isn’t the source of morality. In my experience, atheists believe in good and evil just as much as religious people do.

required

To believe you can somehow make the world objectively better, even in a small way, you must still believe in some sort of objective good or evil. My position is the sacrilegious idea that there is no objective good or evil—that the universe is stuff bouncing and jumping around in accordance with the laws of nature. Crazy, I know.

There is a difference between the universe itself and our interpretations of the universe. A moral is a judgement about the universe mistaken for an inherent property of the universe.

In order to establish that something is better than or superior to something else, we must have some criteria to compare them by. The problem with objective good and evil, if you believe they exist, is that there is no way to establish the correct criteria.

A lion’s inclination to kill antelope isn’t inherently wrong. The inclination is simply the lion’s individual nature. That you care about the antelope’s suffering doesn’t mean the lion should. The lion isn’t wrong if it doesn’t care.

We are all individuals with different wants and desires. To believe there is a one-size-fits-all moral code that all living creatures should follow is lunacy.

• My position is the sacrilegious idea that there is no objective good or evil—that the universe is stuff bouncing and jumping around in accordance with the laws of nature. Crazy, I know.

That is a position shared by 13% of LW survey respondents.

• “Because God said so” is hardly a rational justification either.

• Direct counterargument: I would phrase my attitude to ethics as: “I have decided that I want X to happen as much as possible, and Y to happen as little as possible.” I’m not “believing” anything—just stating goals. So there’s no faith required.

Reflective counterargument: But even if God did say so*, why should we obey Him? There are a number of answers, some based on prior moral concepts (gratitude for Creation, fear of Hell, etc.) and some on a new one (variations on “God is God and therefore has moral authority”), but they all just push the issue of your ultimate basis for morality back a step. They don’t solve the problem, or even simplify it.

*Incidentally, what does it mean for an all-powerful being to say something? The Abrahamic God is the cause of literally everything, so aren’t all instructions written or spoken anywhere by anyone equally “the speech of God”?

• Direct counterargument: I would phrase my attitude to ethics as: “I have decided that I want X to happen as much as possible, and Y to happen as little as possible.” I’m not “believing” anything—just stating goals. So there’s no faith required.

I’d agree. By switching from morals to your individual preferences, you avoid the need to identify what is objectively good and evil.

• So, let’s look at a specific instance, just to be clear on what we’re saying.

Suppose I believe that it’s bad for people to suffer, and it’s good for people to live fulfilled and happy lives.

I would say that’s a moral belief, in that it’s a belief about what’s good and what’s bad. Would you agree?

Suppose further that, when I look into how I arrived at that belief, I conclude that I derived it from the fact that I enjoy living a fulfilled and happy life, and that I anti-enjoy suffering, and that my experiences with other people have led me to believe that they are similar to me in that respect.

Would you say that my belief that it’s bad for people to suffer is arbitrarily invented and built on blind faith?

And if so: what follows from that, to your way of thinking?

• I would say that’s a moral be­lief, in that it’s a be­lief about what’s good and what’s bad. Would you agree?

I would.

Would you say that my be­lief that it’s bad for peo­ple to suffer is ar­bi­trar­ily in­vented and built on blind faith?

Yes, be­cause you’re us­ing a ra­tio­nal­iza­tion to jus­tify how you be­lieve the world should be. And no ra­tio­nal­iza­tion for a moral is more valid than any other.

You could equally say that you think other peo­ple should work and suffer so that your life is fulfilled and happy. How do we de­ter­mine whether that moral be­lief is more cor­rect than the idea that you should pre­vent other peo­ple’s suffer­ings? The an­swer is that we can­not.

Ob­vi­ously, we can be­lieve in what­ever moral philos­o­phy we like, but we must ac­cept there is no ra­tio­nal ba­sis for them, be­cause there is no way to de­ter­mine the val­idity of any ra­tio­nal ex­pla­na­tion we make. There is no cor­rect moral­ity.

In my opinion, a person’s particular moral beliefs usually have more to do with the beliefs of their parents and the culture they were brought up in than with any process of reasoning. If they had been brought up in a different culture, they’d have a different moral philosophy, for which they would give similar rational justifications.

• A few things:

• Can you clar­ify what ra­tio­nal­iza­tion you think I’m us­ing, ex­actly? For that mat­ter, can you clar­ify what ex­actly I’m do­ing that you la­bel “jus­tify­ing” my be­liefs? It seems to me all I’ve done so far is de­scribe what my be­liefs are, and spec­u­late on how they got that way. Nei­ther of which, it seems to me, re­quire any sort of faith (in­clud­ing but not limited to blind faith, what­ever that is).

• Leav­ing that aside, and ac­cept­ing for the sake of dis­cus­sion that “us­ing a ra­tio­nal­iza­tion to jus­tify how I be­lieve the world should be” is a le­gi­t­i­mate de­scrip­tion of what I’m do­ing… is there some­thing else you think I ought to be do­ing in­stead? Why?

• I agree with you that fam­ily and cul­tural in­fluence have a lot to do with moral be­liefs (in­clud­ing mine).

• Can you clar­ify what ra­tio­nal­iza­tion you think I’m us­ing, ex­actly? For that mat­ter, can you clar­ify what ex­actly I’m do­ing that you la­bel “jus­tify­ing” my be­liefs?

You said “Sup­pose I be­lieve that it’s bad for peo­ple to suffer”. I’d say that’s a moral be­lief. The ra­tio­nal jus­tifi­ca­tion you pro­vided for that be­lief was that “I de­rived it from the fact that I en­joy liv­ing a fulfilled and happy life, and that I anti-en­joy suffer­ing, and that my ex­pe­riences with other peo­ple have led me to be­lieve that they are similar to me in that re­spect”.

is there some­thing else you think I ought to be do­ing in­stead?

Not re­ally. The main point I’m mak­ing is that there is no way to de­ter­mine whether any moral is valid.

One could ar­gue that moral­ity dis­torts one’s view of the uni­verse and that do­ing away with it gives you a clearer idea of how the uni­verse ac­tu­ally is be­cause you’re no longer con­stantly con­sid­er­ing how it should be.

For ex­am­ple, you might think that your com­puter should work the way you want and ex­pect, so when it crashes you might an­grily con­sider your­self the vic­tim of a di­a­bol­i­cal com­puter and throw it out of your win­dow. The moral be­lief has dis­torted the situ­a­tion.

Without that moral be­lief, one would sim­ply ac­cept the com­puter’s un­wanted and un­ex­pected be­hav­ior and calmly con­sider pos­si­ble ac­tions to get the be­hav­ior one wants. There is no sense of be­ing cheated by a cruel uni­verse.

• OK, thanks for clar­ify­ing.

For what it’s worth, I agree with you that “it’s bad for peo­ple to suffer” is a moral be­lief, but I dis­agree that “I de­rived it from...” is any sort of jus­tifi­ca­tion for a moral be­lief, in­clud­ing a ra­tio­nal one. It’s sim­ply a spec­u­la­tion about how I came to hold that be­lief.

I agree that there’s no way to de­ter­mine whether a moral be­lief is “valid” in the sense that I think you’re us­ing that word.

I agree that it’s pos­si­ble to hold a be­lief (in­clud­ing a moral be­lief) in such a way that it in­hibits my abil­ity to per­ceive the uni­verse as it ac­tu­ally is. It’s also pos­si­ble to hold a be­lief in such a way that it in­hibits my abil­ity to achieve my goals.
I agree that one ex­am­ple of that might be if I held a moral be­lief about how my com­puter should work in such a way that when my com­puter fails to work as I think it should, I throw it out the win­dow.
Another ex­am­ple might be if I held the be­lief that pour­ing lemon­ade into the key­board will im­prove its perfor­mance. That’s not at all a moral be­lief, but it nev­er­the­less in­terferes with my abil­ity to achieve my goals.

Would you say that if I choose to simply accept that my computer behaves the way it does, and I calmly consider possible actions to get the behavior I want, and I don’t have the sense that I’m being cheated by a cruel universe, it follows from all of that that I have no relevant moral beliefs about the situation?

• Would you say that if I choose to simply accept that my computer behaves the way it does, and I calmly consider possible actions to get the behavior I want, and I don’t have the sense that I’m being cheated by a cruel universe, it follows from all of that that I have no relevant moral beliefs about the situation?

I’d say so, yes.

• OK. Given that, I’m pretty sure I’ve un­der­stood you; thanks for clar­ify­ing.

For my own part, it seems to me that when I do that, my be­hav­ior is in large part mo­ti­vated by the be­lief that it’s good to avoid strong emo­tional re­sponses to events, which is just as much a moral be­lief as any other.

• For my own part, it seems to me that when I do that, my be­hav­ior is in large part mo­ti­vated by the be­lief that it’s good to avoid strong emo­tional re­sponses to events, which is just as much a moral be­lief as any other.

There are situations where emotions need to be temporarily suppressed; it needn’t involve a moral belief. Getting angry could simply be unhelpful at that moment, so you suppress it. To do so, you don’t need to believe that it’s inherently wrong to express strong emotions.

That par­tic­u­lar moral would come with its dis­ad­van­tages. If some­one close to you dies, it is healthier to ex­press your sor­row than avoid it. Some peo­ple don’t change their be­hav­ior un­less you ex­press anger.

Many think that morality is necessary to control the evil impulses of humans, as if its removal would mean we’d all suddenly start randomly killing each other. Far from saving us from suffering, I’m inclined to think moral beliefs have actually caused much suffering: for example, the beliefs that some religion is evil, that some political ideology is evil, that some ethnic group is evil.

• We seem to be largely talk­ing past each other.

I agree with you that there are situ­a­tions where sup­press­ing emo­tions is a use­ful way of achiev­ing some other goal, and that choos­ing to sup­press emo­tions in those situ­a­tions doesn’t re­quire be­liev­ing that there’s any­thing wrong with ex­press­ing strong emo­tions, and that choos­ing to sup­press emo­tions in those situ­a­tions with­out such a be­lief doesn’t re­quire any par­tic­u­lar moral be­lief.

I agree with you that the be­lief that ex­press­ing strong emo­tions is wrong has dis­ad­van­tages.

I agree with you that many peo­ple have con­fused be­liefs about moral­ity.

I agree with you that much suffer­ing has been caused by moral be­liefs, some more so than oth­ers.

• How do peo­ple use the karma sys­tem here? If you agree vote up, if you dis­agree vote down? That will cre­ate a very in­su­lar com­mu­nity.

My five cents.

• The typ­i­cal ad­vice is “if you want to see more like this, vote up; if you want to see less like this, vote down.” Users try to down­vote for faulty premises or logic rather than con­clu­sions they dis­agree with.

For short posts, where claims are made with­out much jus­tifi­ca­tion, there tends to be lit­tle be­sides a con­clu­sion. Those com­ments will get voted down if they seem wrong or to not add much to the con­ver­sa­tion. (I’ve had sev­eral off­hand re­marks, for which I had solid, non-ob­vi­ous jus­tifi­ca­tion, voted down, but then in re­sponses I made up the karma by ex­plain­ing my­self fully. I sus­pect that if I had ex­plained my­self fully at the start, I wouldn’t have got­ten down­voted.)

• Well, for myself, it’s because game theory says the world works better when people aren’t dicks to one another, and because empathy (intuitive and rational) allows me to put myself in other people’s shoes, and to appreciate that it’s good to try to help them when I can, since they’re very much like myself. I have desires and goals, and so do they, and mine aren’t particularly more important simply because they’re mine.

• I have de­sires and goals, and so do they, and mine aren’t par­tic­u­larly more im­por­tant sim­ply be­cause they’re mine.

This is the base of my whole moral philos­o­phy, too. And you know what? There are peo­ple who ac­tu­ally dis­agree with it! Re­sponses I’ve got­ten from peo­ple in dis­cus­sions have ranged from “I don’t give a shit about other peo­ple, they’re not me” to “you can’t think like that, you need to think self­ishly, be­cause oth­er­wise ev­ery­one will tram­ple on you.”

• Athe­ists may not be­lieve in God, but I think they mostly ad­here to the 10 com­mand­ments.

Nit­pick: Only half of the Ten Com­mand­ments are nice hu­man­i­tar­ian com­mand­ments like “don’t mur­der”. The other half are all about how hu­mans should in­ter­act with God, and I don’t think most athe­ists put much weight be­hind “you will not make for your­self any statue or any pic­ture of the sky above or the earth be­low or the wa­ter that is be­neath the earth”.

At least Chris­ti­ans can say they fol­low their moral philos­o­phy be­cause God told them so.

They can say that, but un­less they already have a moral philos­o­phy that gives God moral au­thor­ity (or states that Hell is to be avoided, or jus­tifies grat­i­tude for Creation, or...) that’s not ac­tu­ally a rea­son.

• Chris­ti­ans allegedly fol­low the com­mand­ments be­cause God told them to. They do what God told them to be­cause of de­sire to avoid pun­ish­ment, de­sire to ob­tain re­ward, de­sire to fulfill their per­ceived duty, or de­sire to ex­press their love. They fulfill these de­sires be­cause it makes them feel good/​happy.

Atheists do whatever they do, most of them for the same reasons, minus the idea of it all being centered around a personality who affects their happiness.

Harry said he preferred achiev­ing things over hap­piness, but I can’t help think­ing that if he had sac­ri­ficed his po­ten­tial, he wouldn’t re­ally have been happy about it, no mat­ter how many friends he had.

At the end of the day, happiness drives at least most people, and in theory, all (when they make their decisions through careful consideration, and not just to fulfill some role or habit; as we know, this is rare, and in reality most people cannot trace their decisions’ motivation to their own happiness or anyone else’s, or to any other consistent value; so I opine).

• At the end of the day, hap­piness drives at least most peo­ple, and in the­ory, all

That sounds like a hid­den tau­tol­ogy-by-defi­ni­tion. What is hap­piness? That which peo­ple act to ob­tain. Why do peo­ple act? To ob­tain hap­piness. What­ever some­one does, you can say af­ter the fact that they did it to make them­selves happy.

• What is hap­piness?

It is a state of mind. So say­ing that some­one is driven by hap­piness is not tau­tolog­i­cal—it means that they have a per­cep­tu­ally de­ter­mined util­ity func­tion.

• I think Plas­tic’s got it.

I don’t think hap­piness is defined as what­ever peo­ple act to ob­tain. It’s some­thing most peo­ple fail at with some reg­u­lar­ity.

I mean, just look at Elsa, yah?

• I mean, just look at Elsa, yah?

Er, Elsa? Um, what?

• Pre­cisely!

Full of no­ble de­sires, and of self-de­struc­tive means to achieve them.

Her efforts for happiness are wonderfully demonstrative of the failure endemic to such efforts conceived in ignorance.

• What rea­son do athe­ists have?

Maybe be­cause they have de­cided that a spe­cific moral philos­o­phy would be most use­ful?

• I was ac­tu­ally just try­ing to say that Eliezer gave a bad ex­am­ple of a dis­guised query.

As for moral philosophy, it can be considered a science. So atheists who believe in morality should value it as any other science (for its usefulness, etc.). Well, hm, atheists need not be fans of science. So they can be moral because they enjoy it, or simply because “why the heck not”.

• I wouldn’t call moral philos­o­phy a sci­ence.

If we both independently invented an imaginary creature, neither would be correct. They are simply the creatures we’ve arbitrarily created. There is no science of moral philosophy, any more than there is a science of inventing an imaginary creature.

I’d say that for something to be a science, there needs to be a way to test whether its claims are valid. There is no such test for the validity of morals, any more than there is a test for the validity of an imaginary creature.

• What rea­son do athe­ists have?

Lots of rea­sons. It’s pretty much built into the hu­man brain that be­ing nice to your friends and neigh­bours is helpful to long-term sur­vival, so most peo­ple get pleas­ant feel­ings from do­ing some­thing they con­sider ‘good’, and feel guilty af­ter do­ing some­thing they con­sider ‘bad’. You don’t need the Com­mand­ments them­selves.

...Oh and the whole idea that it’s bet­ter to live in a so­ciety where ev­ery­one fol­lows laws like “don’t mur­der”...even if you per­son­ally could benefit from mur­der­ing the peo­ple who you didn’t like, you don’t want ev­ery­one else mur­der­ing peo­ple too, and so it makes sense, as a so­ciety, to teach chil­dren that ‘mur­der is bad’.

• It’s pretty much built into the hu­man brain that be­ing nice to your friends and neigh­bours is helpful to long-term sur­vival, so most peo­ple get pleas­ant feel­ings from do­ing some­thing they con­sider ‘good’, and feel guilty af­ter do­ing some­thing they con­sider ‘bad’.

Are these reasons to not kill people or steal? Can I propose a test? Suppose that it were built into the human brain that being cruel to your friends and neighbors is helpful to long-term survival (bear with me on the evolutionary implausibility of this), and so most people get pleasant feelings from doing things they consider cruel, and feel guilty after doing nice things.

Suppose all that were true: would you then have good reasons to be cruel? If not, then how are they reasons to be nice?

• You would clearly have rea­sons; whether they are good rea­sons de­pends how you’re mea­sur­ing “good”.

• We might want to dis­t­in­guish here be­tween rea­sons to do some­thing and rea­sons why one does some­thing. So imag­ine we dis­cover that the color green makes peo­ple want to com­pro­mise, so we paint a board­room green. Dur­ing a meet­ing, the chair­per­son de­cides to com­pro­mise. Even if the chair­per­son knows about the study, and is be­ing af­fected by the green walls in a de­ci­sive way (such that the green­ness of the walls is the rea­son why he or she com­pro­mises), could the chair­per­son take the green­ness of the walls as a rea­son to com­pro­mise?

• A rea­son­able dis­tinc­tion, but I don’t think it quite maps onto the is­sue at hand. You said to sup­pose “peo­ple get pleas­ant feel­ings from do­ing things they con­sider cruel, and feel guilty af­ter do­ing nice things”. If one has a goal to feel pleas­ant feel­ings, and is struc­tured in that man­ner, then that is rea­son to be cruel, not just rea­son why they would be cruel.

• If one has a goal to feel pleas­ant feel­ings, and is struc­tured in that man­ner, then that is rea­son to be cruel, not just rea­son why they would be cruel.

Agreed, but so much is packed into that ‘if’. We all seek plea­sure, but not one of us be­lieves it is an un­qual­ified good. The im­pli­ca­tion of Swim­mer’s post was that athe­ists have rea­sons to obey the ten com­mand­ments (well, 4 or 5 of them) com­pa­rable in for­mal terms to the rea­sons Chris­ti­ans have (God’ll burn me if I don’t, or what­ever). That is, the claim seems to be that athe­ists can jus­tify their ac­tions. Now, if some­one does some­thing nice for me, and I ask her why she did that, she can re­ply with some facts about evolu­tion­ary biol­ogy. This might ex­plain her be­hav­ior, but it doesn’t jus­tify it.

If we imagine someone committing a murder and then telling us something about her (perhaps defective) neurobiology, we might take this to explain her behavior, but never to justify it. We would never say “Yeah, I guess now that you make those observations about your brain, it was reasonable of you to kill that guy.” The point is that the murderer hasn’t just given us a bad reason, she hasn’t given us a reason at all. We cannot call her rational if this is all she has.

• The im­pli­ca­tion of Swim­mer’s post was that athe­ists have rea­sons to obey the ten com­mand­ments (well, 4 or 5 of them) com­pa­rable in for­mal terms to the rea­sons Chris­ti­ans have (God’ll burn me if I don’t, or what­ever).

I didn’t claim that, and if I im­plied it, it was by ac­ci­dent. (Although I do think that a lot of athe­ists have just as strong if not stronger rea­sons to obey cer­tain moral rules, the ex­am­ples I gave weren’t those ex­am­ples.) I was try­ing to point out that if some­one de­cides one day to stop be­liev­ing in God, and re­al­izes that this means God won’t smite them if they break one of the Ten Com­mand­ments, that doesn’t mean they’ll go out and mur­der some­one. Their moral in­stincts, and the pos­i­tive/​nega­tive re­in­force­ment to obey them (i.e. plea­sure or guilt), keep ex­ist­ing re­gard­less of ex­ter­nal laws.

The point is that the mur­derer hasn’t just given us a bad rea­son, she hasn’t given us a rea­son at all. We can­not call her ra­tio­nal if this is all she has.

So we ask her why, and she says “oh, he took the seat that I wanted on the bus three weeks in a row, and his hum­ming is an­noy­ing, and he always copies my ex­ams.” Which might not be a good rea­son to mur­der some­one ac­cord­ing to you, with your nor­mal neu­ro­biol­ogy–you would con­tent your­self with fum­ing and mak­ing rude com­ments about him to your friends–but she con­sid­ers it a good rea­son, be­cause her men­tal ‘brakes’ are off.

• Their moral in­stincts, and the pos­i­tive/​nega­tive re­in­force­ment to obey them (i.e. plea­sure or guilt), keep ex­ist­ing re­gard­less of ex­ter­nal laws.

Right, we agree on that. But if the apos­tate there­after has no rea­son to re­gard them­selves as morally re­spon­si­ble, then their moral be­hav­ior is no longer fully ra­tio­nal. They’re sort of go­ing through the mo­tions.

Which might not be a good reason to murder someone according to you, with your normal neurobiology–you would content yourself with fuming and making rude comments about him to your friends–but she considers it a good reason, because her mental ‘brakes’ are off.

The ques­tion here isn’t about good vs. bad rea­sons, but be­tween ad­mis­si­ble vs. in­ad­mis­si­ble rea­sons. Hearsay is of­ten a bad rea­son to be­lieve that Peter shot Paul, but it is a rea­son. It counts as ev­i­dence. If that’s all you have, then you’re not rea­son­ing well, but you are rea­son­ing. The num­ber of planets or­bit­ing the star fur­thest from the sun is not a rea­son to be­lieve Peter shot Paul. It’s not that it’s a bad rea­son. It’s just to­tally in­ad­mis­si­ble. If that’s all you have, then you’re not rea­son­ing badly, you’re just not rea­son­ing at all.

• Suppose all that were true: would you then have good reasons to be cruel?

It’s a hard world to vi­su­al­ize, but if cru­elty-ten­den­cies evolved be­cause peo­ple sur­vived bet­ter by be­ing cruel, then cru­elty works in that world, and so­ciety would be dys­func­tional if there were rules against it (imag­ine our world hav­ing rules against be­ing nice, ever!), and to me, some­thing be­ing use­ful is a good rea­son to do it.

If we ever came across that species, no doubt we’d be ap­palled, but the uni­verse isn’t ap­palled. Not un­less you be­lieve that moral­ity ex­ists in it­self, in­de­pen­dently of brains...which I don’t.

Suppose that it were built into the human brain that being cruel to your friends and neighbors is helpful to long-term survival (bear with me on the evolutionary implausibility of this), and so most people get pleasant feelings from doing things they consider cruel, and feel guilty after doing nice things.

If there were an en­tire so­ciety built out of peo­ple like this, then prob­a­bly quite a lot of minor day-to-day cru­elty would go on, and there would be ra­tio­nal­ized Laws, like the Ten Com­mand­ments, jus­tify­ing why be­ing cruel was so im­por­tant, and there would be so­cial cus­toms and struc­tures and eti­quette in­volved in mak­ing sure the right kind of cru­elty hap­pened at the right times…

I’m not say­ing that our brain’s evolu­tion­ary ca­pac­ity for em­pa­thy is the ul­ti­mate perfect moral the­ory. But I do think that all those moral the­o­ries, perfect or ul­ti­mate or not, ex­ist be­cause our brains evolved to have the lit­tle voice of em­pa­thy. Which means that if you take away the Ten Com­mand­ments, most peo­ple won’t stop be­ing nice to peo­ple they care about.

(Be­ing nice to strangers or mem­bers of an out­group is a com­pletely differ­ent mat­ter...there seems to be a mechanism for turn­ing off em­pa­thy to­wards groups of strangers, and plenty of so­cieties have pro­duced peo­ple who were very nice to their friends and neigh­bors, and bar­baric to­wards ev­ery­one else.)

Most athe­ists don’t ac­cept de­on­tolog­i­cal moral the­o­ries–i.e. any the­ory that talks about a set of a pri­ori rules of what’s right ver­sus wrong. But moral­ity doesn’t go away. If you rea­son it out start­ing from what our brains already tell us, you end up with util­i­tar­ian the­o­ries (“I like be­ing happy, and I’m ca­pa­ble of em­pa­thy, so I think other peo­ple must like be­ing happy too, and since my perfect world would be one where I was happy all the time, the perfect world for ev­ery­one would be one with max­i­mum hap­piness.”)

Alternately you end up with Kantian theories (“I like being treated as an end, not a means, and empathy tells me other people are similar to me, so we should treat everyone as an end in themselves and not as a means… Oh, and Action X will make me happy, but if everyone else did Action X too, it would make me unhappy, and empathy tells me everyone else is about like me, so they wouldn’t want me to do X, so the best society is one in which no one does X.”) Etc.

If you don’t rea­son it out, you get “well, it made me happy when I helped Su­san with her home­work, and it made me feel bad when I said some­thing mean to Rachel and she cried, so I should help peo­ple more and not be mean as much.” Th­ese feel­ings aren’t perfect, and there are lots of con­flict­ing feel­ings, so peo­ple aren’t nice all the time...but the in­nate brain mechanisms are there even when there aren’t any laws, and the fact that they’re there is prob­a­bly the rea­son why there are laws at all.

• Th­ese feel­ings aren’t perfect, and there are lots of con­flict­ing feel­ings, so peo­ple aren’t nice all the time...but the in­nate brain mechanisms are there even when there aren’t any laws, and the fact that they’re there is prob­a­bly the rea­son why there are laws at all.

So we agree that one might have a rea­son to do some­thing be­cause it’s recom­mended by moral the­o­ries. What I’m ques­tion­ing is whether or not you can have a rea­son to do some­thing on the ba­sis of brain mechanisms or if you can have rea­son to adopt a moral the­ory on the ba­sis of brain mechanisms. And I don’t mean ‘good’ rea­sons, I mean ad­mis­si­ble rea­sons.

Imag­ine some­one think­ing to them­selves: ‘Well, my brain is struc­tured in such and such a way as a re­sult of evolu­tion, so I think I’ll kill this com­pletely in­no­cent guy over here.’ Is he think­ing ra­tio­nally?

And con­cern­ing the adop­tion of a moral the­ory:

(“I like be­ing happy, and I’m ca­pa­ble of em­pa­thy, so I think other peo­ple must like be­ing happy too, and since my perfect world would be one where I was happy all the time, the perfect world for ev­ery­one would be one with max­i­mum hap­piness.”)

There’s a miss­ing in­fer­ence here from want­ing to be happy to want­ing other peo­ple to be happy. Can you ex­plain how you think this ar­gu­ment gets filled out? As it stands, it’s not valid.

Like­wise:

“I like being treated as an end, not a means, and empathy tells me other people are similar to me, so we should treat everyone as an end in themselves and not as a means...

Why should the fact that other peo­ple want some­thing mo­ti­vate me? It doesn’t fol­low from the fact that my want­ing some­thing mo­ti­vates me, that an­other per­son’s want­ing that thing should mo­ti­vate me. In both these ar­gu­ments there’s a miss­ing step which, I think, is per­ti­nent to the prob­lem above: the fact that I am mo­ti­vated to X doesn’t even give me rea­son to X, much less a rea­son to pur­sue the de­sires of other peo­ple.

• Well, my brain is struc­tured in such and such a way as a re­sult of evolu­tion, so I think I’ll kill this com­pletely in­no­cent guy over here.

Beliefs don’t feel like be­liefs, they feel like the way the world is. Like­wise with brain struc­tures. If some­one is a so­ciopath (in short, their brain mechanism for em­pa­thy is bro­ken) and they de­cide they want to kill some­one for rea­sons X and Y, are they be­ing any more ir­ra­tional than some­one who vol­un­teers at a soup kitchen be­cause see­ing peo­ple smile when he hands them their food makes him feel fulfilled?

(“I like be­ing happy, and I’m ca­pa­ble of em­pa­thy, so I think other peo­ple must like be­ing happy too, and since my perfect world would be one where I was happy all the time, the perfect world for ev­ery­one would be one with max­i­mum hap­piness.”)

There’s a miss­ing in­fer­ence here from want­ing to be happy to want­ing other peo­ple to be happy. Can you ex­plain how you think this ar­gu­ment gets filled out? As it stands, it’s not valid.

Sorry for not being clear. The inference is that “empathy”, the ability to step into someone else’s shoes and imagine being them, is an innate ability that most humans have, and it leads you to think that other people are like you: when they feel pleasure, it’s like your pleasure, and when they feel pain, it’s like your pain, and there’s a hypothetical world where you could have been them. I don’t think this hypothetical is something that’s taught by moral theories, because I remember reasoning with it as a child when I’d had basically no exposure to formal moral theories, only the standard “that wasn’t nice, you should apologize.” If you could have been them, you want the same things for them that you’d want for yourself.

I think this is im­me­di­ately ob­vi­ous for fam­ily mem­bers and friends...do you want your mother to be happy? Your chil­dren?

• Beliefs don’t feel like be­liefs, they feel like the way the world is.

Perhaps on some level this is right, but the fact that I can assess the truth of my beliefs means that they don’t feel like the way the world is in an important respect. They feel like things that are true or false. The way the world is has no truth value. Very small children have trouble with this distinction, but so far as I can tell almost all healthy adults do not believe that their beliefs are identical with the world. ETA: That sounded jerky. I didn’t intend any covert meanness, and please forgive any appearance of that.

If some­one is a so­ciopath (in short, their brain mechanism for em­pa­thy is bro­ken) and they de­cide they want to kill some­one for rea­sons X and Y, are they be­ing any more ir­ra­tional than some­one who vol­un­teers at a soup kitchen be­cause see­ing peo­ple smile when he hands them their food makes him feel fulfilled?

I think I re­ally don’t un­der­stand your ques­tion. Could you ex­plain the idea be­hind this a lit­tle bet­ter? My ob­jec­tion was that there are rea­sons to do things, and rea­sons why we do things, and while all rea­sons to do things are also rea­sons why, there are rea­sons why that are not rea­sons to do things. For ex­am­ple, hav­ing a micro-stroke might be the rea­son why I drive my car over an em­bank­ment, but it’s not a rea­son to drive one’s car over an em­bank­ment. No ra­tio­nal per­son could say to them­selves “Huh, I just had a micro-stroke. I guess that means I should drive over this em­bank­ment.”

I think this is im­me­di­ately ob­vi­ous for fam­ily mem­bers and friends...do you want your mother to be happy? Your chil­dren?

Sure, but I take my­self to have moral rea­sons for this. I may feel this way be­cause of my biol­ogy, but my biol­ogy is never it­self a rea­son for me to do any­thing.

• I may feel this way be­cause of my biol­ogy, but my biol­ogy is never it­self a rea­son for me to do any­thing.

Rele­vant LW post.

• Rele­vant LW post.

That post is in need of some se­ri­ous edit­ing: I gen­uinely couldn’t tell if it was on the whole agree­ing with what I was say­ing or not.

I have a puz­zle for you: sup­pose we lived in a uni­verse which is en­tirely de­ter­minis­tic. From the pre­sent state of the uni­verse, all fu­ture states could be com­puted. Would that mean that de­liber­a­tion in which we try to come to a de­ci­sion about what to do is mean­ingless, im­pos­si­ble, or some­how un­der­mined? Or would this make no differ­ence?

• That post is in need of some se­ri­ous edit­ing: I gen­uinely couldn’t tell if it was on the whole agree­ing with what I was say­ing or not.

That post didn’t have a con­clu­sion, be­cause EY wanted to get much fur­ther into his Me­taethics se­quence be­fore offer­ing one.

I have a puz­zle for you: sup­pose we lived in a uni­verse which is en­tirely de­ter­minis­tic. From the pre­sent state of the uni­verse, all fu­ture states could be com­puted. Would that mean that de­liber­a­tion in which we try to come to a de­ci­sion about what to do is mean­ingless, im­pos­si­ble, or some­how un­der­mined? Or would this make no differ­ence?

It makes no differ­ence. In fact, many-wor­lds is a de­ter­minis­tic uni­verse; it just so hap­pens there are differ­ent ver­sions of fu­ture-you who ex­pe­rience/​do differ­ent things, so it’s not “de­ter­minis­tic from your view­point”.

• It makes no differ­ence. In fact, many-wor­lds is a de­ter­minis­tic uni­verse; it just so hap­pens there are differ­ent ver­sions of fu­ture-you who ex­pe­rience/​do differ­ent things, so it’s not “de­ter­minis­tic from your view­point”.

So I’d like to ar­gue that it makes at least a lit­tle differ­ence. When we en­gage in prac­ti­cal de­liber­a­tion, when we think about what to do, we are think­ing about what is pos­si­ble and about our­selves as sources of what is pos­si­ble. No one de­liber­ates about the nec­es­sary, or about any­thing over which we have no con­trol: we don’t de­liber­ate about what the size of the sun should be, or whether or not modus tol­lens should be valid.

If we re­al­ize that the uni­verse is de­ter­minis­tic, then we may still de­cide that we can de­liber­ate, but we do now qual­ify this as a mat­ter of ‘view­points’ or some­thing like that. So the lit­tle differ­ence this makes is in the way we qual­ify the idea of de­liber­a­tion.

So do you agree that there is at least this lit­tle differ­ence? Per­haps it is in­con­se­quen­tial, but it does mean that we learn some­thing about what it means to de­liber­ate when we learn we are liv­ing in a de­ter­minis­tic uni­verse as op­posed to one with a bunch of spon­ta­neous free causes run­ning around.

• It all adds up to nor­mal­ity. Every­thing you do when mak­ing a de­ci­sion is some­thing a de­ter­minis­tic agent can do, and a de­ter­minis­tic agent that de­liber­ates well will (on av­er­age) ex­pe­rience higher ex­pected value than de­ter­minis­tic agents that de­liber­ate poorly.
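The point that deliberation is compatible with determinism can be sketched concretely. Both agents below are ordinary deterministic functions, but one of them “deliberates” by computing expected values before choosing; it reliably ends up with the better pick. The gamble names and payoffs here are invented purely for illustration, not taken from anything above.

```python
# Two fully deterministic "agents" choosing among gambles.
# Deliberation is just a deterministic computation over the same inputs;
# nothing about determinism prevents it from paying off.

GAMBLES = {
    "c": [(1.0, 3)],              # (probability, payoff) pairs
    "a": [(0.5, 10), (0.5, 0)],
    "b": [(0.9, 4), (0.1, -20)],
}

def expected_value(gamble):
    """Sum of probability-weighted payoffs."""
    return sum(p * payoff for p, payoff in gamble)

def deliberating_agent(gambles):
    # Deliberates: compares expected values and picks the best gamble.
    return max(gambles, key=lambda name: expected_value(gambles[name]))

def habitual_agent(gambles):
    # Does not deliberate: always takes the first option listed.
    return next(iter(gambles))

print(deliberating_agent(GAMBLES))  # "a": EV 5.0, the highest available
print(habitual_agent(GAMBLES))      # "c": EV 3.0, whatever came first
```

Both functions are deterministic; the difference in outcomes comes entirely from the computation each one performs, which is all “deliberating well” needs to mean here.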

You’re get­ting closer to the se­quence of posts that cov­ers this in more de­tail, so I’ll just say that I en­dorse what’s said in this se­quence.

• It all adds up to nor­mal­ity. Every­thing you do when mak­ing a de­ci­sion is some­thing a de­ter­minis­tic agent can do, and a de­ter­minis­tic agent that de­liber­ates well will (on av­er­age) ex­pe­rience higher ex­pected value than de­ter­minis­tic agents that de­liber­ate poorly.

What is normality exactly? It’s not the ideas and intuitions I came to the table with, unless the theory actually proposes to teach me nothing. My question is this: “what do I learn when I learn that the universe is deterministic?” Do I learn anything that has to do with deliberation? One reasonable answer (and one way to explain the normality point) would just be ‘no, it has nothing to do with action.’ But this would strike many people as odd, since we recognize in our deliberation a distinction between future events we can bring about or prevent, and future states we cannot bring about or prevent.

You’re get­ting closer to the se­quence of posts that cov­ers this in more de­tail, so I’ll just say that I en­dorse what’s said in this se­quence.

I find I have an extremely hard time understanding some of the arguments in that sequence, after several attempts. I would dearly love to have some of it explained in response to my questions. I find this argument in particular to be very confusing:

But have you ever seen the fu­ture change from one time to an­other? Have you wan­dered by a lamp at ex­actly 7:02am, and seen that it is OFF; then, a bit later, looked in again on the “the lamp at ex­actly 7:02am”, and dis­cov­ered that it is now ON?

Naturally, we often feel like we are “changing the future”. Logging on to your online bank account, you discover that your credit card bill comes due tomorrow, and, for some reason, has not been paid automatically. Imagining the future-by-default—extrapolating out the world as it would be without any further actions—you see the bill not being paid, and interest charges accruing on your credit card. So you pay the bill online. And now, imagining tomorrow, it seems to you that the interest charges will not occur. So at 1:00pm, you imagined a future in which your credit card accrued interest charges, and at 1:02pm, you imagined a future in which it did not. And so your imagination of the future changed, from one time to another.

This ar­gu­ment (which reap­pears in the ‘time­less con­trol’ ar­ti­cle) seems to hang on a very weird idea of ‘chang­ing the fu­ture’. No one I have ever talked to be­lieves that they can liter­ally change a fu­ture mo­ment from hav­ing one prop­erty to hav­ing an­other, and that this change is dis­tinct from a change that takes place over an ex­tent of time. I cer­tainly don’t see how any­one could take this as a way to treat the world as un­de­ter­mined. This seems like very much a straw­man view, born from an equiv­o­ca­tion on the word ‘change’.

But I ex­pect I am miss­ing some­thing (per­haps some­thing re­vealed later on in the more tech­ni­cal stage of the ar­ti­cle). Can you help me?

• What is nor­mal­ity ex­actly?

I meant that learn­ing the uni­verse is de­ter­minis­tic should not turn one into a fatal­ist who doesn’t care about mak­ing good de­ci­sions (which is the in­tu­ition that many peo­ple have about de­ter­minism), be­cause goals and choices mean some­thing even in a de­ter­minis­tic uni­verse. As an anal­ogy, note that all of the agents in my de­ci­sion the­ory se­quence are de­ter­minis­tic (with one kind-of ex­cep­tion: they can make a de­ter­minis­tic choice to adopt a mixed strat­egy), but some of them char­ac­ter­is­ti­cally do bet­ter than oth­ers.

Regarding the “changing the future” idea, let’s think of what it means in the context of two deterministic computer programs playing chess. It is a fact that only one game actually gets played, but many alternate moves are explored in hypotheticals (within the programs) along the way. When one program decides to make a particular move, it’s not that “the future changed” (since someone with a faster computer could have predicted in advance what moves the programs make, the future is in that sense fixed), but rather that of all the hypothetical moves it explored, the program chose one according to a particular set of criteria. Other programs would have chosen other moves in those circumstances, which would have led to different games in the end.
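The chess analogy can be made concrete with a minimal sketch (the “moves” and scoring rule below are arbitrary stand-ins, not a real chess engine): the player’s deliberation examines every hypothetical move, anyone running the same deterministic algorithm ahead of time gets the same answer, and a program with a different criterion picks a different move.

```python
# Minimal sketch, not a real chess engine: "moves" are just integers
# and the evaluation function is an arbitrary deterministic stand-in.

def score(move):
    # Deterministic evaluation criterion (placeholder rule).
    return (move * 7) % 11

def choose(legal_moves):
    # The "deliberation": every hypothetical move is examined,
    # and one is selected according to a fixed criterion.
    return max(legal_moves, key=score)

def choose_other(legal_moves):
    # A different program: a different criterion, hence a different
    # move, and downstream a different game.
    return min(legal_moves, key=score)

legal_moves = [1, 2, 3, 4, 5]

# A "faster computer" runs the same algorithm ahead of time...
prediction = choose(legal_moves)
# ...and the player's actual decision matches it, every time.
print(prediction == choose(legal_moves))  # True

# Different deterministic programs diverge, without anything
# "changing the future" in either case.
print(choose(legal_moves), choose_other(legal_moves))  # 3 5
```

The hypothetical moves all get explored inside the program; only one is ever played, and which one was fixed by the algorithm all along.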

When you or I are de­cid­ing what to do, the differ­ent hy­po­thet­i­cal op­tions all feel like they’re on an equal ba­sis, be­cause we haven’t figured out what to choose. That doesn’t mean that differ­ent pos­si­ble fu­tures are all real, and that all but one van­ish when we make our de­ci­sion. The hy­po­thet­i­cal fu­tures ex­ist on our map, not in the ter­ri­tory; it may be that no ver­sion of you any­where chooses op­tion X, even though you con­sid­ered it.

Does that make more sense?

• but some of them char­ac­ter­is­ti­cally do bet­ter than oth­ers.

A fair point, though I would be interested to hear how the algorithms described in DT relate to action (it can’t be that they describe action, since we needn’t act on the output of a DT, especially given that we’re often akratic). When the metaethics sequence, for all the trouble I have with its arguments, gets into an account of free will, I don’t generally find myself in disagreement. I’ve been looking over that and the physics sequences in the last couple of days, and I think I’ve found the point where I need to do some more reading: I think I just don’t believe either that the universe is timeless, or that it’s a block universe. So I should read Barbour’s book.

Thanks, by the way, for posting that DT series, and for answering my questions. Both have been very helpful.

Does that make more sense?

It does, but I find my­self, as I said, un­able to grant the premise that state­ments about the fu­ture have truth value. I think I do just need to read up on this view of time.

• Thanks, by the way, for posting that DT series, and for answering my questions. Both have been very helpful.

You’re wel­come!

I would be interested to hear how the algorithms described in DT relate to action (it can’t be that they describe action, since we needn’t act on the output of a DT, especially given that we’re often akratic).

Yeah, a hu­man who con­sciously en­dorses a par­tic­u­lar de­ci­sion the­ory is not the same sort of agent as a sim­ple al­gorithm that runs that de­ci­sion the­ory. But that has more to do with the messy psy­chol­ogy of hu­man be­ings than with de­ci­sion the­ory in its ab­stract math­e­mat­i­cal form.

• Beliefs don’t feel like be­liefs, they feel like the way the world is.

Per­haps on some level this is right, but the fact that I can as­sess the truth of my be­liefs means that they don’t feel like the way the world is in an im­por­tant re­spect.

OK, let me give you a better example. When you look at something, a lot of very complex hardware packed into your retina, optic nerve, and visual cortex, a lot of hard-won complexity optimized over millions of years, is going all out analyzing the data and presenting you with comprehensible shapes, colour, and movement, as well as helpfully recognizing objects for you. When you look at something, are you aware of all that happening? Or do you just see it?

(Dis­claimer: if you’ve read a lot about neu­ro­science, it’s quite pos­si­ble that some­times you do think about your vi­sual pro­cess­ing cen­tres while you’re look­ing at some­thing. But the av­er­age per­son wouldn’t, and the av­er­age per­son prob­a­bly doesn’t think ‘well, there go my em­pa­thy cen­tres again’ when they see an old lady hav­ing trou­ble with her gro­cery bag and feel a de­sire to help her.)

I think I re­ally don’t un­der­stand your ques­tion. Could you ex­plain the idea be­hind this a lit­tle bet­ter? My ob­jec­tion was that there are rea­sons to do things, and rea­sons why we do things, and while all rea­sons to do things are also rea­sons why, there are rea­sons why that are not rea­sons to do things.

Okay, let’s try to unpack this. In my example, we have a sociopath who wants to murder someone. The reason why he wants to murder someone, when most people don’t, is that there’s a centre in his brain that’s broken, and so he hasn’t learned to see the world from another’s perspective, and thus hasn’t internalized any social morality, because it doesn’t make sense to him...basically, people are objects to him, so why not kill them. His reason to murder someone is, let’s say, that they’re dating a girl he wants to date. Most non-sociopaths wouldn’t consider that a reason to murder anyone, but the reason why they wouldn’t is that they have an innate understanding that other people feel pain, of the concept of fairness, etc., and were thus capable of learning more complex moral rules as well.

Sure, but I take my­self to have moral rea­sons for this. I may feel this way be­cause of my biol­ogy, but my biol­ogy is never it­self a rea­son for me to do any­thing.

The way I see it, the biol­ogy as­pect is both nec­es­sary and suffi­cient for this kind of be­havi­our. Some­one with­out the req­ui­site biol­ogy wouldn’t be a good par­ent or friend be­cause they’d see no rea­son to make an effort (un­less they were de­liber­ately “fak­ing it” to benefit from that per­son). And an or­di­nary hu­man be­ing raised with no ex­po­sure to moral rules, who isn’t taught any­thing about it ex­plic­itly, will still want to make their friends happy and do the best they can rais­ing chil­dren. They may not be very good at it, but un­less they’re down­right abused/​severely ne­glected, they won’t be evil.

• When you look at some­thing, are you aware of all that hap­pen­ing? Or do you just see it?

I just see it. I’m aware on some ab­stract level, but I never think about this when I see things, and I don’t take it into ac­count when I con­fi­dently be­lieve what I see.

“His rea­son to mur­der some­one is be­cause, let’s say, they’re dat­ing a girl he wants to date. Most non-so­ciopaths wouldn’t con­sider that a rea­son to mur­der any­one”

I guess I’d dis­agree with the sec­ond claim, or at least I’d want to qual­ify it. Hav­ing a bro­ken brain cen­ter is an in­ad­mis­si­ble rea­son to kill some­one. If that’s the only ex­pla­na­tion some­one could give (or that we could sup­ply them) then we wouldn’t even hold them re­spon­si­ble for their ac­tions. But dat­ing your be­loved re­ally is a rea­son to kill some­one. It’s a very bad rea­son, all things con­sid­ered, but it is a rea­son. In this case, the kil­ler would be held re­spon­si­ble.

“The way I see it, the biology aspect is both necessary and sufficient for this kind of behaviour.”

Necessary, we agree. Sufficient is, I think, too much, especially if we’re relying on evolutionary explanations, which should never stand in without qualification for psychological, much less rational, explanations. After all, I could come to hate my family if our relationship soured. This happens to many, many people who are not significantly different from me in this biological respect.

An ordinary human being raised with no exposure to moral rules is an extremely strange counterfactual: no person I have ever met, or ever heard of, is like this. I would probably say that there’s not really any sense in which they were ‘raised’ at all. Could they have friends? Is that so morally neutral an idea that one could learn it while learning nothing of loyalty? I really don’t think I can imagine a rational, language-using human adult who hasn’t been exposed to moral rules.

So the ‘ne­ces­sity’ case is granted. We agree there. The ‘suffi­ciency’ case is very prob­le­matic. I don’t think you could even have learned a first lan­guage with­out be­ing ex­posed to moral rules, and if you never learn any lan­guage, then you’re just not re­ally a ra­tio­nal agent.

• An ordinary human being raised with no exposure to moral rules is an extremely strange counterfactual: no person I have ever met, or ever heard of, is like this.

A weak example of this: someone from a society that doesn’t have any explicit moral rules, e.g. the ‘Ten Commandments.’ They might follow laws, but the laws aren’t explained as ‘A is the right thing to do’ or ‘B is wrong’. Strong version: someone whose parents never told them ‘don’t do that, that’s wrong/mean/bad/etc’ or ‘you should do this, because it’s the right thing/what good people do/etc.’ Someone raised in that context would probably be strange, and kind of undisciplined, and probably pretty thoughtless about the consequences of actions, and might include only a small number of people in their ‘circle of empathy’...but I don’t think they’d be incapable of having friends/being nice.

• A weak example of this: someone from a society that doesn’t have any explicit moral rules, e.g. the ‘Ten Commandments.’ They might follow laws, but the laws aren’t explained as ‘A is the right thing to do’ or ‘B is wrong’.

I can see a case like this, but morality is a much broader idea than can be captured by a list of divine commands and similar such things. Even Christians, Jews, and Muslims would say that the Ten Commandments are just a sort of beginning, and not, all on their own, sufficient as a morality.

Someone raised in that context would probably be strange, and kind of undisciplined, and probably pretty thoughtless about the consequences of actions, and might include only a small number of people in their ‘circle of empathy’...but I don’t think they’d be incapable of having friends/being nice.

Huh, we have pretty differ­ent in­tu­itions about this: I have a hard time imag­in­ing how you’d even get a hu­man be­ing out of that situ­a­tion. I mean, an­i­mals, even re­ally crappy ones like rats, can be em­pa­thetic to­ward one an­other. But there’s no moral­ity in a rat, and we would never think to praise or blame one for its be­hav­ior. Em­pa­thy it­self is nec­es­sary for moral­ity, but far from suffi­cient.

• Or they’re not spel­ling out their ev­i­dence be­cause it seems ob­vi­ous to them and there­fore (in their minds) should be ob­vi­ous to you as well and need no ex­pla­na­tion.

I know many Atheists for whom their belief in no god is indeed a religion. They arrived at their belief not through reason and weighing the evidence but through the same kind of blind acceptance of someone else’s cached values that religionists engage in. They fall into the same traps of treating “arguments as soldiers” as do the religionists. They make the same kind of circular, bad arguments in favour of their own point of view. Since these people also tend to be the most vocal, militant Atheists, they are the ones that vocal Theists run up against the most often. As a result, Theists, upon encountering a rational Atheist, are at least as perplexed as an Atheist encountering one of the rare, rational Theists, and the two often end up talking past each other due to not realising that the assumed common frame of reference they’re each trying to use for communication isn’t actually common.

• Is athe­ism a “re­li­gion”? Is tran­shu­man­ism a “cult”?

My fa­vorite ex­am­ple is, Is a fe­tus a per­son?

• Quote: My fa­vorite ex­am­ple is, Is a fe­tus a per­son?

I can an­swer this one: A foe­tus is not a per­son prior to 20 weeks ges­ta­tion (18 weeks of preg­nancy), but may be a per­son from that point on­wards.

A body with one mind is one person. A body with two minds is two people (conjoined twins). A body with three minds would be three people. A heart transplant does not switch a person into a different body. A lung transplant does not switch a person into a different body. A brain transplant (and therefore a mind transplant) would switch a person into a different body. It is minds, not bodies, that define people.

The mind ex­ists, if at all, in the brain, or more speci­fi­cally the cere­bral cor­tex. The cere­bral cor­tex be­gins to de­velop con­nec­tions no ear­lier than 20 weeks ges­ta­tion, there­fore there is not a per­son be­fore this time (though the body does have re­flexes).

‘Brain Waves’ When??? http://​​eileen.250x.com/​​Main/​​Ein­stein/​​Brain_Waves.htm Mar­garet Sykes

• That doesn’t an­swer the ques­tion “Is a fœ­tus a per­son”, it just sup­plies a defi­ni­tion of “per­son”, which may or may not be rele­vant to any given query.

Sup­pose my real query is “Can a fœ­tus talk?” Now, just be­cause I choose to define “per­son” in such a way that most “per­son”s can talk, and in such a way that a fœ­tus classes as a “per­son”, that doesn’t make the prob­a­bil­ity that a fœ­tus can talk any differ­ent to if I’d defined “per­son” differ­ently.

The whole point of these ex­am­ples of dis­guised queries is that if you find your­self try­ing to an­swer them, you’re do­ing it wrong.

Sup­pose we call the horse’s tail a leg.
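The point that redefining a category leaves the real query’s probability untouched can be sketched numerically (all numbers, thresholds, and names below are made up for illustration): the answer to “can it talk?” depends only on observed features, so swapping one definition of “person” for another changes the label but not the probability.

```python
# Hypothetical sketch: the real query ("can it talk?") is answered from
# observable features; the label "person" is attached afterwards.

def p_can_talk(age_weeks):
    # Made-up model: talking depends on development, not on labels.
    return 0.0 if age_weeks < 52 else 0.9

def label_person(age_weeks, definition):
    # Apply one of several rival definitions of "person".
    return definition(age_weeks)

broad = lambda age: True          # "every fetus is a person"
narrow = lambda age: age >= 52    # "only things old enough to talk"

fetus_age = 30  # weeks; illustrative number

# The rival definitions disagree about the label...
print(label_person(fetus_age, broad), label_person(fetus_age, narrow))  # True False

# ...but the answer to the real query is unaffected either way.
print(p_can_talk(fetus_age))  # 0.0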

• I was told once that I was clearly not a college graduate. After some digging, the person who said it explained that I took the time to define the terms in a discussion, whereas college grads knew the definitions of words, and so didn’t take the time to agree on them.

Can’t agree with him about that.

• What’s re­ally at stake is an athe­ist’s claim of sub­stan­tial differ­ence and su­pe­ri­or­ity rel­a­tive to religion

Often se­man­tics mat­ter be­cause laws and con­tracts are writ­ten in words. When “Congress shall make no law re­spect­ing an es­tab­lish­ment of re­li­gion”, it’s some­times ad­van­ta­geous to claim that you’re not a re­li­gion, or that your en­emy is a re­li­gion. If churches get prefer­en­tial tax treat­ment, it may be ad­van­ta­geous to claim that you’re a church.

• Or more con­cisely: sharp dis­tinc­tions re­gard­ing fuzzy con­cepts are mean­ingless.

• Sum­mary: Aris­totelianism con­sid­ered harm­ful; Hilbert Space is the new in­dus­try stan­dard.

• Be­cause if no one takes philos­o­phy se­ri­ously, the philoso­phers will have noth­ing at all.

Will you take that away from them? They have so lit­tle as it is.

• Often se­man­tics mat­ter be­cause laws and con­tracts are writ­ten in words.

What he said.

• Excellent post; however: “But people often don’t realize that their argument about where to draw a definitional boundary, is really a dispute over whether to infer a characteristic shared by most things inside an empirical cluster...” Indeed so, but there are other aspects. Humans also have obsessions with (a) how far your cluster is from mine (kinship or the lack of it) and (b) given one empirical cluster, how can I pick a characteristic, however minor, which will allow me to split it into ‘us vs them’ (Robbers Cave). So when you get to discussing whether an uploaded human brain is part of the cluster ‘human’, those are the considerations which will be foremost.

• Based on the work I’ve done in philosophy, this type of disagreement probably covers 50% of philosophical debates, with about 2% of the participants in such debates admitting that that is what they disagree about. Someone remind me again why I’m supposed to take philosophy seriously.

• I run the Less Wrong meetup group in Palo Alto. After we announce the events at Meetup.com, we often get a lot of guests who are interested in rationality but who have not read the LW sequences. I have an idea for an introductory session where we have the participants do a sorting exercise. Therefore, I am interested in getting 3D-printed versions of rubes, bleggs, and other items referenced in this post.

Does anyone have any thoughts on how to do this cheaply? Is there sufficient interest in this to get a Kickstarter running? I expect that these items may be of interest to other Less Wrong meetup groups, and possibly to CFAR workshops and/or schools.

• When I have discussions of the philosophical kind, I have learned that it often pays off to start with defining the words being used. For example, I recall one discussion where I defined Evil as a shorthand for “all corporations and institutions that try to compete by opposing the existence and legitimacy of competitors and newcomers instead of by trying to offer a better product, like Microsoft”, and one other discussion where I defined Evil as “working for Sauron or Saruman or Morgoth”, i.e. very different. I would never (that is, I try hard not to) use a word such as evil without defining it first: people are all too likely to think of something other than what I meant.

• I like this post be­cause it shows the use­ful­ness of one of my favourite ques­tions to an­swer a ques­tion with: “What’s it for?” What use do you have for the an­swer to your ques­tion?

• Rolf, have you been read­ing Un­qual­ified Reser­va­tions?

• I’m hav­ing prob­lems with the word “is” in your de­scrip­tion.

This is not in­tended as a snarky com­ment...

• My fa­vorite ex­am­ple is, Is a fe­tus a per­son? Yes, but it’s still okay to mur­der them.

Micha Gert­ner has an in­ter­est­ing es­say on prag­ma­tism & eco­nomics here.

• The ques­tion “Is this ob­ject a blegg?” may stand in for differ­ent queries on differ­ent oc­ca­sions. If it weren’t stand­ing in for some query, you’d have no rea­son to care.

Ba­si­cally, this is prag­ma­tism in a nut­shell—right?

Cheers, Ari