GBM

Karma: 13 (LW), 0 (AF)
• Eliezer, this explanation finally puts it all together for me in terms of the “computation”. I get it now, I think.

On the other hand, I have a question. Maybe this indicates that I don’t truly get it; maybe it indicates that there’s something you’re not considering. In any case, I would appreciate your explanation, since I feel so close to understanding what you’ve been saying.

When I multiply 19 and 103, whether in my head or using a pocket calculator, I get a certain result that I can check: in theory, I can gather a whole bunch of pebbles, lay them out in 103 rows of 19, and then count them individually. I don’t have to rely on my calculator, be it internal or electronic.

When I compute morality, though, the only thing I have to examine is my calculator and a bunch of other ones. I would easily recognize that most calculators I come across give the same answer to a moral question, at least to a limited number of decimal places. But I have no way of knowing whether those calculators are accurate representations of the world; perhaps all of them were created in a way that didn’t reflect reality, and add ten to any result they calculate.

If 90% of my calculators say 19 times 103 is equal to 1967, how do I determine that they are incorrect, without having the actual pebbles to count?
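A toy sketch of the situation I have in mind (the calculators and their shared bias are entirely made up for illustration): a majority of calculators can agree on a wrong answer, and only the pebbles themselves settle the question.

```python
def pebble_count(a, b):
    # Ground truth: lay out b rows of a pebbles and count them one by one.
    return sum(a for _ in range(b))

def make_calculator(biased):
    # A biased calculator silently adds ten to every product.
    if biased:
        return lambda a, b: a * b + 10
    return lambda a, b: a * b

# Suppose 90 of the 100 calculators I can examine share the same hidden bias.
calculators = [make_calculator(biased=(i < 90)) for i in range(100)]

answers = [calc(19, 103) for calc in calculators]
majority = max(set(answers), key=answers.count)

print(majority)               # the calculators' consensus: 1967
print(pebble_count(19, 103))  # the pebbles themselves: 1957
```

The consensus and the pebble count disagree, and nothing inside the population of calculators reveals which one is wrong.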

• Eliezer, thank you for this clear explanation. I’m just now making the connection to your calculator example, which struck me as relevant if I could only figure out how. Now it’s all fitting together.

How does this differ from personal preference? Or is it simply broader in scope? That is, if an individual’s calculation includes “self-interest” and weighs it heavily, personal preference might be the result of the calculation, which fits inside your metamoral model, if I’m reading things correctly.

• I’m going to need some help with this one.

It seems to me that the argument goes like this, at first:

• There is a huge blob of computation; it is a 1-place function; it is identical to right.

• This computation balances various values.

• Our minds approximate that computation.
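The bullets above can be sketched in code. The names and the particular values here are my own invention, not Eliezer’s; the point is only the formal difference between a 2-place function (rightness depends on who is asking) and the 1-place function you get by fixing the first argument once and for all.

```python
def rightness_2place(agent_values, outcome):
    # A 2-place function: the verdict depends on whose values are plugged in.
    return sum(weight * outcome.get(value, 0.0)
               for value, weight in agent_values.items())

# Fix the first argument to one particular (hypothetical) balance of values...
HUMANE_VALUES = {"happiness": 0.5, "fairness": 0.3, "freedom": 0.2}

def rightness_1place(outcome):
    # ...and what remains is a function of the outcome alone.
    return rightness_2place(HUMANE_VALUES, outcome)

outcome = {"happiness": 1.0, "fairness": 0.5, "freedom": 0.0}
print(rightness_1place(outcome))  # 0.65
```

My question below is, in these terms, why we are licensed to fix the first argument at all.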

Even this little bit creates a lot of questions. I’ve been following Eliezer’s writings for the past little while, although I may well have missed some key point.

Why is this computation a 1-place function? Eliezer says at first “Here we are treating morality as a 1-place function.” and then jumps to “Since what’s right is a 1-place function...” without justifying that status.

What values does this computation balance? Why those values?

What reason do we have to believe that our minds approximate that computation?

Sorry if these are extremely basic questions that have been answered in other places, or even in this article; I’m trying, and having a difficult time, to understand how Eliezer’s argument gets past these issues. Any help would be appreciated.

• Richard, I don’t know anything about moral theorists, but this series of posts has helped me understand my own beliefs better than anything I’ve ever read, and they’ve coalesced mostly while reading this post. “Meta” was a concept missing from my toolbox, at least in the case of morality, and Eliezer’s pointing it out has been immensely productive for me.

behemoth, I think the point you make about the second generation is an important one. Because children are both irrational and bad at listening to their intuitions when it’s inconvenient to do so, having some form of metamorality is useful to serve as a vessel for morality. The problem is, in doing that, people bind the vessel and its contents, and can’t pour the contents into some other vessel if theirs turns out to be leaky. Which is why rationalism is important.

• Not hard at all, Caledonian.

Also, stop trolling. Offer some insight, or go away.

• Another way to look at this idea of math being a tool that exists only in the mind has occurred to me:

Does addition happen outside the mind? What is something “plus” something else? If we’ve got a quantity of two sheep, and a quantity of three sheep, and they’re standing next to each other, then we can consider the two quantities together, and count five sheep. But let’s say a quantity of two sheep wander through a meadow until they come across a quantity of three sheep, and then stop. Where did the actual addition happen? Outside the mind, there are only quantities.
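The sheep question above can be phrased as a toy program (obviously a caricature): the meadow contains only sheep, while the “+” lives entirely in the counting step that a mind performs.

```python
# Outside the mind: two flocks in a meadow. No "+" anywhere, just sheep.
flock_a = ["sheep"] * 2
flock_b = ["sheep"] * 3
meadow = flock_a + flock_b  # the flocks wander together; still just sheep

# Inside the mind: we count each flock, then apply the tool called
# "addition" to the counts.
predicted = len(flock_a) + len(flock_b)

# Counting the meadow directly agrees with the prediction, which is why
# the tool is so useful; but the addition happened in our heads.
print(len(meadow), predicted)  # 5 5
```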

• I think the problem I have with the math example, and it may be that this is extensible to morality, is this:

If I have a certain quantity of apples, or sheep, or whatever, my mind has a tool (a number) ready to identify some characteristic of that quantity (how many it is). But that’s all that number is: a tool. A reference.

Eliezer is right in saying that the teacher’s teaching “2+3=5” doesn’t make it true any more than the teacher’s teaching “2+3=6” makes it true. But that’s not because two plus three “actually” equals five. It’s because we, as learning animals, have learned definitions of these concepts, and we conceive of them as being fundamental. We think of math as a fundamental part of reality, when it is in fact a low-level, extremely useful, but all-in-the-mind tool used to manipulate our understanding of reality. We’re confusing the map with the territory.

Taking this over to morality:

“Killing is wrong” isn’t true because someone told us it’s true, any more than “Killing is right” would be true if someone were to tell us that. But that’s not because killing another human being “actually” is wrong. It’s because we, as learning animals, have learned definitions of right and wrong (or evolved the low-level emotions that serve as a foundation for such rules), and we conceive of them as being fundamental. We think of morality as a fundamental part of reality, when it is in fact an all-in-the-mind tool. Should we throw it out because it’s merely evolved? No. It’s useful (at least for the species). But we shouldn’t confuse the map with the territory.

This is still pretty fuzzy in my mind; please criticize, especially if I’ve made some fundamental error.

• If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say “Pain Is Good”? What then?

Well, Eliezer, since I can’t say it as eloquently as you:

“Embrace reality. Hug it tight.”

“It is always best to think of reality as perfectly normal. Since the beginning, not one unusual thing has ever happened.”

If we find that Stone Tablet, we adjust our model accordingly.

• Z. M. Davis: Thank you. I get it now.

• Roland and Ian C. both help me understand where Eliezer is coming from. And PK’s comment that “Reality will only take a single path” makes sense. That said, when I say a die has a 1/6 probability of landing on a 3, that means: over a series of rolls in which no effort is made to systematically control the outcome (e.g. by always starting with 3 facing up before tossing the die), the die will land on a 3 about 1 in 6 times. Obviously, with perfect information, everything can be calculated. That doesn’t mean that we can’t predict the probability of a specific event.
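To make “about 1 in 6 times” concrete, here is a quick simulation of the frequency reading I have in mind (an illustrative sketch, not anything from Eliezer’s post):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
rolls = 100_000

# Roll a fair six-sided die many times with no systematic control of
# the outcome, and count how often a 3 comes up.
threes = sum(1 for _ in range(rolls) if random.randint(1, 6) == 3)

print(threes / rolls)  # close to 1/6, i.e. about 0.167
```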

Also, I didn’t get a response to the Gomboc ( http://tinyurl.com/2rffxs ) argument. I would say that it has an inherent 100% probability of righting itself. Even if I knew nothing about the object, the real probability of it righting itself is 100%. Now, I might not bet on those odds without previous knowledge, but no matter what I know, the object will right itself. How is this incorrect?

• It seems to me you’re using “perceived probability” and “probability” interchangeably. That is, you’re “defining” probability as the probability that an observer assigns based on certain pieces of information. Is it not true that when one rolls a fair 1d6, there is an actual 1/6 probability of getting any one specific value? Or, using your biased coin example: our information may tell us to assume a 50/50 chance, but the man may be correct in saying that the coin has a bias. That is, the coin may really come up heads 80% of the time, but we must assume a 50% chance to make the decision, until we can be certain of the 80% chance ourselves. What am I missing? I would say that the Gomboc (http://tinyurl.com/2rffxs) has a 100% chance of righting itself, inherently. I do not understand how this is incorrect.
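What I mean by assuming 50/50 “until we can be certain of the 80% ourselves” could be sketched as a simple update between two hypotheses about the coin. This framing is my own, not Eliezer’s, and the numbers are just the ones from the example:

```python
# Two hypotheses: the coin is fair (p_heads = 0.5) or biased (p_heads = 0.8).
# Start from some prior probability that it is biased, and update on flips.
def update(prior_biased, flips):
    p = prior_biased
    for flip in flips:
        like_biased = 0.8 if flip == "H" else 0.2
        like_fair = 0.5
        p = (like_biased * p) / (like_biased * p + like_fair * (1 - p))
    return p

# Before seeing any flips, nothing changes: we keep our starting belief.
print(update(0.5, ""))  # 0.5

# After a heads-heavy run, belief shifts strongly toward the 80% bias:
p_after = update(0.5, "HHHHHHHHTH")
print(p_after)
```

On this picture the observer’s 50/50 is a starting assignment that the evidence gradually replaces, which is roughly the distinction between “perceived probability” and the coin’s real bias that I was trying to draw.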