# Circular Altruism

Followup to: Torture vs. Dust Specks, Zut Allais, Rationality Quotes 4

Suppose that a disease, or a monster, or a war, or something, is killing people. And suppose you only have enough resources to implement one of the following two options:

1. Save 400 lives, with certainty.

2. Save 500 lives, with 90% probability; save no lives, 10% probability.

Most people choose option 1. Which, I think, is foolish; because if you multiply 500 lives by 90% probability, you get an expected value of 450 lives, which exceeds the 400-life value of option 1. (Lives saved don’t diminish in marginal utility, so this is an appropriate calculation.)
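The arithmetic can be checked in a few lines. A minimal sketch, using exact rational arithmetic to avoid floating-point noise (the numbers are just the ones from the two options above):

```python
from fractions import Fraction

p_save = Fraction(90, 100)      # option 2: 90% probability of success
expected_saved = p_save * 500   # expected lives saved under option 2

assert expected_saved == 450    # exceeds option 1's certain 400
assert expected_saved > 400
```
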

“What!” you cry, incensed. “How can you gamble with human lives? How can you think about numbers when so much is at stake? What if that 10% probability strikes, and everyone dies? So much for your damned logic! You’re following your rationality off a cliff!”

Ah, but here’s the interesting thing. If you present the options this way:

1. 100 people die, with certainty.

2. 90% chance no one dies; 10% chance 500 people die.

Then a majority choose option 2. Even though it’s the same gamble. You see, just as a certainty of saving 400 lives seems to feel so much more comfortable than an unsure gain, so too, a certain loss feels worse than an uncertain one.
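Since 500 people are at stake in total, the two framings describe the very same pair of gambles; a quick sketch confirming the bookkeeping (exact rationals again, so the equalities are exact):

```python
from fractions import Fraction

TOTAL = 500                       # people at stake in both framings
p = Fraction(9, 10)               # 90% probability

# First framing: lives saved.
saved_certain = 400               # option 1
saved_gamble = p * 500            # option 2, in expectation

# Second framing: deaths.
dead_certain = 100                # option 1
dead_gamble = (1 - p) * 500       # option 2, in expectation

# Each framing is just TOTAL minus the other: the same gamble.
assert TOTAL - saved_certain == dead_certain
assert TOTAL - saved_gamble == dead_gamble
```
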

You can grandstand on the second description too: “How can you condemn 100 people to certain death when there’s such a good chance you can save them? We’ll all share the risk! Even if it was only a 75% chance of saving everyone, it would still be worth it—so long as there’s a chance—everyone makes it, or no one does!”

You know what? This isn’t about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feelings of comfort or discomfort with a plan. Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn’t even a feather in the scales, when a life is at stake. Just shut up and multiply.

Previously on Overcoming Bias, I asked what was the least bad, bad thing that could happen, and suggested that it was getting a dust speck in your eye that irritated you for a fraction of a second, barely long enough to notice, before it got blinked away. And conversely, a very bad thing to happen, if not the worst thing, would be getting tortured for 50 years.

Now, would you rather that a googolplex people got dust specks in their eyes, or that one person was tortured for 50 years? I originally asked this question with a vastly larger number—an incomprehensible mathematical magnitude—but a googolplex works fine for this illustration.

Most people chose the dust specks over the torture. Many were proud of this choice, and indignant that anyone should choose otherwise: “How dare you condone torture!”

This matches research showing that there are “sacred values”, like human lives, and “unsacred values”, like money. When you try to trade off a sacred value against an unsacred value, subjects express great indignation (sometimes they want to punish the person who made the suggestion).

My favorite anecdote along these lines—though my books are packed at the moment, so no citation for now—comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective. The governmental agency rejected the report because, they said, you couldn’t put a dollar value on human life. After rejecting the report, the agency decided not to implement the measure.

Trading off a sacred value (like refraining from torture) against an unsacred value (like dust specks) feels really awful. To merely multiply utilities would be too cold-blooded—it would be following rationality off a cliff...

But let me ask you this. Suppose you had to choose between one person being tortured for 50 years, and a googol people being tortured for 49 years, 364 days, 23 hours, 59 minutes and 59 seconds. You would choose one person being tortured for 50 years, I do presume; otherwise I give up on you.

And similarly, if you had to choose between a googol people tortured for 49.9999999 years, and a googol-squared people being tortured for 49.9999998 years, you would pick the former.

A googolplex is ten to the googolth power. That’s a googol/100 factors of a googol. So we can keep doing this, gradually—very gradually—diminishing the degree of discomfort, and multiplying by a factor of a googol each time, until we choose between a googolplex people getting a dust speck in their eye, and a googolplex/googol people getting two dust specks in their eye.
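A googolplex itself is far too large to represent directly, but the number of rungs in this ladder can be computed exactly in log space; a sketch (Python’s arbitrary-precision integers handle 10^100 without trouble):

```python
# Work with log10 of the head-counts: a googol is 10**100, so
# log10(googol) = 100; a googolplex is 10**(10**100), so
# log10(googolplex) = 10**100.
LOG10_GOOGOL = 100
LOG10_GOOGOLPLEX = 10**100

# Each step multiplies the population by a googol, i.e. adds 100 to the
# log10 head-count, so the number of steps from one person up to a
# googolplex of people is:
steps = LOG10_GOOGOLPLEX // LOG10_GOOGOL

assert steps == 10**98   # "a googol/100 factors of a googol"
```
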

If you find your preferences are circular here, that makes rather a mockery of moral grandstanding. If you drive from San Jose to San Francisco to Oakland to San Jose, over and over again, you may have fun driving, but you aren’t going anywhere. Maybe you think it a great display of virtue to choose for a googolplex people to get dust specks rather than one person being tortured. But if you would also trade a googolplex people getting one dust speck for a googolplex/googol people getting two dust specks, et cetera, you sure aren’t helping anyone. Circular preferences may work for feeling noble, but not for feeding the hungry or healing the sick.

Altruism isn’t the warm fuzzy feeling you get from being altruistic. If you’re doing it for the spiritual benefit, that is nothing but selfishness. The primary thing is to help others, whatever the means. So shut up and multiply!

And if it seems to you that there is a fierceness to this maximization, like the bare sword of the law, or the burning of the sun—if it seems to you that at the center of this rationality there is a small cold flame—

Well, the other way might feel better inside you. But it wouldn’t work.

And I say also this to you: That if you set aside your regret for all the spiritual satisfaction you could be having—if you wholeheartedly pursue the Way, without thinking that you are being cheated—if you give yourself over to rationality without holding back, you will find that rationality gives to you in return.

But that part only works if you don’t go around saying to yourself, “It would feel better inside me if only I could be less rational.”

Chimpanzees feel, but they don’t multiply. Should you be sad that you have the opportunity to do better? You cannot attain your full potential if you regard your gift as a burden.

Added: If you’d still take the dust specks, see Unknown’s comment on the problem with qualitative versus quantitative distinctions.

• I agree that as you defined the problems, both have problems. But I don’t agree that the problems are equal, for the reason stated earlier. Suppose someone says that the boundary is that 1,526,216,123,000,252 dust specks is exactly equal to 50 years of torture (in fact, it’s likely to be some relatively low number like this rather than anything like a googolplex). It is true that proving this would be a problem. But it is no particular problem that 1,526,216,123,000,251 dust specks would be preferable to the torture, while the torture would be preferable to 1,526,216,123,000,253 dust specks: the point is that the torture would differ from each of these values by an extremely tiny amount.

But suppose someone defines a qualitative boundary: 1,525,123 degrees of pain (given some sort of measure) has an intrinsically worse quality than 1,525,122 degrees, such that no amount of the latter can ever add up to the former. It seems to me that there is a problem which doesn’t exist in the other case, namely that a trillion people suffering pain of 1,525,122 degrees for a trillion years is said to be preferable to one person suffering pain of 1,525,123 degrees for one year.

In other words: both positions have difficult-to-find boundaries, but one directly contradicts intuition in a way the other does not.

• Suppose that the qualitative difference is between bearable and unbearable, in other words, between things that are above or below the pain tolerance. A pain just below pain tolerance, when experienced for a short time, will remain bearable; however, if it is prolonged for a long time, it will become unbearable, because human patience is limited. So, even if we give importance to qualitative differences, we can still choose to avoid torture and your second scenario, without going against our intuitions or being incoherent.

Moreover, we can describe qualitative differences as the colors on the spectrum of visible light: their edges are nebulous, but we can still agree that the grass is green and the sea is blue. This means that two very close points on the spectrum appear as part of the same color, but when their distance increases, they become part of two different colors.

1,525,122 and 1,525,123 are so close that we can see them as shades of the same qualitative category. On the other hand, dust speck and torture are very distant from each other, and we can consider them as part of two different qualitative categories.

• To be more precise: let’s assume that the time will be quite short (5 seconds, for example). In this case I think it is really better to let billions of people suffer 5 seconds of bearable pain than to let one person suffer 5 seconds of unbearable pain. After all, people can stand a bearable pain, by definition.

However, pain tolerance is subjective, and in real life we don’t know exactly where the threshold is for each person, so we can prefer, as a heuristic rule, the option with fewer people involved when the pains are similar to each other (maybe we have evolved some system to make such approximations, a sort of threshold insensitivity).

• I’m not totally convinced—there may be other factors that make such qualitative distinctions important. Such as exceeding the threshold to boiling. Or putting enough bricks in a sack to burst the bottom. Or allowing someone to go long enough without air that they cannot be resuscitated. It probably doesn’t do any good to pose *arbitrary* boundaries, for sure, but not all such qualitative distinctions are arbitrary...

• Or allowing someone to go long enough without air that they cannot be resuscitated.

This is less of a single qualitative distinction than you would think, given the various degrees of neurological damage that can leave a person more or less the same person that they were before.

• Good point… you are right about that. It would be more a matter of degrees of personhood, especially if you had advanced medical technologies available, such as neural implants.


• So we can keep doing this, gradually—very gradually—diminishing the degree of discomfort...

Eliezer, your readiness to assume that all ‘bad things’ are on a continuous scale, linear or no, really surprises me. Put your enormous numbers away; they’re not what people are taking umbrage at. Do you think that if a googol doesn’t convince us, perhaps a googolplex will? Or maybe 3^^^3? If x and y are finite, there will always be a quantity of x that exceeds y, and vice versa. We get the maths; we just don’t agree that the phenomena are comparable. Broken ankle? Stubbing your toe? Possibly, there is certainly more of a tangible link there, but you’re still imposing your judgment of how the mind experiences and deals with discomfort on us all and calling it rationality. It isn’t.

Put simply—a dust mote registers exactly zero on my torture scale, and torture registers fundamentally off the scale (not just off the top, off) on my dust mote scale.

You’re asking how many biscuits equal one steak, and then when one says ‘there is no number’, accusing him of scope insensitivity.

• So you wouldn’t pay one cent to prevent 3^^^3 people from getting a dust speck in their eye?

• Sure. My loss of utility from losing the cent might be less than the gain in utility for those people to not get dust specks—but these are both what Ben might consider trivial events; it doesn’t address the problem Ben Jones has with the assumption of a continuous scale. I’m not sure I’d pay $100 for any amount of people to not get specks in their eyes, because now we may have made the jump to a non-trivial cost for the addition of trivial payoffs.

• Ben Jones didn’t recognise the dust speck as “trivial” on his torture scale; he identified it as “zero”. There is a difference: if dust-speck disutility is equal to zero, you shouldn’t pay one cent to save 3^^^3 people from it. 0 × 3^^^3 = 0, and the disutility of losing one cent is non-zero. If you assign an epsilon of disutility to a dust speck, then 3^^^3 × epsilon is way more than one person suffering 50 years of torture. For all intents and purposes, 3^^^3 = infinity. The only way that infinity × X can fail to be worse than a finite number is if X is equal to 0. If X = 0.00000001, then torture is preferable to dust specks.
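The zero-versus-epsilon dichotomy in this comment can be made concrete with a quick sketch. 3^^^3 itself is far too large to compute, so a mere 10^100 stands in for it below, and the torture disutility is an arbitrary placeholder of my choosing:

```python
N = 10**100       # stand-in for 3^^^3 (the real number is unimaginably larger)
TORTURE = 10**9   # arbitrary finite disutility assigned to 50 years of torture

# If a dust speck's disutility is exactly zero, no multitude of them matters:
assert 0 * N == 0

# But any nonzero epsilon, however small, swamps the torture:
eps = 1e-8
assert eps * N > TORTURE
```
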

• Well, he didn’t actually identify dust mote disutility as zero; he says that dust motes register as zero on his torture scale. He goes on to mention that torture isn’t on his dust-mote scale, so he isn’t just using “torture scale” as a synonym for “disutility scale”; rather, he is emphasizing that there is more than just a single “(dis)utility scale” involved. I believe his contention is that the events (torture and dust-mote-in-the-eye) are fundamentally different in terms of “how the mind experiences and deals with [them]”, such that no amount of dust motes can add up to the experience of torture… even if they (the motes) have a nonzero amount of disutility.

I believe I am making much the same distinction with my separation of disutility into trivial and non-trivial categories, where no amount of trivial disutility across multiple people can sum to the experience of non-trivial disutility. There is a fundamental gap in the scale (or different scales altogether, à la Jones), a difference in how different amounts of disutility work for humans. For a more concrete example of how this might work, suppose I steal one cent each from one billion different people, and Eliezer steals $100,000 from one person. The total amount of money I have stolen is greater than the amount that Eliezer has stolen; yet my victims will probably never even realize their loss, whereas the loss of $100,000 for one individual is significant. A cent does have a nonzero amount of purchasing power, but none of my victims have actually lost the ability to purchase anything; Eliezer’s victim, on the other hand, has lost the ability to purchase many, many things.

I believe utility for humans works in the same manner. Another thought experiment I found helpful is to imagine a certain amount of disutility, x, being experienced by one person. Let’s suppose x is “being brutally tortured for a week straight”. Call this situation A. Now divide this disutility among people until we have y people all experiencing (1/y)*x disutility—say, a dust speck in the eye each. Call this situation B. If we can add up disutility like Eliezer supposes in the main article, the total amount of disutility in either situation is the same. But now, ask yourself: which situation would you choose to bring about, if you were forced to pick one?

Would you just flip a coin?

I believe few, if any, would choose situation A. This brings me to a final point I’ve been wanting to make about this article, but have never gotten around to doing. Mr. Yudkowsky often defines rationality as winning—a reasonable definition, I think. But with this dust speck scenario, if we accept Mr. Yudkowsky’s reasoning and choose the one-person-being-tortured option, we end up with a situation in which every participant would rather that the other option had been chosen! Certainly the individual being tortured would prefer that, and each potentially dust-specked individual* would gladly agree to experience an instant of dust-speckiness in order to save the former individual.

I don’t think this is winning; no one is happier with this situation. Like Eliezer says in reference to Newcomb’s problem, if rationality seems to be telling us to go with the choice that results in losing, perhaps we need to take another look at what we’re calling rationality.

*Well, assuming a population like our own, not every single individual would agree to experience a dust speck in the eye to save the to-be-tortured individual; but I think it is clear that the vast majority would.

• Thank you for trying to address this problem, as it’s important and still bothers me.

But I don’t find your idea of two different scales convincing. Consider electric shocks. We start with an imperceptibly low voltage and turn up the dial until the first level at which the victim is able to perceive slight discomfort (let’s say one volt). Suppose we survey people and find that a one-volt shock is about as unpleasant as a dust speck in the eye, and most people are indifferent between them.

Then we turn the dial up further, and by some level, let’s say two hundred volts, the victim is in excruciating pain. We can survey people and find that a two-hundred-volt shock is equivalent to whatever kind of torture was being used in the original problem.

So one volt is equivalent to a dust speck (and so on the “trivial scale”), but two hundred volts is equivalent to torture (and so on the “nontrivial scale”). But this implies either that triviality exists only in degree (which ruins the entire argument, since enough triviality aggregated equals nontriviality) or that there must be a sharp discontinuity somewhere (e.g., a 21.32-volt shock is trivial, but a 21.33-volt shock is nontrivial). But the latter is absurd. Therefore there should not be separate trivial and nontrivial utility scales.

• First of all, you might benefit from looking up the beard fallacy.

To address the issue at hand directly, though:

Of course there are sharp discontinuities. Not just one sharp discontinuity, but countless. However, there is no particular voltage at which there is a discontinuity. Rather, increasing the voltage increases the probability of a discontinuity.

I will list a few discontinuities established by torture.

1. Nightmares. As the electrocution experience becomes more severe, the probability that it will result in a nightmare increases. After 50 years of high voltage, hundreds or even thousands of such nightmares are likely to have occurred. However, 1 second of 1 V is unlikely to result in even a single nightmare. The first nightmare is a sharp discontinuity. But furthermore, each additional nightmare is another sharp discontinuity.

2. Stress responses to associational triggers. The first such stress response is a sharp discontinuity, but so is every other one. Note that there is a discontinuity for each instance of stress response that follows in your life: each one is its own discontinuity. So, if you will experience 10,500 stress responses, that is 10,500 discontinuities. It’s impossible to say beforehand what voltage or how many seconds will make the difference between 10,499 and 10,500, but in theory a probability could be assigned. I think there are already actual studies that have measured the increased stress response after electroshock, over short periods.

3. Flashbacks. Again, the first flashback is a discontinuity, as is every other flashback. Every time you start crying during a flashback is another discontinuity.

4. Social problems. The first relationship that fails (e.g., the first woman that leaves you) because of the social ramifications of damage to your psyche is a discontinuity. Every time you flee from a social event: another discontinuity. Every fight that you have with your parents as a result of your torture (and the fact that you have become unrecognizable to them) is a discontinuity. Every time you fail to make eye contact is a discontinuity. If not for the torture, you would have made the eye contact, and every failure represents a forked path in your entire future social life.

I could go on, but you can look up the symptoms of PTSD yourself. I hope, however, that I have impressed upon you the fact that life constitutes a series of discrete events, not a continuous plane of quantifiable and summable utility lines. It’s “sharp discontinuities” all the way down to elementary particles. Be careful with mathematical models involving a continuum.

Please note that flashbacks, nightmares, stress responses to triggers, and social problems do not result from specks of dust in the eye.

• Except perception doesn’t work like that. We can have two qualitatively different perceptions arising from quantities of the same stimulus. We know that irritation and pain use different nerve endings, for instance; and electric shock in different quantities could turn on irritation at a lower threshold than pain. Similarly, a dim light is perceived via the rod cells (which carry no color information), while a brighter light of the same frequency is perceived as color by the cone cells. A baby wailing may be perceived as unpleasant; turn it up to jet-engine volume and it will be perceived as painful.

• Okay, good point. But if we change the argument slightly to the smallest perceivable amount of pain, it’s still biting a pretty big bullet to say 3^^^3 of those is worse than 50 years of torture.

(The theory would also imply that an infinite amount of irritation is not as bad as a tiny amount of pain, which doesn’t seem to be true.)

• (The theory would also imply that an infinite amount of irritation is not as bad as a tiny amount of pain, which doesn’t seem to be true.)

Hmm, not sure. It seems quite plausible to me that for any n, an instance of real harm to one person is worse than n instances of completely harmless irritation to n people. Especially if we consider a bounded utility function; the n instances of irritation have to flatten out at some finite level of disutility, and there is no a priori reason to exclude torture to one person having a worse disutility than that asymptote.
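The bounded-utility point here can be illustrated with a toy aggregation function. Everything below (the cap, the per-irritation epsilon, the torture value) is an assumed parameter of my own, not anything from the original discussion; the point is only that a bounded aggregate stays below the asymptote for every n:

```python
import math

CAP = 1.0       # asymptotic bound on total irritation disutility (assumption)
EPS = 1e-6      # disutility contributed by one harmless irritation (assumption)
TORTURE = 2.0   # torture disutility, placed above the asymptote (assumption)

def total_irritation(n):
    # Bounded aggregation: grows with n but never exceeds CAP.
    return CAP * (1 - math.exp(-n * EPS / CAP))

# For every n, however huge, the aggregate stays below the torture value.
for n in (1, 10**6, 10**12, 10**18):
    assert total_irritation(n) < TORTURE
```
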

Having said all that, I’m not sure I buy into the concept of completely harmless irritation. I doubt we’d perceive a dust speck as a disutility at all except for the fact that it has a small probability of causing big harm (loss of life or offspring) somewhere down the line. A difficulty with the whole problem is the stipulation that the dust specks do nothing except cause slight irritation… no major harm results to any individual. However, throwing a dust speck in someone’s eye would in practice have a very small probability of very real harm, such as distraction while operating dangerous machinery (driving, flying, etc.), starting an eye infection which leads to months of agony and loss of sight, a slight shock causing a stumble and broken limbs or leading to a bigger shock and heart attack. Even the very mild irritation may be enough to send an irritable person “over the edge” into punching a neighbour, or a gun rampage, or a borderline suicidal person into suicide. All these are spectacularly unlikely for each individual, but if you multiply by 3^^^3 people you still get order 3^^^3 instances of major harm.

• With that many instances, it’s even highly likely that at least one of the specks in the eye will offer a rare opportunity for some poor prisoner to escape his captors, who had intended to subject him to 50 years of torture.

• The theory would also imply that an infinite amount of irritation is not as bad as a tiny amount of pain, which doesn’t seem to be true.

I’m increasingly convinced that the whole Torture vs. Dust Specks scenario is sparking way more heat than light, but...

I can imagine situations where an infinite amount of some type of irritation integrated to something equivalent to some finite but non-tiny amount of pain. I can even imagine situations where that amount was a matter of preference: if you asked someone what finite level of pain they’d accept to prevent some permanent and annoying but non-painful condition, I’d expect the answers to differ significantly. Granted, “lifelong” is not “infinite”, and there’s hyperbolic discounting and various other issues to correct for, but even after these corrections a finite answer doesn’t seem obviously wrong.

• Well, for one thing, pain is not negative utility…

Pain is a specific set of physiological processes. Recent discoveries suggest that it shares some brain-space with other phenomena such as social rejection and math anxiety, which are phenomenologically distinct.

It is also phenomenologically distinct from the sensations of disgust, grief, shame, or dread—which are all unpleasant and inspire us to avoid their causes. Irritation, anxiety, and many other unpleasant sensations can take away from our ability to experience pleasure; many of them can also make us less effective at achieving our own goals.

In place of an individual experiencing “50 years of torture” in terms of physiological pain, we might consider 50 years of frustration, akin to the myth of Sisyphus or Tantalus; or 50 years of nightmare, akin to that inflicted on Alex Burgess by Morpheus in The Sandman…

• A bet­ter metaphor: What if we re­placed “get­ting a dust speck in your eye” with “be­ing hor­ribly tor­tured for one sec­ond”? Ig­nore the prac­ti­cal prob­lems of the lat­ter, just say the per­son ex­pe­riences the ex­act same (av­er­age) pain as be­ing hor­ribly tor­tured, but for one sec­ond.

That al­lows us to di­rectly com­pare the two ex­pe­riences much bet­ter, and it seems to me it elimi­nates the “you can’t com­pare the two ex­pe­riences”- ex­cept of course with long term effects of tor­ture, I sup­pose; to get a perfect com­par­i­son we’d need a tor­ture ma­chine that not only does no phys­i­cal dam­age, but no psy­cholog­i­cal dam­age ei­ther.

On the other hand, it does leave in OnTheOtherHan­dle’s ar­gu­ment about “fair­ness” (speci­fi­cally in the “shar­ing of bur­dens” defi­ni­tion, since oth­er­wise we could just say the per­son tor­tured is se­lected at ran­dom). Which to me as a util­i­tar­ian makes perfect sense; I’m not sure if I agree or dis­agree with him on that.

• For a more con­crete ex­am­ple of how this might work, sup­pose I steal one cent each from one billion differ­ent peo­ple, and Eliezer steals \$100,000 from one per­son. The to­tal amount of money I have stolen is greater than the amount that Eliezer has stolen; yet my vic­tims will prob­a­bly never even re­al­ize their loss, whereas the loss of \$100,000 for one in­di­vi­d­ual is sig­nifi­cant. A cent does have a nonzero amount of pur­chas­ing power, but none of my vic­tims have ac­tu­ally lost the abil­ity to pur­chase any­thing; whereas Eliezer’s, on the other hand, has lost the abil­ity to pur­chase many, many things.

Isn’t this a re­duc­tio of your ar­gu­ment? Steal­ing \$10,000,000 has less eco­nomic effect than steal­ing \$100,000, re­ally? Well, why don’t we just do it over and over, then, since it has no effect each time? If I re­peated it enough times, you would sud­denly de­cide that the av­er­age effect of each \$10,000,000 theft, all told, had been much larger than the av­er­age effect of the \$100,000 theft. So where is the point at which, sud­denly, steal­ing 1 more cent from ev­ery­one has a much larger and dis­pro­por­tionate effect, enough to make up for all the “neg­ligible” effects ear­lier?

• Money is not a lin­ear func­tion of util­ity. A cer­tain amount is nec­es­sary to ex­is­tance (enough to ob­tain food, shelter, etc.) A per­son’s first dol­lar is thus a good deal more valuable than a per­son’s mil­lionth dol­lar, which is in turn more valuable than their billionth dol­lar. There is clearly some ad­di­tional util­ity from each ad­di­tional dol­lar, but I sus­pect that the to­tal util­ity may well be asymp­totic.

The to­tal di­su­til­ity of steal­ing an amount of money, \$X, from a per­son with to­tal wealth \$Y, is (at least ap­procx­i­mately) equal to the differ­ence in util­ity be­tween \$Y and \$(Y-X). (There may be some ad­di­tional di­su­til­ity from the fact that a theft oc­curred—peo­ple may worry about be­ing the next vic­tim or falsely ac­cuse some­one else or so forth—but that should be roughly equiv­a­lent for any theft, and thus I shall dis­re­gard it).

So. Steal­ing one dol­lar from a per­son who will starve with­out that dol­lar is there­fore worse than steal­ing one dol­lar from a per­son who has a billion more dol­lars in the bank.

Steal­ing one dol­lar from each of one billion peo­ple, who will each starve with­out that dol­lar, is far, far worse than steal­ing \$100 000 from one per­son who has an­other \$1e100 in the bank.

Steal­ing \$100 000 from a per­son who only had \$100 000 to start with is worse than steal­ing \$1 from each of one billion peo­ple, each of whom have an­other billion dol­lars in sav­ings.

Now, if we as­sume a level play­ing field—that is, that ev­ery sin­gle per­son starts with the same amount of money (say, \$1 000 000) and no-one will starve if they lose \$100 000, then it be­gins to de­pend on the ex­act func­tion used to find the util­ity of money.

There are functions such that a million thefts of \$1 each result in less disutility than a single theft of \$100 000. (If asked to find an example, I will take a simple exponential function and fiddle with the parameters until this is true.) However, if you continue adding additional thefts of \$1 each from the same million people, an interesting effect takes place: each additional theft of \$1 each from the same million people is worse than the previous one. By the time you hit the hundred-thousandth theft of \$1 each from the same million people, that last theft is substantially more than ten times worse than a single theft of \$100 000 from one person.

• Yeah, but also keep in mind that peo­ple’s util­ity func­tions can­not be very con­cave. (My rephras­ing is pretty mis­lead­ing but I can’t think of a bet­ter one, do read the linked post.)

• Hmmm. The linked post talks about the per­ceived util­ity of money; that is, what the owner of the money thinks it is worth. This is not the same as the ac­tual util­ity of money, which is what I am try­ing to use in the grand­par­ent post.

I apol­o­gise if that was not clear, and I hope that this has cleared up any lin­ger­ing mi­s­un­der­stand­ings.

• It seems like you and Hul-Gil are us­ing differ­ent for­mu­lae for eval­u­at­ing util­ity (or, rather, di­su­til­ity); and, there­fore, you are talk­ing past each other.

While Hul-Gil is look­ing solely at the im­me­di­ate pur­chas­ing power of each in­di­vi­d­ual, you are con­sid­er­ing rip­ple effects af­fect­ing the econ­omy as a whole. Thus, while steal­ing a sin­gle penny from a sin­gle in­di­vi­d­ual may have neg­ligible di­su­til­ity, re­mov­ing 1e9 such pen­nies from 1e9 in­di­vi­d­u­als will have a strong nega­tive effect on the econ­omy, thus re­duc­ing the effec­tive pur­chas­ing power of ev­ery­one, your vic­tims in­cluded.

This is a valid point, but it doesn’t re­ally lend any sup­port to ei­ther side in your ar­gu­ment with Hul-Gil, since you’re com­par­ing ap­ples and or­anges.

• I’m pretty sure Eliezer’s point holds even if you only con­sider the im­me­di­ate pur­chas­ing power of each in­di­vi­d­ual.

Let us define thefts A and B:

A: Steal 1 cent from each of 1e9 individuals.
B: Steal 1e7 cents from 1 individual.

The claim here is that A has neg­ligible di­su­til­ity com­pared to B. How­ever, we can define a new theft C as fol­lows:

C: Steal 1e7 cents from each of 1e9 in­di­vi­d­u­als.

Now, I don’t dis­count the pos­si­bil­ity that there are ar­gu­ments to the con­trary, but naively it seems that a C theft is 1e9 times as bad as a B theft. But a C theft is equiv­a­lent to 1e7 A thefts. So, nec­es­sar­ily, one of those A thefts must have been worse than a B theft—sub­stan­tially worse. Eliezer’s ques­tion is: if the first one is neg­ligible, at what point do they be­come so much worse?
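The averaging step can be made explicit. A sketch assuming, as the argument does, that total disutility is additive across people and that a person's disutility depends only on how much they lose; the numeric values of `d_small` and `d_big` are arbitrary stand-ins:

```python
# Assumed per-person disutilities (arbitrary units); only the ratio structure matters.
d_small = 1      # disutility of losing 1 cent
d_big = 10_000   # disutility of losing 1e7 cents -- pick any value you like

A = 1_000_000_000 * d_small   # 1 cent from each of 1e9 people
B = 1 * d_big                 # 1e7 cents from 1 person
C = 1_000_000_000 * d_big     # 1e7 cents from each of 1e9 people

# C is also the end result of 1e7 successive A-style rounds, so on average:
avg_A_round = C / 10_000_000
print(avg_A_round / B)  # -> 100.0
```

Whatever value `d_big` takes, the average round comes out 100 times worse than B (the ratio is just 1e9 / 1e7), so "negligible" rounds cannot stay negligible all the way through.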

• I think this is a ques­tion of on­go­ing col­lat­eral effects (not sure if “ex­ter­nal­ities” is the right word to use here). The ex­am­ples that speak of money are ad­di­tion­ally com­pli­cated by the fact that the pur­chas­ing power of money does not scale lin­early with the amount of money you have.

Con­sider the fol­low­ing two sce­nar­ios:

A). Inflict a −1e-3 utility on 1e9 individuals, with negligible consequences over time, or
B). Inflict a −1e7 utility on a single individual, with further −1e7 consequences in the future.

vs.

C). Inflict a −1e-3 utility on 1e9 individuals, leading to an additional −1e9 utility over time, or
D). Inflict a one-time −1e7 utility on a single individual, with no additional consequences.

Which one would you pick, A or B, and C or D? Of course, we can play with the numbers to make A and C more or less attractive.

I think the problem with Eliezer's "dust speck" scenario is that his disutility of option A, i.e. the dust specks, is basically epsilon, and since it has no additional costs, you might as well pick A. The alternative is a rather solid chunk of disutility, the torture, that will further add up even after the initial torture is over (due to ongoing physical and mental health problems).

The “grand theft penny” sce­nario can be seen as AB or CD, de­pend­ing on how you think about money; and the right an­swer in ei­ther case might change de­pend­ing on how much you think a penny is ac­tu­ally worth.

• A cent does have a nonzero amount of purchasing power, but none of my victims have actually lost the ability to purchase anything.

Assuming that none of them end up one cent short for something they would otherwise have been able to pay for, which, out of a billion people, is probably going to happen to someone. It doesn't have to be their next purchase.

• But this is analo­gous to say­ing some tiny per­centage of the peo­ple who got dust specks would be driv­ing a car at that mo­ment and lose con­trol, re­sult­ing in an ac­ci­dent. That would be an en­tirely differ­ent bal­lgame, even if the per­cent of peo­ple this hap­pened to was uni­mag­in­ably tiny, be­cause in an uni­mag­in­ably vast pop­u­la­tion, lots of peo­ple are bound to die of grue­some dust-speck re­lated ac­ci­dents.

But Eliezer explicitly denied any externalities at all; in our hypothetical, the chances of accidents, blindness, etc. are literally zero. So the chance of not being able to afford a vital heart transplant or whatever for want of a penny must also be literally zero in the analogous hypothetical, no matter how ridiculously large the population gets.

• Not be­ing able to pay for some­thing due to the loss of money isn’t an ex­ter­nal­ity, it’s the only kind of di­rect con­se­quence you’re go­ing to get. If you took a hun­dred thou­sand dol­lars from an in­di­vi­d­ual, they might still be able to make their next pur­chase, but the di­rect con­se­quence would be their be­ing un­able to pay for things they could pre­vi­ously have af­forded.

• The loss of \$100,000 (or one cent) is more or less sig­nifi­cant de­pend­ing on the in­di­vi­d­ual. Which is worse: steal­ing a cent from 100,000,000 peo­ple, or steal­ing \$100,000 from a billion­aire? What if the 100,000,000 peo­ple are very poor and the cent would buy half a slice of bread and they were hun­gry to start with? (Tiny dust specks, at least, have a com­pa­rable an­noy­ance effect on al­most ev­ery­one.)

Eliezer's main gaffe here is choosing a "googolplex" of people with dust specks, when humans do not even have an intuition for googols. So let's scale the problem down to a level a human can understand. Instead of a googolplex of dust specks versus 50 years of torture, take "50 years of torture versus a googol (1 followed by 100 zeros) dust specks", and scale it down linearly to "1 second of torture versus 6.33 x 10^90 dust specks, one per person". That is still far more people than have ever lived, so make it "a dust speck once per minute for every person on Earth for their entire lives (while awake), retroactive for all of our human ancestors too" (let's pretend for a moment that humans won't evolve a resistance to dust specks as a result). By doing this we are still eliminating virtually all of the dust specks.

So now we have one sec­ond of tor­ture ver­sus roughly 2 billion billions of dust specks, which is noth­ing at all com­pared to a googol of dust specks. Once the num­bers are scaled down to a level that or­di­nary col­lege grad­u­ates can be­gin to com­pre­hend, I think many of them would change their an­swer. In­deed, some peo­ple might vol­un­teer for one sec­ond of tor­ture just to save them­selves from get­ting a tiny dust speck in their eye ev­ery minute for the rest of their lives.
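The scaling arithmetic above can be checked directly. A sketch; the 16-waking-hours, 70-year-lifespan, and roughly-110-billion-humans-ever figures are my own rough assumptions in the same spirit as the comment's:

```python
googol = 10**100
torture_seconds = 50 * 365.25 * 24 * 3600      # 50 years in seconds, ~1.58e9

specks_per_second = googol / torture_seconds   # linear scale-down per second
print(f"{specks_per_second:.2e}")              # ~6.34e90, matching the 6.33e90 above

# One speck per waking minute for a lifetime:
specks_per_life = 16 * 60 * 365 * 70           # 16 waking hours/day, 70 years
humans_ever = 110e9                            # rough estimate of humans ever born
total = specks_per_life * humans_ever
print(f"{total:.2e}")                          # ~2.7e18, i.e. "billions of billions"
```

So the scaled-down scenario does land around a couple of billion billions of specks, which is indeed nothing at all next to a googol.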

The fact that humans can't feel these numbers isn't something you teach by just saying it. You teach it by creating a tension between the feeling brain and the thinking brain. Due to your ego, I would guess your brain can better imagine feeling a tiny dust speck in its eye once per minute for your entire life (20 million specks) than it can imagine 20 million people getting a tiny dust speck in their eye once, but how is it any different morally? For most people, also, 20 billion people with a dust speck feels just the same as 20 million. They both feel like "really big numbers", but in reality one number is a thousand times worse, and your thinking brain can see that. In this way, I hope you learn to trust your thinking brain more than your feeling one.

• You might be right. I'll have to think about this, and reconsider my stance. One billion is obviously far less than 3^^^3, but you are right in that the 10 million dollars stolen by you would be preferable, to me, to the 100,000 dollars stolen by Eliezer. I also consider losing 100,000 dollars less than or equal to 100,000 times as bad as losing one dollar. This indicates one of two things:

A) My utility system is deeply flawed.
B) My utility system includes some sort of 'diffusion factor' wherein a disutility of X becomes <X when divided among several people, and the disutility becomes lower the more people it's divided among. In essence, there is some disutility for one person suffering a lot of disutility that isn't there when it's divided among a lot of people.

Of this, B seems more likely, and I didn’t take it into ac­count when con­sid­er­ing tor­ture vs. dust specks. In any case, some in­tro­spec­tion on this should help me fur­ther define my util­ity func­tion, so thanks for giv­ing me some­thing to think about.

• This ar­gu­ment does not show that putting dust specks in the eyes of 3^^^3 peo­ple is bet­ter than tor­tur­ing one per­son for 50 years. It shows that putting dust specks in the eyes of 3^^^3 peo­ple and then tel­ling them they helped save some­one from tor­ture is bet­ter than tor­tur­ing one per­son for 50 years.

• Yes—though it does mean Eliezer has to as­sume that the reader’s im­plau­si­ble state of knowl­edge is not and will not be shared by many of the 3^^^3.

• Dust, it turns out, is not nat­u­rally oc­cur­ring, but is only pro­duced as a byproduct of thought ex­per­i­ments.

• “But with this dust speck sce­nario, if we ac­cept Mr. Yud­kowsky’s rea­son­ing and choose the one-per­son-be­ing-tor­tured op­tion, we end up with a situ­a­tion in which ev­ery par­ti­ci­pant would rather that the other op­tion had been cho­sen! Cer­tainly the in­di­vi­d­ual be­ing tor­tured would pre­fer that, and each po­ten­tially dust-specked in­di­vi­d­ual* would gladly agree to ex­pe­rience an in­stant of dust-speck­i­ness in or­der to save the former in­di­vi­d­ual.”

A ques­tion for com­par­i­son: would you rather have a 1/​Googol­plex chance of be­ing tor­tured for 50 years, or lose 1 cent? (A bet­ter com­par­i­son in this case would be if you re­placed “tor­tured for 50 years” with “death”.)

Also: for the original metaphor, imagine that you aren't the only person being offered this choice, and that the people suffering the consequences are drawn from the same pool (which is how real life works, although in this world we have a population of 1 googolplex rather than 7 billion). If we replace "dust speck" with "horribly tortured for 1 second", and we give 1.5 billion people the same choice and presume they all make the same decision, then the choice is between 1.5 billion people being horribly tortured for 50 years, and 1 googolplex people each being horribly tortured for 1.5 billion seconds, which is roughly 50 years.

• A ques­tion for com­par­i­son: would you rather have a 1/​Googol­plex chance of be­ing tor­tured for 50 years, or lose 1 cent?

Whenever I drive, I have a greater than 1/googolplex chance of getting into an accident which would leave me suffering for 50 years, and I still drive. I'm not sure how to measure the benefit I get from driving, but there are at least some cases where it's pretty small, even if it's not exactly a cent.

• When­ever one bends down to pick up a dropped penny, one has more than a 1/​Googol­plex chance of a slip-and-fall ac­ci­dent which would leave one suffer­ing for 50 years.

• As a rather firm speck-ist, I’d like to say that this is the best at­tempt at a for­mal ex­pla­na­tion of speck­ism that I’ve read so far! I’m grate­ful for this, and pleased that I no longer need to use mud­dier and va­guer jus­tifi­ca­tions.

• Another thing that seems to be a fac­tor, at least for me, is that there’s a term in my util­ity func­tion for “fair­ness,” which usu­ally trans­lates to some­thing roughly similar to “shar­ing of bur­dens.” (I also have a term for “free­dom,” which is in con­flict with fair­ness but is on the same scale and can be traded off against it.)

Why wouldn’t this be a situ­a­tion in which “the com­plex­ity of hu­man value” comes into play? Why is it wrong to think some­thing along the lines of, “I would be will­ing to make ev­ery­one a tiny bit worse off so that no one per­son has to suffer ob­scenely”? It’s the ra­tio­nale be­hind tax­a­tion, and while it’s up for de­bate many Less Wrongers sup­port mod­er­ate tax­a­tion if it would help a few peo­ple a lot while hurt­ing a bunch of peo­ple a lit­tle bit.

Think about it: the dollars taken from people in taxes don't all go directly toward feeding the hungry. Some get eaten up in bureaucratic inefficiencies, some go to bribery and embezzlement, some go to the military. This means that if you taxed 1,000,000 well-off people \$1 each, but only ended up giving 100 hungry people \$1000 each to stave off a painful death from starvation, we as utilitarians would be absolutely, 100% obligated to oppose this taxation system, not because it's inefficient, but because doing nothing would be better. There is to be no room for debate; it's \$100,000 - \$1,000,000 = net loss; let the 100 starving peasants die.

Note that you may be a liber­tar­ian and op­pose tax­a­tion on other grounds, but most liber­tar­i­ans wouldn’t say you are liter­ally do­ing moral­ity wrong if you think it’s bet­ter to take \$1 each from a mil­lion peo­ple, even if only \$100,000 of it gets used to help the poor.

I could eas­ily be find­ing ways to ra­tio­nal­ize my own faulty in­tu­itions—but I man­aged to change my mind about New­comb’s prob­lem and about the first ex­am­ple given in the above post de­spite pow­er­ful ini­tial in­tu­itions, and I man­aged to work the lat­ter out for my­self. So I think, if I’m ex­pected to change my mind here, I’m jus­tified in hold­ing out for an ex­pla­na­tion or for­mu­la­tion that clicks with me.

• That makes no sense. Just be­cause one thing cost \$1, and an­other thing cost \$1000, does not mean that the first thing hap­pen­ing 1001 times is bet­ter than the sec­ond one hap­pen­ing once.

Prefer­ences log­i­cally pre­cede prices. If they didn’t, no­body would be able to de­cide what they were will­ing to spend on any­thing in the first place. If util­i­tar­i­anism re­quires that you de­cide the value of things based on their prices, then util­i­tar­i­ans are con­formists with­out val­ues of their own, who de­rive all of their value judg­ments from non-util­i­tar­ian mar­ket par­ti­ci­pants who ac­tu­ally have val­ues.

(Be­sides, money that is spent on “over­head” does not mag­i­cally dis­ap­pear from the econ­omy. Some­one is still be­ing paid to do some­thing with that money, who in turn buys things with the money, and so on. And even if the money does dis­ap­pear—say, dol­lar bills are burnt in a fur­nace—it still would not rep­re­sent a loss of pro­duc­tive ca­pac­ity in the econ­omy. Tax­ing money and then com­pletely de­stroy­ing the money (shrink­ing the money sup­ply) is sound mon­e­tary policy, and it oc­curs on a reg­u­lar (cycli­cal) ba­sis. Your whole ar­gu­ment here is a com­plete non-starter.)

• If you as­sign an ep­silon of di­su­til­ity to a dust speck, then 3^^^3 * ep­silon is way more than 1 per­son suffer­ing 50 years of tor­ture.

This doesn’t fol­low. Ep­silon is by defi­ni­tion ar­bi­trary, there­fore I could say that I want it to be 1 /​ 4^^^4 if I want to.

If we ac­cept Eliezer’s propo­si­tion that the di­su­til­ity of a dust speck is > 0, this doesn’t pre­vent us from de­cid­ing that it is < ep­silon when as­sign­ing a finite di­su­til­ity to 50 years of tor­ture.

• For a site pro­mot­ing ra­tio­nal­ity this en­tire thread is amaz­ing for a va­ri­ety of rea­sons (can you tell I’m new here?). The ba­sic ques­tion is ir­ra­tional. The de­ci­sion for one situ­a­tion over an­other is in­fluenced by a large num­ber of in­ter­con­nected util­ities.

A per­son, or an AI, does not come to a de­ci­sion based on a sin­gle util­ity mea­sure. The de­ci­sion pro­cess draws on nu­mer­ous util­ities, many of which we do not yet know. Just a few util­ities are moral­ity, ur­gency, effort, ac­cep­tance, im­pact, area of im­pact and value.

Com­pli­cat­ing all of this is the over­lay of life ex­pe­rience that at­taches a func­tion of mag­nifi­ca­tion to each util­ity im­pact de­ci­sion. There are 7 billion, and grow­ing, unique over­lays in the world. Th­ese over­lays can in­clude unique per­sonal, so­cietal or other util­ities and have dra­matic im­pact on many of the core util­ities as well.

While you can certainly assign some value to each choice, due to the above it will be a unique, subjective value. The breadth of values does cluster around societal and common life-experience utilities, enabling some degree of segmentation. This enables generally acceptable decisions. The separation of the value spaces for many utilities precludes a single, unified decision. For example, a faith utility will have radically different value spaces for Christians and Buddhists. The optimum answer can be very different when the choices include faith-utility considerations.

Also, the cir­cu­lar ex­am­ple of driv­ing around the Bay Area is illog­i­cal from a va­ri­ety of per­spec­tives. The util­ity of each stop is ig­nored. The move­ment of the driver around the cir­cle does not cor­re­late to the premise that al­tru­is­tic ac­tions of an in­di­vi­d­ual are cir­cu­lar.

For dis­cus­sions to have util­ity value rel­a­tive to ra­tio­nal­ity, it seems ap­pro­pri­ate to use more ad­vanced math­e­mat­ics con­cepts. Ex­am­in­ing the va­garies cre­ated when de­ci­sions in­clude com­pet­ing util­ity val­ues or are near edges of util­ity spaces are where we will ex­pand our think­ing.

• For a site pro­mot­ing ra­tio­nal­ity this en­tire thread is amaz­ing for a va­ri­ety of rea­sons (can you tell I’m new here?). The ba­sic ques­tion is ir­ra­tional. The de­ci­sion for one situ­a­tion over an­other is in­fluenced by a large num­ber of in­ter­con­nected util­ities.

So in most forms of util­i­tar­i­anism, there’s still an over­all util­ity func­tion. Hav­ing mul­ti­ple differ­ent func­tions amounts to the same thing as hav­ing a sin­gle func­tion when one needs to figure out how to bal­ance the com­pet­ing in­ter­ests.

• Granted. My point is the func­tion needs to com­pre­hend these fac­tors to come to a more in­formed de­ci­sion. Sim­ply do­ing a com­pare of two val­ues is in­ad­e­quate. Some shad­ing and weight­ing of the val­ues is re­quired, how­ever sub­jec­tive that may be. De­vis­ing a method to as­sess the amount of sub­jec­tivity would be an in­ter­est­ing dis­cus­sion. Con­sid­er­ing the com­po­si­tion of the value is the en­light­en­ing bit.

I also posit that a suite of al­gorithms should be com­pre­hended with some trig­ger func­tion in the over­all al­gorithm. One of our skills is to change modes to suit a given situ­a­tion. How sub-util­ities im­pact the value(s) served up to the over­all util­ity will vary with situ­a­tional in­puts.

The over­all util­ity func­tion needs to work with a col­lec­tion of val­ues and pro­ject each value com­bi­na­tion for­ward in time, and/​or back through his­tory, to de­ter­mine the best se­lec­tion. The na­ture of the com­plex­ity of the pro­cess de­mands us­ing more so­phis­ti­cated means. Hold­ing a dis­cus­sion at the cur­rent level feels to me to be similar to dis­cussing mul­ti­pli­ca­tion when faced with a calcu­lus prob­lem.

• Eliezer—the way ques­tion #1 is phrased, it is ba­si­cally a choice be­tween the fol­low­ing:

1. Be per­ceived as a hero, with cer­tainty.

2. Be per­ceived as a hero with 90% prob­a­bil­ity, and con­tinue not to be no­ticed with 10% prob­a­bil­ity.

This choice will be easy for most peo­ple. The ex­pected 50 ex­tra deaths are a rea­son­able sac­ri­fice for the cer­tainty of be­ing per­ceived as a hero.

The way ques­tion #2 is phrased, it is similarly a choice be­tween the fol­low­ing:

1. Be per­ceived as a villain, with cer­tainty.

2. Not be no­ticed with 90% prob­a­bil­ity, and be per­ceived as a villain with 10% prob­a­bil­ity.

Again, the choice is ob­vi­ous. Choose #2 to avoid be­ing per­ceived as a villain.

If you ar­gue that the above in­ter­pre­ta­tions are then not al­tru­is­tic, I think the “Repug­nant Con­clu­sion” link shows how fu­tile it is to try to make ac­tual “al­tru­is­tic de­ci­sions”.

• I don’t think even ev­ery­one go­ing blind is a good ex­cuse for tor­tur­ing a man for fifty years. How are they go­ing to look him in the eye when he gets out?

That’s cold brother. Real cold....

The idea of an eth­i­cal dis­con­ti­nu­ity be­tween some­thing that can de­stroy a life (50 years of tor­ture, or 1 year) and some­thing that can’t (1 minute of tor­ture, a dust speck) has some in­tu­itive plau­si­bil­ity, but...

Sorry, no. 'Torture' and 'dust speck' are not two different quantities of the same currency. I wouldn't even be confident trying to add up individual minutes of torture to equal one year. Humans do not experience the world like disinterested machines. They don't even experience a logarithmic progression of 'amount of discomfort.' 50 years of torture does things to the mind and body that one year (for 50 people) can never do. One year of torture does things one minute can never do. One minute of torture does things x dust specks in x people's eyes could never do. None of these things registers on the others' scales.

Cash, pos­ses­sions, what­ever, I’m with you and Eliezer. Pure hu­man per­cep­tion is differ­ent, even when you count neu­rons. And no, this isn’t a blind ir­ra­tional re­ac­tion to the key word ‘tor­ture’. This is how hu­man be­ings work.

Some­thing oc­curred to me read­ing through all this ear­lier. Do we put no weight on the fact that if you pol­led the 3^^^3 peo­ple and asked them whether they would all un­dergo one dust speck to save one per­son from 50 years of tor­ture, they’d al­most cer­tainly all say yes? Who would say “no, look how many of us there are! Tor­ture him!” I find this goes a long way to ex­plod­ing the idea of ‘cu­mu­la­tive dis­com­fort’.

• The is­sue with pol­ling 3^^^3 peo­ple is that once they are all aware of the situ­a­tion, it’s no longer purely (3^^^3 dust specks) vs (50yrs tor­ture). It be­comes (3^^^3 dust specks plus 3^^^3 feel­ings of al­tru­is­ti­cally hav­ing saved a life) vs (50yrs tor­ture). The rea­son most of the peo­ple pol­led would ac­cept the dust speck is not be­cause their util­ity of a speck is more than 1/​3^^^3 their util­ity of tor­ture. It’s be­cause their util­ity of (a speck plus feel­ing like a life­saver) is more than their util­ity of (no speck plus feel­ing like a mur­derer).

• Con­sider these two facts about me:

(1) It is NOT CLEAR to me that sav­ing 1 per­son with cer­tainty is morally equiv­a­lent to sav­ing 2 peo­ple when a fair coin lands heads in a one-off deal.

(2) It is CLEAR to me that sav­ing 1000 peo­ple with p=.99 is morally bet­ter than sav­ing 1 per­son with cer­tainty.

Models are sup­posed to hew to the facts. Your model di­verges from the facts of hu­man moral judg­ments, and you re­spond by ex­hort­ing us to live up to your model.

Why should we do that?

• In a world suffi­ciently re­plete with as­piring ra­tio­nal­ists there will be not just one chance to save lives prob­a­bil­is­ti­cally, but (over the cen­turies) many. By the law of large num­bers, we can be con­fi­dent that the out­come of fol­low­ing the ex­pected-value strat­egy con­sis­tently (even if any par­tic­u­lar per­son only makes a choice like this zero or one times in their life) will be that more to­tal lives will be saved.
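The law-of-large-numbers point can be illustrated with a quick simulation of the post's opening dilemma (save 400 with certainty, or 500 at 90%); a sketch, with the trial count and random seed chosen arbitrarily:

```python
import random

random.seed(0)
TRIALS = 100_000  # many independent instances of the dilemma, across a society

certain_total = TRIALS * 400                      # always take the sure 400
gamble_total = sum(500 if random.random() < 0.9 else 0
                   for _ in range(TRIALS))        # 500 lives saved 90% of the time

print(gamble_total / TRIALS)         # close to the expected value of 450
print(gamble_total > certain_total)  # the gamble saves more lives in aggregate
```

Over many such choices, the per-instance average converges to 450 and the expected-value strategy reliably beats the certainty strategy, even though each individual chooser may face the dilemma only once.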

Some peo­ple be­lieve that “be­ing vir­tu­ous” (or such­like) is bet­ter than achiev­ing a bet­ter so­ciety-level out­come. To that view I can­not say it bet­ter than Eliezer: “A hu­man life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feel­ings of com­fort or dis­com­fort with a plan.”

I see a prob­lem with Eliezer’s strat­egy that is psy­cholog­i­cal rather than moral: if 500 peo­ple die, you may be dev­as­tated, es­pe­cially if you find out later that the chance of failure was, say, 50% rather than 10%. Con­se­quen­tial­ism asks us to take this into ac­count. If you are a gen­eral mak­ing bat­tle de­ci­sions, which would weigh on you more? The death of 500 (in your effort to save 100), or aban­don­ing 100 to die at en­emy hands, know­ing you had a roughly 90% chance to save them? Could that ad­versely af­fect fu­ture de­ci­sions? (in spe­cific sce­nar­ios we must also con­sider other things, e.g. in this case whether it’s worth the cost in re­sources—mil­i­tary lead­ers know, or should know, that re­sources can be equated with lives as well...)

Note: I’m pretty con­fi­dent Eliezer wouldn’t ob­ject to you us­ing your moral sense as a tiebreaker if you had the choice be­tween sav­ing one per­son with cer­tainty and two peo­ple with 50% prob­a­bil­ity.

• I agree, you have to just mul­ti­ply.

But your math is an at­tempt to ab­stract hu­man harm to num­bers, and the prob­lem I have (and I sus­pect oth­ers in­tu­itively have) is that the ab­strac­tion is wrong. You’ve failed to un­der­stand the les­sons of the sci­ence of hap­piness: we for­get small painful things quickly. The cost of a speck in the eye, let us imag­ine, is 1 unit of harm. But that’s only true in the mo­ment the speck hits the eye. In the hour the speck hits the eye, be­cause of the hu­man abil­ity to ig­nore or for­get small pains, the ex­tended cost is 0. This is why a googol of specks is bet­ter than tor­ture, be­cause a googol 0s...

The real prob­lem is defin­ing the bound­ary be­tween mo­men­tary (tran­sient) harm and ex­tend­able harm. This is a psy­cholog­i­cal math prob­lem.

• I think this is fundamentally my issue. Even if we disregard all differences between people, making personal qualities like pain tolerance identical, there are still distinct lines: a given amount of pain inherently has direct consequences that any smaller amount of pain cannot possibly have. So no dust-caused car crashes, but the scars of torture count. Meaning that yes, I think there is a point at which any amount of pain with no inherent long-term consequences (a dust speck) is better than a case with inherent long-term consequences, all else equal. Unfortunately, in real life this is not normally the case. All of those millions of stubbed toes, or whatever, alter your behavior in small ways that are likely on average to be negative, leading to non-inherent losses, as when a stubbed toe causes you to miss the elevator, and thus be late, and thus lose a job. Over a large sample, those secondary problems can lead to as much long-term harm across the population as the torture, or more, but all of that is outside the scope of the problem as presented. I think this makes it a poor question for deciding real actions and a difficult problem to discuss clearly.

• Eliezer, as I'm sure you know, not everything can be put on a linear scale. Momentary eye irritation is not the same thing as torture. Momentary eye irritations should be negligible in the moral calculus, even when multiplied by googolplex^^^googolplex. 50 years of torture could break someone's mind and lead to their destruction. You're usually right on the mark, but not this time.

• Would you pay one cent to prevent one googolplex of people from having a momentary eye irritation?

Tor­ture can be put on a money scale as well: many many coun­tries use tor­ture in war, but we don’t spend huge amounts of money pub­li­ciz­ing and sham­ing these peo­ple (which would re­duce the amount of tor­ture in the world).

In or­der to max­i­mize the benefit of spend­ing money, you must weigh sa­cred against un­sa­cred.

• There’s an in­ter­est­ing pa­per on micro­trans­ac­tions and how hu­man ra­tio­nal­ity can’t re­ally han­dle de­ci­sions about val­ues un­der a cer­tain amount. The cog­ni­tive effort of mak­ing a de­ci­sion out­weighs the pos­si­ble benefits of mak­ing the de­ci­sion.

How much time would you spend making a decision about how to spend a penny? You can't make a decision in zero time; it's not physically possible. Rationally you have to round off the penny, and the speck of dust.

• I cer­tainly wouldn’t pay that cent if there was an op­tion of pre­vent­ing 50 years of tor­ture us­ing that cent. There’s noth­ing to say that my util­ity func­tion can’t take val­ues in the sur­re­als.

• Tor­ture vs dust specks, let me see:

What would you choose for the next 50 days:

1. Removing one milliliter of the daily water intake of 100,000 people.

2. Re­mov­ing 10 liters of the daily wa­ter in­take of 1 per­son.

The con­se­quence of choice 2 would be the death of one per­son.

Yud­kowsky would choose 2, I would choose 1.

This is a ques­tion of thresh­old. Below cer­tain thresh­olds things don’t have much effect. So you can­not sim­ply add up.

Another ex­am­ple:

1. Put 1 coin on the head of each of 1,000,000 peo­ple.

2. Put 100,000 coins on the head of one guy.

What do you choose? Can we add up the dis­com­fort caused by the one coin on each of 1,000,000 peo­ple?
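The threshold claim above can be written down directly. A sketch using a made-up, step-shaped harm function for daily water loss; the threshold values and harm magnitudes are invented purely for illustration:

```python
def harm(ml_lost_per_day):
    # Hypothetical threshold model: small deficits are shrugged off entirely;
    # past a threshold, harm jumps discontinuously (hardship, then death).
    if ml_lost_per_day < 100:
        return 0.0          # the body compensates; no lasting effect
    elif ml_lost_per_day < 2_000:
        return 1.0          # real, ongoing hardship
    else:
        return 1_000.0      # lethal over the 50 days

option_1 = 100_000 * harm(1)    # 1 ml/day from each of 100,000 people
option_2 = 1 * harm(10_000)     # 10 liters/day from one person

print(option_1, option_2)  # 0.0 1000.0
```

Under a harm function like this, summing the water removed tells you nothing about the harm done; whether moral disutility itself is allowed to behave this way is exactly what the thread is disputing.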

• Th­ese are sim­ply false com­par­i­sons.

Had Eliezer talked about torturing someone through the use of a googolplex of dust specks, your comparison might have merit, but as is it seems to be deliberately missing the point.

Cer­tainly, speak­ing for some­one else is of­ten in­ap­pro­pri­ate, and in this case is sim­ple straw­man­ning.

• I really don't see how his comparison is wrong. Could you explain in more depth, please?

• The com­par­i­son is in­valid be­cause the tor­ture and dust specks are be­ing com­pared as nega­tively-val­ued ends in them­selves. We’re com­par­ing U(tor­ture one per­son for 50 years) and U(dust speck one per­son) * 3^^^3. But you can’t de­ter­mine whether to take 1 ml of wa­ter per day from 100,000 peo­ple or 10 liters of wa­ter per day from 1 per­son by adding up the to­tal amount of wa­ter in each case, be­cause wa­ter isn’t util­ity.

• Per­haps this is just my mi­s­un­der­stand­ing of util­ity, but I think his point was this: I don’t un­der­stand how adding up util­ity is ob­vi­ously a le­gi­t­i­mate thing to do, just like how you claim that adding up wa­ter de­nial is ob­vi­ously not a le­gi­t­i­mate thing to do. In fact, it seems to me as though the nega­tive util­ity of get­ting a dust speck in the eye is com­pa­rable to the nega­tive util­ity of be­ing de­nied a mil­liliter of wa­ter, while the nega­tive util­ity of be­ing tor­tured for a life­time is more or less equiv­a­lent to the nega­tive util­ity of dy­ing of thirst. I don’t see why it is that the one ad­di­tion is valid while the other isn’t.

If this is just me mi­s­un­der­stand­ing util­ity, could you please point me to some read­ings so that I can bet­ter un­der­stand it?

• I don’t un­der­stand how adding up util­ity is ob­vi­ously a le­gi­t­i­mate thing to do

To start, there’s the Von Neu­mann–Mor­gen­stern the­o­rem, which shows that given some ba­sic and fairly un­con­tro­ver­sial as­sump­tions, any agent with con­sis­tent prefer­ences can have those prefer­ences ex­pressed as a util­ity func­tion. That does not re­quire, of course, that the util­ity func­tion be sim­ple or even hu­manly plau­si­ble, so it is perfectly pos­si­ble for a util­ity func­tion to spec­ify that SPECKS is preferred over TORTURE. But the idea that do­ing an un­de­sir­able thing to n dis­tinct peo­ple should be around n times as bad as do­ing it to one per­son seems plau­si­ble and defen­si­ble, in hu­man terms. There’s some dis­cus­sion of this in The “In­tu­itions” Be­hind “Utili­tar­i­anism”.

(The wa­ter sce­nario isn’t com­pa­rable to tor­ture vs. specks mainly be­cause, com­pared to 3^^^3, 100,000 is ap­prox­i­mately zero. If we changed the wa­ter sce­nario to use 3^^^3 also, and if we as­sume that hav­ing one fewer mil­liliter of wa­ter each day is a nega­tively ter­mi­nally-val­ued thing for at least a tiny frac­tion of those peo­ple, and if we as­sume that the one per­son who might die of de­hy­dra­tion wouldn’t oth­er­wise live for an ex­tremely long time, then it seems that the lat­ter op­tion would in­deed be prefer­able.)

• If you look at the as­sump­tions be­hind VNM, I’m not at all sure that the “tor­ture is worse than any amount of dust specks” crowd would agree that they’re all un­con­tro­ver­sial.

In par­tic­u­lar the ax­ioms that Wikipe­dia la­bels (3) and (3′) are al­most beg­ging the ques­tion.

Imagine a utility function that maps events, not onto R, but onto (R x R) with a lexicographic ordering. This satisfies completeness, transitivity, and independence; it just doesn’t satisfy continuity or the Archimedean property.

But is that the end of the world? Look at continuity: if L is torture plus a dust speck (utility (-1,-1)), M is just torture (utility (-1,0)), and N is just a dust speck ((0,-1)), then must there really be a probability p such that pL + (1-p)N = M? Or would it instead be permissible to say that for p=1, torture plus dust speck is still strictly worse than torture, whereas for any p<1, any tiny probability of reducing the torture is worth a huge probability of adding that dust speck to it?

(ed­ited to fix ty­pos)
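The continuity failure described above can be checked mechanically. A minimal sketch, assuming the (torture, speck) tuple encoding suggested in the comment; the specific probabilities tried are just illustrations:

```python
from fractions import Fraction

# Lexicographic "utility": compare torture counts first; break ties on specks.
# Torture is treated as infinitely more important than dust specks.
def lex_better(a, b):
    """True if outcome a is strictly preferred to outcome b."""
    return a > b  # Python tuples already compare lexicographically

def mix(p, a, b):
    """Expected value of the lottery pA + (1-p)B, coordinate-wise."""
    return tuple(p * x + (1 - p) * y for x, y in zip(a, b))

L = (-1, -1)  # torture plus a dust speck
M = (-1, 0)   # torture alone
N = (0, -1)   # dust speck alone

# Continuity would demand some p with mix(p, L, N) indifferent to M.
# But at p = 1 the mixture is strictly worse than M, and for any p < 1
# it is strictly better, so no such p exists.
assert lex_better(M, mix(Fraction(1), L, N))           # p = 1: worse than M
for p in [Fraction(999, 1000), Fraction(1, 2), Fraction(1, 10**9)]:
    assert lex_better(mix(p, L, N), M)                 # any p < 1: better than M
print("no p satisfies continuity for lexicographic preferences")
```

Exact rational arithmetic (`Fraction`) is used so that no rounding can blur the strict comparisons near p = 1.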

• In par­tic­u­lar, VNM con­nects util­ity with prob­a­bil­ity, so we can use an ar­gu­ment based on prob­a­bil­ity.

One per­son gain­ing N util­ity should be equally good no mat­ter who it is, if util­ity is prop­erly cal­ibrated per­son-to-per­son.

One person gaining N utility should be as good as one randomly selected person out of N people gaining N utility.

Now we an­a­lyze it from each per­son’s per­spec­tive. They each have a 1/​N chance of gain­ing N util­ity. This is 1 unit of ex­pected util­ity, so they find it as good as surely gain­ing one unit of util­ity.

If they’re all in­differ­ent be­tween one per­son gain­ing N and ev­ery­one gain­ing 1, who’s to dis­agree?
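The per-person arithmetic in the step above is simple enough to check exactly; a minimal sketch (the population sizes are arbitrary illustrations):

```python
from fractions import Fraction

# For several population sizes, a 1/N chance of gaining N units of utility
# has the same expected utility as surely gaining 1 unit.
for n in [2, 100, 10**6, 10**100]:
    p_selected = Fraction(1, n)   # my chance of being the chosen person
    expected = p_selected * n     # expected utility from my viewpoint
    assert expected == 1

print("a 1/N chance of N utility is 1 unit of expected utility, for any N")
```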

• One per­son gain­ing N util­ity should be equally good no mat­ter who it is, if util­ity is prop­erly cal­ibrated per­son-to-per­son.

That… just seems kind of crazy. Why would it be equally Good to have Hitler gain a bunch of util­ity as to have me, for ex­am­ple, gain that. Or to have a rich per­son who has ba­si­cally ev­ery­thing they want gain a mod­est amount of util­ity, ver­sus a poor per­son who is close to star­va­tion gain­ing the same. If this lat­ter ex­am­ple isn’t tak­ing into ac­count your cal­ibra­tion per­son to per­son, could you give an ex­am­ple of what could be given to Dick Cheney that would be of equiv­a­lent Good as giv­ing a sand­wich and a job to a very hun­gry home­less per­son?

If they’re all in­differ­ent be­tween one per­son gain­ing N and ev­ery­one gain­ing 1, who’s to dis­agree?

I for one would not pre­fer that, in most cir­cum­stances. This is why I would pre­fer definitely be­ing given the price of a lot­tery ticket to play­ing the lot­tery (even as­sum­ing the lot­tery paid out 100% of its in­take).

1. You can as­sume that peo­ple start equal. A rich per­son already got a lot of util­ity, while the poor per­son already lost some. You can still do the math that de­rives util­i­tar­i­anism in the fi­nal util­ities just fine.

2. Utility =/= Money. Under the VNM model I was using, utility is defined as the thing you are risk-neutral in: N units of utility is the amount such that a 1/N chance of it is worth the same as 1 unit of utility. So my statement is trivially true.

Let’s say, in a cer­tain sce­nario, each per­son i has util­ity u_i. We define U to be the sum of all the u_i, then by defi­ni­tion, each per­son is in­differ­ent be­tween hav­ing u_i and hav­ing a u_i/​U chance of U and a (1-u_i)/​U chance of 0. Since ev­ery­one is in­differ­ent, this sce­nario is as good as the sce­nario in which one per­son, se­lected ac­cord­ing to those prob­a­bil­ities, has U, and ev­ery­one else has 0. The good­ness of such a sce­nario should be a func­tion only of U.

1. Poli­tics is the mind-kil­ler, don’t bring con­tro­ver­sial figures such as Dick Cheney up.

2. The rea­son it is just to harm the un­just is not be­cause their hap­piness is less valuable. It is be­cause harm­ing the un­just causes some to choose jus­tice over in­jus­tice.

• (1-u_i)/​U

That should be (1-u_i/​U).

Also, “_” is mark­down for ital­ics. To dis­play un­der­scores, use “\_”.

• Let’s say, in a cer­tain sce­nario, each per­son i has util­ity ui. We define U to be the sum of all the ui, then by defi­ni­tion, each per­son is in­differ­ent be­tween hav­ing ui and hav­ing a ui/​U chance of U and a (1-u_i)/​U chance of 0.

I am hav­ing a lot of trou­ble com­ing up with a real world ex­am­ple of some­thing work­ing out this way. Could you give one, please?

You can as­sume that peo­ple start equal.

I’m not sure I know what you mean by this. Are you say­ing that we should imag­ine peo­ple are con­ceived with 0 util­ity and then get or lose a bunch based on the cir­cum­stances they’re born into, what their ge­net­ics ended up gift­ing them with, things like that?

In my con­cep­tion of my util­ity func­tion, I place value on in­creas­ing not merely the over­all util­ity, but the most com­mon level of util­ity, and de­creas­ing the de­vi­a­tion in util­ity. That is, I would pre­fer a world with 100 peo­ple each with 10 util­ity to a world with 99 peo­ple with 1 util­ity and 1 per­son with 1000 util­ity, even though the lat­ter has a higher sum of util­ity. Is there some­thing in­her­ently wrong about this?

• I am hav­ing a lot of trou­ble com­ing up with a real world ex­am­ple of some­thing work­ing out this way. Could you give one, please?

One could construct an extremely contrived real-world example rather trivially. An FAI has a plan that will make one person Space Emperor, with the choice of person depending on some sort of complex calculation. It is considering whether doing so would be a good idea or not.

The point is that a moral the­ory must con­sider such odd spe­cial cases. I can re­for­mu­late the ar­gu­ment to use a differ­ent strange sce­nario if you like, but the point isn’t the spe­cific sce­nario—it’s the math­e­mat­i­cal reg­u­lar­ity.

Are you say­ing that we should imag­ine peo­ple are con­ceived with 0 util­ity and then get or lose a bunch based on the cir­cum­stances they’re born into, what their ge­net­ics ended up gift­ing them with, things like that?

My ar­gu­ment is based on a math­e­mat­i­cal in­tu­ition and can take many differ­ent forms. That com­ment came from ask­ing you to ac­cept that giv­ing one per­son N util­ity is as good as giv­ing an­other N util­ity, which may be hard to swal­low.

So what I’m re­ally say­ing is that all you need to ac­cept is that, if we per­mute the util­ities, so that in­stead of me hav­ing 10 and you 5, you have 10 and I 5, things don’t get bet­ter or worse.

Start­ing at 0 is a red her­ring for which I apol­o­gize.

“Greet­ings, hu­mans! I am a su­per­in­tel­li­gence with strange val­ues, who is perfectly hon­est. In five min­utes, I will ran­domly choose one of you and in­crease his/​her util­ity to 1000. The oth­ers, how­ever, will re­ceive a util­ity of 1.”

“So did mine! So am I!”

etc........

“Let’s check the ran­dom num­ber gen­er­a­tor … Bob wins. Sucks for the rest of you.”

The su­per-in­tel­li­gence has just, ap­par­ently, done evil, af­ter mak­ing two de­ci­sions:

The first, ev­ery­one af­fected ap­proved of

The sec­ond, in car­ry­ing out the con­se­quences of a pre-defined ran­dom pro­cess, was un­doubt­edly fair—while those who lost were un­happy, they have no cause for com­plaint.

This is a seem­ing con­tra­dic­tion.

• One could con­struct an ex­tremely con­trived real-world ex­am­ple rather triv­ially.

When I say a real world example, I mean one that has actually already occurred in the real world. I don’t see why I’m obligated to have my moral system function on scales that are physically impossible, or extraordinarily unlikely, such as having an omnipotent deity or alien force me to make a universe-shattering decision, or having to make decisions involving a physically impossible number of persons, like 3^^^^3.

I make no claims to perfec­tion about my moral sys­tem. Maybe there is a moral sys­tem that would work perfectly in all cir­cum­stances, but I cer­tainly don’t know it. But it seems to me that a re­cur­ring theme on Less Wrong is that only a fool would have cer­tainty 1 about any­thing, and this situ­a­tion seems analo­gous. It seems to me to be an act of proper hu­mil­ity to say “I can’t rea­son well with num­bers like 3^^^^3 and in all like­li­hood I will never have to, so I will make do with my de­cent moral sys­tem that seems to not lead me to ter­rible con­se­quences in the real world situ­a­tions it’s used in”.

So what I’m re­ally say­ing is that all you need to ac­cept is that, if we per­mute the util­ities, so that in­stead of me hav­ing 10 and you 5, you have 10 and I 5, things don’t get bet­ter or worse.

This is a very differ­ent claim from what I thought you were first claiming. Let’s ex­am­ine a few differ­ent situ­a­tions. I’m go­ing to say what my judg­ment of them is, and I’m go­ing to guess what yours is: please let me know if I’m cor­rect. For all of these I am as­sum­ing that you and I are equally “moral”, that is, we are both ra­tio­nal hu­man­ists who will try to help each other and ev­ery­one else.

I have 10 and you have 5, and then I have 11 and you have 4. I say this was a bad thing, I’m guess­ing you would say it is neu­tral.

I have 10 and you have 5, and then I have 9 and you have 6. I would say this is a good thing, I’m guess­ing you would say this is neu­tral.

I have 10 and you have 5, and then I have 5 and you have 10. I would say this is neu­tral, I think you would agree.

10 & 5 is bad, 9 & 6 is bet­ter, 7 & 8 = 8 & 7 is the best if we must use in­te­gers, 6 & 9 = 9 & 6 and 10 & 5 = 5 & 10.

“Greet­ings, hu­mans! I am a su­per­in­tel­li­gence with strange val­ues, who is perfectly hon­est. In five min­utes, I will ran­domly choose one of you and in­crease his/​her util­ity to 1000. The oth­ers, how­ever, will re­ceive a util­ity of 1.”

“My ex­pected util­ity just in­creased from 10 to 10.99, but the mode util­ity just de­creased from 10 to 1, and the range of the util­ity just in­creased from 0 to 999. I am un­happy about this.”
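The numbers in that retort pin down a hypothetical population (an inference, not stated in the thread: 10.99 = (1000 + 99 × 1) / 100, i.e. 100 people each starting at utility 10). A quick sketch under that reading:

```python
from statistics import mode
from math import isclose

# Inferred population: 100 people, each at utility 10, before the offer.
before = [10] * 100
after = [1000] + [1] * 99   # one winner at 1000, everyone else at 1

assert isclose(sum(before) / len(before), 10)
assert isclose(sum(after) / len(after), 10.99)   # expected utility rises
assert mode(before) == 10 and mode(after) == 1   # mode utility falls
assert max(before) - min(before) == 0
assert max(after) - min(after) == 999            # range explodes
print("mean up, mode down, range up")
```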

Thanks for tak­ing the time to talk about all this, it’s very in­ter­est­ing and ed­u­ca­tional. Do you have a recom­men­da­tion for a book to read on Utili­tar­i­anism, to get per­haps a more el­e­men­tary in­tro­duc­tion to it?

• When I say a real world ex­am­ple, I mean one that has ac­tu­ally already oc­curred in the real world. I don’t see why I’m obli­gated to have my moral sys­tem func­tion on scales that are phys­i­cally im­pos­si­ble, or ex­traor­di­nar­ily un­likely-such as hav­ing an om­nipo­tent de­ity or alien force me to make a uni­verse-shat­ter­ing de­ci­sion, or hav­ing to make de­ci­sions in­volv­ing a phys­i­cally im­pos­si­ble num­ber of per­sons, like 3^^^^3.

It should work in more realistic cases; it’s just that the math is unclear. Suppose you are voting for different parties, and you think that your vote will affect two things: one, the inequality of utility, and two, how much that utility is based on predictable sources like inheritance and how much on unpredictable sources like luck. You might find that an increase to both inequality and luck would be a change that almost everyone would prefer, but that your moral system bans. Indeed, if your system does not linearly weight people’s expected utilities, such a change must be possible.

I am us­ing the strange cases, not to show hor­rible con­se­quences, but to show in­con­sis­ten­cies be­tween judge­ments in nor­mal cases.

I have 10 and you have 5, and then I have 11 and you have 4. I say this was a bad thing, I’m guess­ing you would say it is neu­tral.

Utility is highly non­lin­ear in wealth or other non-psy­cho­me­t­ric as­pects of one’s well-be­ing. I agree with ev­ery­thing you say I agree with.

“My ex­pected util­ity just in­creased from 10 to 10.99, but the mode util­ity just de­creased from 10 to 1, and the range of the util­ity just in­creased from 0 to 999. I am un­happy about this.”

Surely these people can distinguish their own personal welfare from the good for humanity as a whole? So each individual person is thinking:

“Well, this benefits me, but it’s bad over­all.”

This surely seems ab­surd.

Note that mode is a bad measure if the distribution of utility is bimodal (if, for example, women are oppressed), and range attaches enormous significance to the single best-off and worst-off individuals compared with everyone else. It is, however, possible to come up with good measures of inequality.

Thanks for tak­ing the time to talk about all this, it’s very in­ter­est­ing and ed­u­ca­tional. Do you have a recom­men­da­tion for a book to read on Utili­tar­i­anism, to get per­haps a more el­e­men­tary in­tro­duc­tion to it?

No prob­lem. Sadly, I am an au­to­di­dact about util­i­tar­i­anism. In par­tic­u­lar, I came up with this ar­gu­ment on my own. I can­not recom­mend any par­tic­u­lar source—I sug­gest you ask some­one else. Do the Wiki and the Se­quences say any­thing about it?

• Note that mode is a bad mea­sure if the dis­tri­bu­tion of util­ity is bi­modal, if, for ex­am­ple, women are op­pressed, and range at­taches enor­mous sig­nifi­cance to the best-off and worst-off in­di­vi­d­u­als com­pared with the best and the worst. It is, how­ever, pos­si­ble to come up with good mea­sures of in­equal­ity.

Yeah, I just don’t re­ally know enough about prob­a­bil­ity and statis­tics to pick a good term. You do see what I’m driv­ing at, though, right? I don’t see why it should be for­bid­den to take into ac­count the dis­tri­bu­tion of util­ity, and pre­fer a more equal one.

One of my main out­side-of-school pro­jects this semester is to teach my­self prob­a­bil­ity. I’ve got In­tro to Prob­a­bil­ity by Grin­stead and Snell sit­ting next to me at the mo­ment.

Surely these peo­ple can dis­t­in­guish there own per­sonal welfare from the good for hu­man­ity as a whole? So each in­di­vi­d­ual per­son is think­ing:

“Well, this benefits me, but it’s bad over­all.”

This surely seems ab­surd.

But it doesn’t benefit the vast ma­jor­ity of them, and by my stan­dards it doesn’t benefit hu­man­ity as a whole. So each in­di­vi­d­ual per­son is think­ing “this may benefit me, but it’s much more likely to harm me. Fur­ther­more, I know what the out­come will be for the whole of hu­man­ity: in­creased in­equal­ity and de­creased most-com­mon-util­ity. There­fore, while it may help me, it prob­a­bly won’t, and it will definitely harm hu­man­ity, and so I op­pose it.”

Do the Wiki and the Se­quences say any­thing about it?

• I do see what you’re driv­ing at. I, how­ever, think that the right way to in­cor­po­rate egal­i­tar­i­anism into our de­ci­sion-mak­ing is through a risk-averse util­ity func­tion.

But it doesn’t benefit the vast ma­jor­ity of them, and by my stan­dards it doesn’t benefit hu­man­ity as a whole. So each in­di­vi­d­ual per­son is think­ing “this may benefit me, but it’s much more likely to harm me.

You are deny­ing peo­ple the abil­ity to calcu­late ex­pected util­ity, which VNM says they must use in mak­ing de­ci­sions!

• You are deny­ing peo­ple the abil­ity to calcu­late ex­pected util­ity, which VNM says they must use in mak­ing de­ci­sions!

Could you go more into what ex­actly risk-averse means? I am un­der the im­pres­sion it means that they are un­will­ing to take cer­tain bets, even though the bet in­creases their ex­pected util­ity, if the odds are low enough that they will not gain the ex­pected util­ity, which is more or less what I was try­ing to say there. Again, the rea­son I would not play even a fair lot­tery.

Okay. I’ll try to re­spond to cer­tain posts on the sub­ject and see what peo­ple recom­mend. Is there a place here to just ask for recom­mended read­ing on var­i­ous sub­jects? It seems like it would prob­a­bly be waste­ful and in­effec­tive to make a new post ask­ing for that ad­vice.

• Could you go more into what ex­actly risk-averse means? I am un­der the im­pres­sion it means that they are un­will­ing to take cer­tain bets, even though the bet in­creases their ex­pected util­ity, if the odds are low enough that they will not gain the ex­pected util­ity, which is more or less what I was try­ing to say there. Again, the rea­son I would not play even a fair lot­tery.

Risk-averse means that your utility function is not linear in wealth. A simple utility function that is often used is utility = log10(wealth). So having $1,000 would be a utility of 3, $10,000 a utility of 4, $100,000 a utility of 5, and so on. In this case one would be indifferent between a lottery with a 50% chance of $1,000 and a 50% chance of $100,000, on the one hand, and a 100% chance of $10,000 on the other.

This creates behavior which is quite risk-averse. If you have $100,000, a one-in-a-million chance of $10,000,000 would be worth about 50 cents. The expected profit is $10, but the expected utility gain is 0.000002. A lottery which is fair in money would charge $10, while one that is fair in utility would charge $0.50. This particular agent would play the second but not the first.
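The figures in the two paragraphs above can be reproduced directly. A sketch assuming utility = log10(wealth), the base implied by the comment’s numbers, and assuming the prize is added to existing wealth (the comment doesn’t specify):

```python
from math import log10, isclose

def utility(wealth):
    # The utility function from the comment: u = log10(wealth).
    return log10(wealth)

# Indifference claim: 50/50 over $1,000 / $100,000 vs. a sure $10,000.
eu_lottery = 0.5 * utility(1_000) + 0.5 * utility(100_000)
assert isclose(eu_lottery, utility(10_000))   # both are 4 units of utility

# Wealth $100,000; a one-in-a-million ticket paying $10,000,000.
w, p, prize = 100_000, 1e-6, 10_000_000

expected_profit = p * prize                        # $10: the fair-in-money price
eu_gain = p * (utility(w + prize) - utility(w))    # about 0.000002 utility
fair_in_utility = 10 ** (utility(w) + eu_gain) - w # certainty equivalent, ~50 cents

print(f"fair in money: ${expected_profit:.2f}, "
      f"fair in utility: ${fair_in_utility:.2f}")
```

The certainty equivalent comes out near $0.46 under the added-to-wealth assumption, matching the comment’s “about 50 cents.”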

The Von Neu­mann-Mor­gen­stern the­o­rem says that, even if an agent does not max­i­mize ex­pected profit, it must max­i­mize ex­pected util­ity for some util­ity func­tion, as long as it satis­fies cer­tain ba­sic ra­tio­nal­ity con­straints.

Okay. I’ll try to re­spond to cer­tain posts on the sub­ject and see what peo­ple recom­mend. Is there a place here to just ask for recom­mended read­ing on var­i­ous sub­jects? It seems like it would prob­a­bly be waste­ful and in­effec­tive to make a new post ask­ing for that ad­vice.

Post­ing in that thread where peo­ple are pro­vid­ing text­book recom­men­da­tions with a re­quest for that spe­cific recom­men­da­tion might make sense. I know of nowhere else to check.

• Thanks for the ex­pla­na­tion of risk averse­ness.

Post­ing in that thread where peo­ple are pro­vid­ing text­book recom­men­da­tions with a re­quest for that spe­cific recom­men­da­tion might make sense. I know of nowhere else to check.

I just checked the front page after posting that reply and did just that.

• Here is an ear­lier com­ment where I said es­sen­tially the same thing that Will_Sawin just said on this thread. Maybe it will help to have the same thing said twice in differ­ent words.

• Agree—I was kind of think­ing it as fric­tion. Say you have 1000 boxes in a ware­house, all pre­cisely where they need to be. Be­ing close to their cur­rent po­si­tions is bet­ter than not. Is it bet­ter to A) ap­ply 100 N of force over 1 sec­ond to 1 box, or B) 1 N of force over 1 sec­ond to all 1000 boxes? Well if they’re fric­tion­less and all on a level sur­face, do op­tion A be­cause it’s eas­ier to fix, but that’s not how the world is. Say that 1 N against the boxes isn’t even enough to defeat the static fric­tion: that means in op­tion B, none of the boxes will even move.

Back to the choice be­tween A) hav­ing a googol­plex of peo­ple have a speck of dust in their eye vs B) one per­son be­ing tor­tured for 50 years: in op­tion A, you have a googol­plex of peo­ple who lead pro­duc­tive lives who don’t even re­mem­ber that any­thing out of the or­di­nary hap­pened to them sud­denly (as­sum­ing one sin­gle dust speck doesn’t even pass the mem­o­rable thresh­old), and in op­tion B, you have a googol­plex − 1 of peo­ple lead­ing pro­duc­tive lives who don’t re­mem­ber any­thing out of the or­di­nary hap­pen­ing, and one per­son be­ing tor­tured and never ac­com­plish­ing any­thing.

• Roland, I’ll take that bet.

The idea of an ethical discontinuity between something that can destroy a life (50 years of torture, or 1 year) and something that can’t (1 minute of torture, a dust speck) has some intuitive plausibility, but ultimately I don’t buy it. It very much seems like death must be in the same ‘regime’ as torture, but also that death is in the same regime as trivial harms, because people risk death for trivial benefit all the time—I imagine anyone here would drive across town for $100 or $500 or $1,000, even though it’s slightly more dangerous than staying at home. The life-destroying aspect means that the physical pain is only part (probably the smaller part) of the harm of prolonged torture, and that the badness of torture rises greater than linearly with duration, but doesn’t necessarily make it incommensurable.

• Eliezer’s point would have been valid, had he cho­sen al­most any­thing other than mo­men­tary eye ir­ri­ta­tion. Even the mo­men­tary eye-ir­ri­ta­tion ex­am­ple would work if the eye ir­ri­ta­tion would lead to se­ri­ous harm (e.g. eye in­flam­ma­tion and blind­ness) in a small pro­por­tion of those af­flicted with the speck of dust. If the pre­dicted out­come was mil­lions of peo­ple go­ing blind (and then you have to con­sider the re­sult­ing costs to so­ciety), then Eliezer is ab­solutely right: shut-up and do the math.

• Imagine that you had the choice, but once you’ve made that choice it will be applied the same way whenever someone is about to be tortured: magic intervenes, saves that one person, and a googolplex other people get a speck in their eye.

It feels like it’s not a big deal if it happens once or twice, but imagine that across all the universes where it applies it ended up triggering 3,153,600,000 times, not even half the population of our world.

Suddenly a googolplex of people are suffering constantly and half blinded most of the time.

It feels small when it happens once, but the same has to apply when it happens again and again.

• One can eas­ily make an ar­gu­ment like the tor­ture vs. dust specks ar­gu­ment to show that the Repug­nant Con­clu­sion is not only not re­pug­nant, but cer­tainly true.

More in­tu­itively, if it weren’t true, we could find some pop­u­la­tion of 10,000 per­sons at some high stan­dard of liv­ing, such that it would be morally praise­wor­thy to save their lives at the cost of a googol­plex galax­ies filled with in­tel­li­gent be­ings. Most peo­ple would im­me­di­ately say that this is false, and so the Repug­nant Con­clu­sion is true.

• In­ter­est­ingly enough, I don’t find the Repug­nant Con­clu­sion all that re­pug­nant. Is there any­one else here who shares this in­tu­ition?

• Note here that the differ­ence is be­tween the deaths of cur­rently-liv­ing peo­ple, and pre­vent­ing the births of po­ten­tial peo­ple. In he­do­nic util­i­tar­ian terms it’s the same, but you can have other util­i­tar­ian schemes (ex. choice util­i­tar­i­anism as I com­mented above) where death ei­ther has an in­her­ent nega­tive value, or vi­o­lates the per­son’s prefer­ences against dy­ing.

BTW note that even if you draw no dis­tinc­tion, your thought ex­per­i­ment doesn’t nec­es­sar­ily prove the Repug­nant Con­clu­sion. The third op­tion is to say that be­cause the Repug­nant Con­clu­sion is false, it must be that the au­to­matic re­sponse to your thought ex­per­i­ment is in­cor­rect, i.e. that it’s OK to wipe out a googol­plex galax­ies full of peo­ple with lives barely worth liv­ing to save 10,000 peo­ple. Although I feel like most peo­ple, if they re­jected the kil­ling/​pre­vent­ing birth dis­tinc­tion, would go with the Repug­nant Con­clu­sion over that.

• Ben, you are right. Two people with dusty eyes is worse than one. But it isn’t twice as bad. It’s not even nearly twice as bad. On the other hand I would say that two people being tortured is almost twice as bad as one, but not quite. I’m sure I can’t write down a formula for my utility function in terms of number of deaths, or dusty eyes, or tortures, but I know one thing: it is not linear. There’s nothing inherently irrational about choosing a nonlinear utility function. So I will continue to prefer any number of dusty eyes to even one torture. I would also prefer a very large number of 1-day tortures to a single 50-year one (far, far more than 365 * 50). Am I being irrational? How?

• OK, my fi­nal re­sponse on the sub­ject, which has had me un­able to think about any­thing else all day. Thanks to all in­volved for helping me get my thoughts in or­der on this topic, and sorry for hi­jack­ing.

there­fore bury­ing the whole group in dust

You’ve for­got­ten the rules of the game. There’s no ‘bury­ing ev­ery­one in dust.’ You ei­ther have a speck of dust in your eye and blink it away, or you don’t. And that’s for ev­ery in­di­vi­d­ual in the group. Play­ing with the num­bers doesn’t change the sce­nario much ei­ther.

My #1 com­plaint is that no-one seems both­ered by things like this:

So we then dou­ble the num­ber in set A while halv­ing their dis­com­fort.

Halv­ing their dis­com­fort? Care to go into some more depth on that? Would that be half as many neu­rons firing ‘pain!’? Rods in­serted half as deep? Thumbs only halfway screwed?

And this:

We can keep do­ing this, grad­u­ally—very grad­u­ally—diminish­ing the de­gree of dis­com­fort, and mul­ti­ply­ing by a fac­tor of a googol each time, un­til...

As far as I know, there is no rea­son why I should agree that you can do any­thing of the sort. You might be able to di­vide tor­ture by N and get dust mote, and you might not. But you cer­tainly can’t take it for granted then tell me I’m ir­ra­tional.

• Whilst your analysis of life-saving choices seems fairly uncontentious, I’m not entirely convinced that the arithmetic of different types of suffering adds together the way you assume. It seems at least plausible to me that where dust motes are individual points, torture is a section of a continuous line, and thus you can count the points, or you can measure the lengths of different lines, but no number of the former will add up to the latter.

• A dust speck takes a finite time, not an in­stant. Un­less I’m mi­s­un­der­stand­ing you, this makes them lines, not points.

• You’re mi­s­un­der­stand­ing. It has noth­ing to do with time—it’s not a time line. It means the dust motes are in­finites­i­mal, while the tor­ture is finite. A finite sum of in­finites­i­mals is always in­finites­i­mal.

Not that you really need to use a math analogy here. The point is just that there is a qualitative difference between specks of dust and torture. They’re incommensurable. You cannot divide torture by specks of dust, because neither one is a number to start with.

• the dust motes are infinitesimal

This is an in­ter­est­ing claim. Either it im­plies that the hu­man brain is ca­pa­ble of de­tect­ing in­finites­i­mal differ­ences in util­ity, or else it im­plies that you should have no prefer­ence be­tween hav­ing a dust speck in your eye and not hav­ing one in your eye.

• There is a perfectly good way of treating this as numbers: transfinite division is a thing. With X people experiencing infinitesimal discomfort and Y people experiencing finite discomfort, if X and Y are finite then torture is always worse. With X transfinite, dust specks could be worse. But in reverse, if you insist that the impacts are reals, i.e. finite, then finite multiples overtake one another: for any r, y in R with r > 0, there is a z such that rz > y.
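The Archimedean point at the end can be illustrated: if both harms are nonzero finite reals, repeated doubling always overtakes. A tiny sketch with made-up magnitudes (the specific values are not from the thread):

```python
def overtaking_multiple(r, y):
    """Smallest power-of-two multiple z with r * z > y. Terminates for any
    real r > 0 and y, by the Archimedean property of the reals."""
    assert r > 0
    z = 1
    while r * z <= y:
        z *= 2
    return z

# Made-up magnitudes: a dust speck worth 1e-12 and torture worth 1e6
# on a single real-valued scale of disutility.
z = overtaking_multiple(1e-12, 1e6)
assert 1e-12 * z > 1e6   # some finite number of specks outweighs torture
print(f"{z} specks overtake the torture")
```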

• I think the dust motes vs. tor­ture makes sense if you imag­ine a per­son be­ing bom­barded with dust motes for 50 years. I could eas­ily imag­ine a con­tin­u­ous stream of dust motes be­ing as bad as tor­ture (al­though pos­si­bly the lack of vari­a­tion would make it far less effec­tive than what a skil­led tor­turer could do).

Based on that, Eliezer’s belief is just that the same number of dust motes spread out among many people is just as bad as one person getting hit by all of them. Which I will admit is a bit harder to justify. One possible way to make the argument is to think in terms of rule utilitarianism, and imagine a world where a huge number of people got the choice, then compare one where they all choose the torture vs. one where they all choose the dust motes: the former outcome would clearly be better. I’m pretty sure there are cases where this could be important in government policy.

• A googol­plex is ten to the googolth power. That’s a googol/​100 fac­tors of a googol. So we can keep do­ing this, grad­u­ally—very grad­u­ally—diminish­ing the de­gree of dis­com­fort, and mul­ti­ply­ing by a fac­tor of a googol each time, un­til we choose be­tween a googol­plex peo­ple get­ting a dust speck in their eye, and a googol­plex/​googol peo­ple get­ting two dust specks in their eye.

Maybe the strange no­ta­tion has me con­fused, but I don’t see the con­tra­dic­tion here. Con­sider sneezes. One hu­man sneez­ing N times in a row, where N>2, seems at least su­per-ex­po­nen­tially worse than N hu­mans each sneez­ing once (as­sum­ing that no no­tice­able con­se­quences for any of this last be­yond the day). In fact, if we all sneeze si­mul­ta­neously that would be pretty cool.

This next part doesn’t di­rectly ad­dress the origi­nal ques­tion. But if 3^^^3 hu­mans know that by get­ting a dust speck in their eye they helped save some­one from tor­ture, the vast ma­jor­ity would likely feel happy about this and we wind up with a moun­tain of in­creased util­ity from Dust Specks rel­a­tive to No Pain. Whereas an av­er­age-hu­man tor­ture vic­tim who learns that the tor­ture served to pre­vent dust specks might try to kill you bare-handed.

• They would all die of dust specks due to 3^^^3! Or some­thing.

• Eliezer—de­pends again on whether we’re ag­gre­gat­ing across in­di­vi­d­u­als or within one in­di­vi­d­ual. From a util­i­tar­ian per­spec­tive (see The Post That Is To Come for a non-util­i­tar­ian take), that’s my big ob­jec­tion to the specks thing. Slap­ping each of 100 peo­ple once each is not the same as slap­ping one per­son 100 times. The first is a se­ries of slaps. The sec­ond is a beat­ing.

Hon­estly, I’m not sure if I’d have given the same an­swer to all of those ques­tions w/​o hav­ing heard of the dust specks dilemma. I feel like that world is a lit­tle too weird—the thing that mo­ti­vates me to think about those ques­tions is the dust specks dilemma. They’re not the sort of things prac­ti­cal rea­son or­di­nar­ily has to worry about, or that we can or­di­nar­ily ex­pect to have well-de­vel­oped in­tu­itions about!

• ul­ti­mately hav­ing no one drive at all would save the most lives

And ul­ti­mately no dust specks and no tor­ture and lol­lipops all round would be great for ev­ery­one. Stick to the deal as pre­sented. You have a choice to make. Speed is quan­tifi­able. Death is very quan­tifi­able. Pain—even phys­i­cal pain—goes in the same cat­e­gory as love, sad­ness, con­fu­sion. They are ab­stract nouns be­cause you can­not hold, mea­sure or count them. Does N los­ing lot­tery tick­ets spread equally over N peo­ple equal one dead rel­a­tive’s worth of grief?

Re­con­sider my poll sce­nario: Wouldn’t the opinions of 3^^^3 peo­ple, all will­ing to bear the brunt of a dust speck for you, sway your judg­ment one lit­tle bit? Are you that cer­tain of your ra­tio­nal­ity? You are about to sub­mit your­self to 50 years of tor­ture, and you have 3^^^3 peo­ple scream­ing at you ‘don’t bother, it’s okay, no sin­gle per­son has a prob­lem with just blink­ing once, even those who would opt for tor­ture in your place!’ What do you re­ply? ‘Stop be­ing so damned ir­ra­tional! Just in­sert the rods!’

• How come these examples and subsequent narratives never mention the value of floors and diminishing returns? Is every life valued the same? If there was a monster or disease that would kill everyone in the world, there is a floor involved. Choice 1 of saving 400 lives ensures that humanity continues (assuming 400 people are enough to re-populate the world), while a 90% chance of saving 500 leaves a 10% chance that humanity on earth ends. Would you agree that floors are important factors that do change the value of an optimal outcome when they are one-time events? In other words, the marginal utility of a life is diminishing in this example.

• Great New Theorem in color perception: adding together 10 people's perceptions of light pink is equivalent to one person's perception of dark red. This is demonstrable, as there is a continuous scale between pink and red.

• Mr. Yud­kowsky, I’m not sure the du­ra­tion/​in­ten­sity of the tor­ture is the only bad thing rele­vant here. A friend of mine pointed out that a prob­lem with 50 years of tor­ture is that it per­ma­nently de­stroys some­one’s life. (I think it was in one of the “fake al­tru­ism” fam­ily of posts that you pointed out that be­lief of util­ity != util­ity of be­lief.) So the util­ity curve would be pretty flat for the first cou­ple thou­sand dust specks, be­gin­ning to slope down in pro­por­tion to the pain through a few min­utes of tor­ture. After that, it would quickly be­come steeper as the tor­ture be­gan to ma­te­ri­ally al­ter the per­son tor­tured. Another fac­tor to con­sider is the differ­ence be­tween pain dur­ing which you can do other things, and pain dur­ing which you can’t. So the 50-year-tor­turee’s (or even a 1-minute tor­turee’s) life is effec­tively short­ened in a way that even a 1,000,000-dust-speck per­son’s life is not. So I’m not sure peo­ple aren’t im­plic­itly in­clud­ing those fac­tors some­times, when they get mad about tor­ture. I’d rather five years of chronic back pain than five min­utes of per­ma­nently soul-crush­ing tor­ture.

You might ar­gue that it’s still ir­ra­tional, but it’s not as ob­vi­ous as you make it out to be.

• It seems a lot of people are willing to write off minimal discomfort and approximate it to 0 discomfort. I don't think that's fair at all.

If we are talking in terms of this 'discomfort', let's start out with two sets of K people out of a population of X >>> K people, set A and set B, with the same 'discomfort' applied to each member of each set. One set must bear the discomfort; which set should we pick?

Clearly at the start, both are defined to be the same. So we then dou­ble the num­ber in set A while halv­ing their dis­com­fort.

One way to define an activity A as 'half the discomfort' of B is to ask the average person how long they will endure activity A for, say, $100, and the same for B; if they are willing to endure A twice as long as B, let's call that half. There is no such thing as infinite discomfort, because we are dealing with people here: double the number of people and you double the discomfort.

Torturing two people for 50 years is twice as bad as torturing one person for 50 years. How do we work that out? Well, we assign some finite amount of discomfort to "torturing for 50 years".

Eventually, after many repetitions, we hit on some discomfort which someone has 'suddenly' and arbitrarily declared to be ~0 discomfort, even though it is really some small discomfort. And since 0*K = 0, we can set the number of people in set A to be (X−K) (a bit unfair to have the people in set B be in set A too, I think; give them a break), yet it will still look better to choose set A over set B: the sum of the discomforts comes out less in A than in B.

Let's say that the discomfort of A is now a speck of dust, and the discomfort of B is 50 years of torture. Let's now crank up the discomfort of A until we hit the precise point at which you just about start to care about your discomfort (just before it changes from ~0 to some number). I reckon a stubbed toe would be a good point, though I bet even more discomfort would be the true discontinuity point. K is now 1 person, and X is the entire human population of the planet. This is fine because it's ~0 * 6.6 billion = 0; yet you take a stumble after you stub a toe, you lose 0.1 minutes of your life in a bad way, and it's not nice: if it were nice, the discomfort would be negative!

But we are all happy, right? Everyone stubs their toe, and a man (or woman) is saved from 50 years of torture. However, that comes to about 1256 years of stubbed toes: 1256 years of lost living, time spent wincing at your sore toe rather than looking at the sky and so on. Is that still acceptable? Double the people: 2512 years. Still fine? Keep on going, because to you (and people in general, if you're right), the discomfort is ~0.

Keep on doubling that number and watch those wasted, pain-filled years double and double. If you suddenly say 'OK, that's enough, a trillion stubbed toes is worse than 50 years of torture!' then we simply double the number of people tortured, and half the population stubs their toes for one guy, half for the other. Imagine the hundreds of thousands of years humanity has wasted on stubbed toes; if you don't see that as bad, you should wonder if you're biased by scope insensitivity.
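The stubbed-toe totals above are easy to verify; a quick sketch using the comment's assumed figures (6.6 billion people, 0.1 minutes lost per stubbed toe):

```python
# Total living time lost if everyone on Earth stubs a toe once,
# using the comment's assumed figures.
people = 6.6e9            # assumed world population
minutes_lost_each = 0.1   # assumed time lost per stubbed toe

total_minutes = people * minutes_lost_each
total_years = total_minutes / (60 * 24 * 365.25)
print(round(total_years))  # ~1255 years, matching the comment's ~1256
```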

• To get back to the ‘hu­man life’ ex­am­ples EY quotes. Imag­ine in­stead the first sce­nario pair as be­ing the last life­boat on the Ti­tanic. You can launch it safely with 40 peo­ple on board, or load in an­other 10 peo­ple, who would oth­er­wise die a cer­tain, wet, and icy death, and cre­ate a 1 in 10 chance that it will sink be­fore the Carpathia ar­rives, kil­ling all. I find that a strangely more con­vinc­ing case for op­tion 2. The sce­nar­ios as pre­sented com­bine emo­tion­ally salient and ab­stract el­e­ments, with the re­sult that the emo­tion­ally salient part will tend to be fore­ground, and the ‘% prob­a­bil­ities’ as back­ground. After all no-one ever saw any­one who was 10% dead (jokes apart).

1. floor(3^(3^(3^(3 + sin(3^^^3))))) peo­ple are tor­tured for a day.

2. floor(3^(3^(3^(3 + cos(3^^^3))))) peo­ple are tor­tured for a day.

Choose.

Well, why are cer­tain no­ta­tions for large in­te­gers to be taken se­ri­ously but not oth­ers? Shut up and do trig!

(Mainly though I want to claim dibs to the name “a googol­plex si­mul­ta­neous sneezes”)
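As a side note, the trigonometric version cannot literally be evaluated: a number anywhere near 3^^^3 cannot even be converted to a floating-point value. A small Python illustration, using the comparatively tiny 10**400 as a stand-in:

```python
import math

# math.sin requires converting its argument to a float, and even
# 10**400 (vastly smaller than 3^^^3) overflows that conversion.
try:
    result = math.sin(10 ** 400)
except OverflowError:
    result = "overflow"
print(result)  # "overflow"
```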

• 24 Dec 2012 20:29 UTC

I choose 2. Justification: 3^^^3 is an odd multiple of 3, which is pretty close to being an odd multiple of π. Sin(π) = 0 while cos(π) = −1, so cos(3^^^3) is smaller than sin(3^^^3), which necessarily leads to a smaller overall number in option 2 (by a significant amount).

• I think the ar­gu­ment is mis­guided. Why? The choice is not only hy­po­thet­i­cal but im­pos­si­ble. There is not the re­motest pos­si­bil­ity of a googol­plex per­sons even ex­ist­ing.
So I’ll tone it down to a more re­al­is­tic “equa­tion”, then I’ll ar­gue that it’s not an equa­tion af­ter all.
Then I’ll ad­mit that I’m lost, but so are you… =)
Let's assume 1e7 people experiencing pain of a certain intensity for one second vs. one person experiencing equal pain for 1e7 seconds (approx. 116 days).
Let's assume that every person in question has an expectancy of, say, 63 years of painless life. Then my situation is equivalent to either extending the painless life expectancy of 1e7 people from 63y−1s to 63y, or extending it for one person from about 62.7y to 63y.
Ac­cord­ing to the law of diminish­ing re­turns, the former is definitely much less valuable than the lat­ter.
But how much so? How to quan­tify this?
I have no idea, but I claim that nei­ther do you… =)

re­gards, frank

p.s.
I have a hunch that you couldn’t fit enough peo­ple with specks in their eyes into the uni­verse to make up for one 50-year-tor­ture.

• Eliezer’s ques­tion for Paul is not par­tic­u­larly sub­tle, so I pre­sume he won’t mind if I give away where it is lead­ing. If Paul says yes, there is some num­ber of dust specks which add up to a toe stub­bing, then Eliezer can ask if there is some num­ber of toe stub­bings that add up to a nip­ple pierc­ing. If he says yes to this, he will ul­ti­mately have to ad­mit that there is some num­ber of dust specks which add up to 50 years of tor­ture.

Rather than actually going down this road, however, perhaps it would be as well if those who wish to say that the dust specks are always preferable to the torture considered the following facts:

1) Some peo­ple have a very good imag­i­na­tion. I could per­son­ally think of at least 100 gra­da­tions be­tween a dust speck and a toe stub­bing, 100 more be­tween the toe stub­bing and the nip­ple pierc­ing, and as many as you like be­tween the nip­ple pierc­ing and the 50 years of tor­ture.

2) Arguing about where to say no, the lesser pain can never add up to the slightly greater pain, would look a lot like creationists arguing about which transitional fossils are merely ape-like humans, and which are merely human-like apes. There is a point in the transitional fossils where the fossil is so intermediate that 50% of the creationists say that it is human, and 50% that it is an ape. Likewise, there will be a point where 50% of the Speckists say that dust specks can add up to this intermediate pain, but the intermediate pain can't add up to torture, and the other 50% will say that the intermediate pain can add up to torture, but the specks can't add up to the intermediate pain. Do you really want to go down this path?

3) Is your intuition about the specks being preferable to the torture really stronger than the intuition you violate by positing such an absolute division? Suppose we go down the path mentioned above, and at some point you say that specks can add up to pain X, but not to pain X+.00001 (a representation of the minute degree of greater pain in the next step, if we choose a fine enough division). Do you really want to say that you prefer that a trillion persons (or a googol, or a googolplex, etc.) suffer pain X than that one person suffer pain X+.00001?

While writ­ing this, Paul just an­swered no, the specks never add up to a toe stub. This ac­tu­ally sug­gests that he rounds down the speck to noth­ing; you don’t even no­tice it. Re­mem­ber how­ever that origi­nally Eliezer posited that you feel the ir­ri­ta­tion for a frac­tion of a sec­ond. So there is some pain there. How­ever, Paul’s an­swer to this ques­tion is sim­ply a step down the path laid out above. I would like to see his an­swer to the above. Re­mem­ber the (min­i­mally) 100 gra­da­tions be­tween the dust speck and the toe stub.
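The transitivity argument here can be made concrete with a toy model. Assuming (purely for illustration) 100 gradations between speck and torture, and an assumed exchange rate of 1000 instances of each lesser pain per instance of the next pain up, chaining the steps multiplies the rates:

```python
# Toy model of the gradation chain: if k copies of pain level i outweigh
# one instance of pain level i+1, chaining 100 such steps implies some
# finite number of specks outweighs the torture.
k = 1000       # assumed per-step exchange rate (made up for illustration)
steps = 100    # "at least 100 gradations" per the comment

specks_per_torture = k ** steps
print(specks_per_torture == 10 ** 300)  # True: 1000^100 = 10^300
```

Nothing in the argument depends on the particular value of k, only on each step's rate being finite.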

• But consider this: the last exemplars of each species of hominids could reproduce with the first exemplars of the following one.

How­ever, we prob­a­bly wouldn’t be able to re­pro­duce with Homo ha­bilis.

This shows that small differ­ences sum as the dis­tance be­tween the ex­am­ined sub­jects in­creases, un­til we can clearly see that the two sub­jects are not part of the same cat­e­gory any­more.

The pains that are similar in intensity are still comparable. But there is too much difference between dust specks in the eye/stubbed toe and torture to consider them as part of the same category.

• Tcp­kac: won­der­ful in­tu­ition pump.

Gary: in­ter­est­ing—my sense of the nip­ple pierc­ing case is that yes, there’s a num­ber of un­will­ing nip­ple pierc­ings that does add up to 50 years of tor­ture. It might be a num­ber larger than the earth can sup­port, but it ex­ists. I won­der why my in­tu­ition is differ­ent there. Is yours?

• Un­known, there is noth­ing in­her­ently illog­i­cal about the idea of qual­i­ta­tive tran­si­tions. My the­sis is that a speck of dust in the eye is a mean­ingless in­con­ve­nience, that tor­ture is agony, and that any amount of gen­uinely mean­ingless in­con­ve­nience is prefer­able to any amount of agony. If those terms can be given ob­jec­tive mean­ings, then a bound­ary ex­ists and it is a co­her­ent po­si­tion.

I just said gen­uinely mean­ingless. This is be­cause, in the real world, there is go­ing to be some small but nonzero prob­a­bil­ity that the speck of dust causes a car crash, for ex­am­ple, and this will surely be con­sid­er­ably more likely than a pos­i­tive effect. When very large num­bers are in­volved, this will make the specks worse than the tor­ture.

But the origi­nal sce­nario does not ask us to con­sider con­se­quences, so we are be­ing asked to ex­press a prefer­ence on the ba­sis of the in­trin­sic bad­ness of the two op­tions.

• what mat­ters is that there is crossover at some point

But there isn’t nec­es­sar­ily one. That’s the point—Eliezer is pre­sum­ing that dust speck harm is ad­di­tive and that enough of such harms will equal tor­ture. This pre­sump­tion does not seem to have a ba­sis in ra­tio­nal ar­gu­ment.

• I don’t think even ev­ery­one go­ing blind is a good ex­cuse for tor­tur­ing a man for fifty years. How are they go­ing to look him in the eye when he gets out?

The prob­lem is not that I’m afraid of mul­ti­ply­ing prob­a­bil­ity by util­ity, but that Eliezer is not fol­low­ing his own ad­vice—his util­ity func­tion is too sim­ple.

• I'm seconding the worries of people like the anonymous of the first comment and Wendy. I look at the first, and I think "with no marginal utility, it's an expected value of 400 vs. an expected value of 450." I look at the second and think "with no marginal utility, it's an expected value of −100 vs. an expected value of −50." Marginal utility considerations—plausible if these are the last 500 people on Earth—sway the first case much more easily than they do the second case.

• Many were proud of this choice, and in­dig­nant that any­one should choose oth­er­wise: “How dare you con­done tor­ture!” I don’t think that’s a fair char­ac­ter­i­za­tion of that de­bate. A good num­ber of peo­ple us­ing many differ­ent rea­sons thought some­thing along the lines of neg­ligible “harm” * 3^^^3<50 years of tor­ture. That many peo­ple sprain­ing their an­kle or some­thing would be a differ­ent story. Those harms are differ­ent enough that it’s by no means ob­vi­ous which we should pre­fer, and it’s not clear that try­ing to mul­ti­ply is re­ally pro­duc­tive, whereas your ex­am­ples in this en­try are in­deed ob­vi­ous.

• Can some­one please post a link to a pa­per on math­e­mat­ics, philos­o­phy, any­thing, that ex­plains why there’s this huge dis­con­nect be­tween “one-off choices” and “choices over re­peated tri­als”? Lee?

Here’s the way across the philo­soph­i­cal “chasm”: write down the util­ity of the pos­si­ble out­comes of your ac­tion. Use prob­a­bil­ity to find the ex­pected util­ity. Do it for all your ac­tions. No­tice that if you have in­co­her­ent prefer­ences, af­ter a while, you ex­pect your util­ity to be lower than if you do not have in­co­her­ent prefer­ences.

You might have a point if there ex­isted a prefer­ence effec­tor with in­co­her­ent prefer­ences that could only ever effect one prefer­ence. I haven’t thought a lot about that one. But since your in­co­her­ent prefer­ences will show up in lots of de­ci­sions, I don’t care if this spe­cific de­ci­sion will be “re­peated” (note: none are ever re­ally re­peated ex­actly) or not. The point is that you’ll just keep los­ing those pen­nies ev­ery time you make a de­ci­sion.

1. Save 400 lives, with cer­tainty.

2. Save 500 lives, with 90% probability; save no lives, 10% probability.

What are the outcomes? U(400 alive, 100 dead, I chose choice 1) = A, U(500 alive, 0 dead, I chose choice 2) = B, and U(0 alive, 500 dead, I chose choice 2) = C.

Remember that probability is a measure of what we don't know: the plausibility that a given situation is (or will be) the case. If 1.0·A > 0.9·B + 0.1·C, then I prefer choice 1; otherwise, choice 2. Can you tell me what's left out here, or thrown in that shouldn't be? Which part of this do you disagree with?
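The comparison being described is just the expected-value computation from the top of the post; a minimal sketch with raw lives as the stand-in utility (assuming, as the post does, no diminishing marginal utility):

```python
# Expected values for both framings of the same gamble, with lives
# as utility and no diminishing marginal utility (per the post).
p = 0.9  # probability the risky option succeeds

ev_save_certain = 400            # framing 1, option 1: 400 saved for sure
ev_save_gamble = p * 500         # framing 1, option 2: ~450 expected saved

ev_die_certain = -100            # framing 2, option 1: 100 certain deaths
ev_die_gamble = (1 - p) * -500   # framing 2, option 2: ~50 expected deaths

print(ev_save_gamble, ev_die_gamble)  # approximately 450 and -50
```

The two framings describe the same gamble, so the expected values agree: 450 expected survivors is 50 expected deaths out of 500.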

• Eliezer, I am skeptical that sloganeering ("shut up and calculate") will get you across this philosophical chasm: Why do you define the best one-off choice as the choice that would be preferred over repeated trials?

• Health pro­fes­sion­als and con­sumers may change their choices when the same risks and risk re­duc­tions are pre­sented us­ing al­ter­na­tive statis­ti­cal for­mats. Based on the re­sults of 35 stud­ies re­port­ing 83 com­par­i­sons, we found the risk of a health out­come is bet­ter un­der­stood when it is pre­sented as a nat­u­ral fre­quency rather than a per­centage for di­ag­nos­tic and screen­ing tests. For in­ter­ven­tions, and on av­er­age, peo­ple per­ceive risk re­duc­tions to be larger and are more per­suaded to adopt a health in­ter­ven­tion when its effect is pre­sented in rel­a­tive terms (eg us­ing rel­a­tive risk re­duc­tion which rep­re­sents a pro­por­tional re­duc­tion) rather than in ab­solute terms (eg us­ing ab­solute risk re­duc­tion which rep­re­sents a sim­ple differ­ence). We found no differ­ences be­tween health pro­fes­sion­als and con­sumers. The im­pli­ca­tions for clini­cal and pub­lic health prac­tice are limited by the lack of re­search on how these al­ter­na­tive pre­sen­ta­tions af­fect ac­tual be­havi­our. How­ever, there are strong log­i­cal ar­gu­ments for not re­port­ing rel­a­tive val­ues alone, as they do not al­low a fair com­par­i­son of benefits and harms as ab­solute val­ues do.

http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD006776.pub2/abstract

This is awe­some: Spin the risk!

• With all due re­spect, but this post re­minds me of why I find the ex­pec­ta­tion-calcu­la­tion kind of ra­tio­nal­ity dan­ger­ous.

IMO ex­am­ples such as the first, with known prob­a­bil­ities and a straight­for­ward way to calcu­late util­ity, are a to­tal red her­ring.

In more re­al­is­tic ex­am­ples, you’ll have to do many judg­ment calls such as the choice of model, and your best es­ti­mate of the ba­sic prob­a­bil­ities and util­ities, which will ul­ti­mately be grounded on the fuzzy, bi­ased in­tu­itive level.

I think you might re­ply that this isn’t a spe­cific fault with your ap­proach, and that ev­ery­one has to start with some ax­ioms some­where. Granted.

Now the problem, as I see it, is that picking these axioms (including quantitative estimates) once and for all, and then proceeding deductively, will exaggerate any initial choices. (Silly metaphor: a bit like going from one point to another by calculating the angle and then walking in a straight line, instead of making corrections as you go. But, quitting the metaphor, I'm not just talking about divergence over time, but also along the deduction.)

So now you have a conclusion which is still based on the fuzzy and intuitive, but which has an air of mathematical exactness… If the model is complex enough, you can probably reach any desired conclusion by inconspicuous parameter twiddling.

My ar­gu­ment is far from “Omg it’s so cold­hearted to mix math and moral de­ci­sions!”. I think math is an im­por­tant tool in the anal­y­sis (in­ci­den­tally, I’m a math stu­dent ;)), but that you should know its limi­ta­tions and hid­den as­sump­tions in ap­ply­ing math to the real world.

I would con­sider an act of (in­tu­itively wrongful) vi­o­lence based on a 500-page util­ity ex­pec­ta­tion calcu­la­tion no bet­ter than one based on elab­o­rate logic grounded in scrip­ture or ide­ol­ogy.

I think that, af­ter be­ing in­formed by ra­tio­nal­ity about all the value-neu­tral facts, in­tu­ition, as fal­lible as it is, should be the fi­nal ar­biter.

I think these sa­cred (no re­li­gion im­plied) val­ues you men­tion, and es­pe­cially kind­ness, do serve an im­por­tant pur­pose, namely as a safe­guard against the sub­tly flawed logic I’ve been talk­ing about.

• My first reaction to this was, "I don't know; I don't understand 3^^^3 or a googol, or how to compare the suffering from a dust speck with torture." After I thought about it, I decided I was interpreting Eliezer's question like this: as the amount of suffering per person, say a, approaches zero but the number of people suffering, say n, goes to infinity, is the product a*n worse than somebody being tortured for 50 years? The limiting product is undefined, though, isn't it? If a goes to zero fast enough, for example by ceasing to be suffering when it falls below the threshold of notice, then the product is not as bad as the torture. I think several other commenters are thinking about it the same way implicitly, and impose conditions so the limit exists. Andrew did this by putting a lower bound on a, so of course the product gets big, but it's not the same question. Even leaving aside the other contributions to utility like life-altering effects, I'm having trouble making sense of this question.

• Un­known: “There is not at all the same in­tu­itive prob­lem here; it is much like the com­par­i­son made a while ago on Over­com­ing Bias be­tween can­ing and prison time; if some­one is given few enough strokes, he will pre­fer this to a cer­tain amount of prison time, while if the num­ber is con­tinu­ally in­creased, at some point he will pre­fer prison time.”

It may be a psy­cholog­i­cal fact that a per­son will always choose even­tu­ally. But this does not im­ply that those choices were made in a ra­tio­nally con­sis­tent way, or that a ra­tio­nally con­sis­tent ex­ten­sion of the de­ci­sion pro­ce­dures used would in fact in­volve ad­di­tion of as­signed util­ities. Not only might de­ci­sions in oth­er­wise un­re­solv­able cases be made by the men­tal equiv­a­lent of flip­ping a coin, just to end the in­de­ci­sion, but what counts as un­re­solv­able by im­me­di­ate prefer­ence will it­self de­pend on mood, cir­cum­stance, and other con­tin­gen­cies.

Similar con­sid­er­a­tions ap­ply to spec­u­la­tions by moral ra­tio­nal­ists re­gard­ing the form of an ideal­ized per­sonal util­ity func­tion. Ad­di­tivism and in­com­men­su­ra­bil­ism both de­rive from sim­ple moral in­tu­itions—that harms are ad­di­tive, that harms can be qual­i­ta­tively differ­ent—and both have prob­lems—de­ter­min­ing ra­tios of bad­ness ex­actly, lo­cat­ing that qual­i­ta­tive bound­ary ex­actly. Can we agree on that much?

• To put it an­other way, ev­ery­one knows that harms are ad­di­tive.

Is this one of the in­tu­itions that can be wrong, or one of those that can’t?

• My util­ity func­tion doesn’t add the way you seem to think it does. A googol­plex of dusty eyes has the same tiny nega­tive util­ity as one dusty eye as far as I’m con­cerned. Hon­estly. How could any­one pos­si­bly care how many peo­ple’s eyes get dusty. It doesn’t mat­ter. Tor­ture mat­ters a lot. But that’s not re­ally even the point. The point is that a bad thing hap­pen­ing to n peo­ple isn’t n times worse than a bad thing hap­pen­ing to one per­son.

• Ben, ac­cord­ing to your poll sug­ges­tion, we should for­bid driv­ing, be­cause each par­tic­u­lar per­son would no doubt be will­ing to drive a lit­tle bit slower to save lives, and ul­ti­mately hav­ing no one drive at all would save the most lives. But in­stead, peo­ple con­tinue to drive, thereby trad­ing many lives for their con­ve­nience.

Agree­ing with these peo­ple, I’d be quite will­ing to un­dergo the tor­ture per­son­ally, sim­ply in or­der to pre­vent the dust specks for the oth­ers. And so this works in re­verse against your poll.

Mitchell: “You’re in the same boat with the in­com­men­su­ra­bil­ists, un­able to jus­tify their magic di­vid­ing line.” No, not at all. It is true that no one is go­ing to give an ex­act value. But the is­sue is not whether you can give an ex­act value; the is­sue is whether the ex­is­tence of such a value is rea­son­able or not. The in­com­men­su­ra­bil­ists must say that there is some pe­riod of time, or some par­tic­u­lar de­gree of pain, or what­ever, such that a trillion peo­ple suffer­ing for that length of time or that de­gree of pain would always be prefer­able to one per­son suffer­ing for one sec­ond longer or suffer­ing a pain ever so slightly greater. This is the claim which is un­rea­son­able.

If some­one is will­ing to make the tor­ture and specks com­men­su­rable, it is true that this im­plies that there is some num­ber where the specks be­come ex­actly equal to the tor­ture. There is not at all the same in­tu­itive prob­lem here; it is much like the com­par­i­son made a while ago on Over­com­ing Bias be­tween can­ing and prison time; if some­one is given few enough strokes, he will pre­fer this to a cer­tain amount of prison time, while if the num­ber is con­tinu­ally in­creased, at some point he will pre­fer prison time.

• As was pointed out last time, if you in­sist that no quan­tity of dust-specks-in-in­di­vi­d­ual-eyes is com­pa­rable to one in­stance of tor­ture, then what is your bound­ary case? What about ‘half-tor­ture’, ‘quar­ter-tor­ture’, ‘mil­lionth-tor­ture’? Once you posit a qual­i­ta­tive dis­tinc­tion be­tween the bad­ness of differ­ent classes of ex­pe­rience, such that no quan­tity of ex­pe­riences in one class can pos­si­bly be worse than a sin­gle ex­pe­rience in the other class, then you have posited the ex­is­tence of a sharp di­vid­ing line on what ap­pears to be a con­tinuum of pos­si­ble in­di­vi­d­ual ex­pe­riences.

But if we adopt the con­verse po­si­tion, and as­sume that all ex­pe­riences are com­men­su­rable and ad­di­tive ag­gre­ga­tion of util­ity makes sense with­out ex­cep­tion—then we are say­ing that there is an ex­act quan­tity which mea­sures pre­cisely how much worse an in­stance of tor­ture is than an in­stance of eye ir­ri­ta­tion. This is ob­scured by the origi­nal ex­am­ple, in which an in­con­ceiv­ably large num­ber is em­ployed to make the point that if you ac­cept ad­di­tive ag­gre­ga­tion of util­ities as a uni­ver­sal prin­ci­ple, then there must come a point when the specks are worse than the tor­ture. But there must be a bound­ary case here as well: some num­ber N such that, if there are more than N specks-in-eyes, it’s worse than the tor­ture, but if there are N or less, the tor­ture wins out.

Can any ad­vo­cates of ad­di­tive ag­gre­ga­tion of util­ity defend a par­tic­u­lar value for N? Be­cause if not, you’re in the same boat with the in­com­men­su­ra­bil­ists, un­able to jus­tify their magic di­vid­ing line.
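The challenge to name a particular N can at least be stated precisely. Under strict additive aggregation, N is just the ratio of two disutility weights; both weights below are made-up assumptions, which is the commenter's point, since any defense of a particular N is really a defense of those weights:

```python
# Under additive aggregation, the crossover N is a ratio of two
# disutility weights. Both values here are arbitrary assumptions.
speck_disutility = 1.0       # assumed: one dust speck = 1 unit
torture_disutility = 1e17    # assumed: 50 years of torture, same units

crossover_N = torture_disutility / speck_disutility
print(crossover_N)  # 1e+17: beyond this many specks, the specks are worse
```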

• I’m not un­able to jus­tify the “magic di­vid­ing line.”

The world with the tor­ture gives 3^^^3 peo­ple the op­por­tu­nity to lead a full, thriv­ing life.

The world with the specks gives 3^^^3+1 people the opportunity to lead a full, thriving life.

The sec­ond one is bet­ter.

• Couldn’t you ar­gue this the op­po­site way? That life is such mis­ery, that ex­tra tor­ture isn’t re­ally adding to it.

The world with the tor­ture gives 3^^^3+1 suffer­ing souls a life of mis­ery, suffer­ing and tor­ture.

The world with the specks gives 3^^^3+1 suffering souls a life of misery, suffering and torture, only basically everyone gets extra specks of dust in their eye.

In which case, the first is bet­ter?

• Lee:

Models are sup­posed to hew to the facts. Your model di­verges from the facts of hu­man moral judg­ments, and you re­spond by ex­hort­ing us to live up to your model.

Be care­ful not to con­fuse “is” and “ought”. Eliezer is not propos­ing an em­piri­cal model of hu­man psy­chol­ogy (“is”); what he is propos­ing is a nor­ma­tive the­ory (“ought”), ac­cord­ing to which hu­man in­tu­itive judge­ments may turn out to be wrong.

If what you want is an em­piri­cal the­ory that ac­cu­rately pre­dicts the judge­ments peo­ple will make, see de­nis bider’s com­ment of Jan­uary 22, 2008 at 06:49 PM.

• I think “Shut up and Mul­ti­ply” would be a good tagline for this blog, and a nice slo­gan for us anti-bias types in gen­eral!

• I'm not sure I understand at what point the torture would no longer be justified. It's easy to say that one person being tortured is preferable to a googolplex of people getting dust specks in their eyes, but there has to be some number at which this is no longer the case. At some point even your preferences should flip, but you never suggest a point where it would be acceptable. Would it be somewhere around 1.5-1.6 billion, assuming the dust specks were worth 1 second of pain? Is it acceptable if it is just 2 people affected? How many dust specks go into 1 year of torture? I think people would be more comfortable with your conclusion if you had some way to quantify it; right now all we have is your assertion that the math is in the dust specks' favor.

• As I un­der­stand it, the math is in the dust speck’s fa­vor be­cause EY used an ar­bi­trar­ily large num­ber such that it couldn’t pos­si­bly be oth­er­wise.

I think a better comparison would be between 1 second of torture (which I'd estimate is worth multiple dust specks, assuming it's not hard to get them out of your eye) and 50 years of torture, in which case yes, it would flip around 1.5 billion. That is of course assuming that you don't have a term in your utility function where sharing of burdens is valuable; I assume EY would be fine with that, but would insist that you implement it in the intermediate calculations as well.
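The ~1.5 billion figure is just 50 years expressed in seconds; a quick check (assuming, as the comment does, that one dust speck trades against one second of torture):

```python
# 50 years of torture, in seconds: the crossover point if one speck
# is assumed equivalent to one second of torture.
seconds_per_year = 365.25 * 24 * 3600     # ~31.6 million
seconds_in_50_years = 50 * seconds_per_year
print(int(seconds_in_50_years))  # 1577880000, i.e. ~1.58 billion
```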

• “My fa­vorite anec­dote along these lines—though my books are packed at the mo­ment, so no cita­tion for now—comes from a team of re­searchers who eval­u­ated the effec­tive­ness of a cer­tain pro­ject, calcu­lat­ing the cost per life saved, and recom­mended to the gov­ern­ment that the pro­ject be im­ple­mented be­cause it was cost-effec­tive. The gov­ern­men­tal agency re­jected the re­port be­cause, they said, you couldn’t put a dol­lar value on hu­man life. After re­ject­ing the re­port, the agency de­cided [i]not[/​i] to im­ple­ment the mea­sure.”

Does anyone know of a citation for this? Because I'd really like to be able to share it. I found this really, really hilarious until I realized that, according to Eliezer, it actually happened and killed people. Although it's still hilarious, just simultaneously horrifying. It sounds like somebody misunderstood the point of their own moral grandstanding. (On the other hand, I suppose a Deontologist could in fact say "you can't put a dollar value on human life" and literally mean "comparing human lives to dollars is inherently immoral", not "human lives have a value of infinity dollars". To me as a consequentialist the former seems even stupider than the latter, but in deontology it's acceptable moral reasoning.)

• Let’s turn the ar­gu­ment on its head: if you could abol­ish tor­ture at the cost of ev­ery­body get­ting a speck of dust in their eye, would you do it?

Next ques­tion: If you could stop the Holo­caust at the cost of N peo­ple get­ting dust specks in their eye, what’s the max­i­mum value you’d per­mit for N? That is, is there a num­ber N with the prop­erty that if N peo­ple get dust specks in their eye, you pre­vent the Holo­caust, but if N+1 peo­ple get dust specks in their eye, then the Holo­caust pro­ceeds on sched­ule?

Ar­gu­ments that be­gin with some­thing be­nign and pro­ceed by de­gree to silly con­clu­sions have been around since an­cient times. Dress­ing them up in the lan­guage of math­e­mat­ics doesn’t change their na­ture. The kind of util­i­tar­ian model pre­sented here is not an ac­cu­rate re­flec­tion of the real world. Lo­cally you can get rea­son­able re­sults if you de­cide to treat similar things on a lin­ear scale, but for dis­parate things the lin­ear ap­prox­i­ma­tion breaks down. You can still plug num­bers into the model, but the an­swer is mean­ingless.

• Raise your hand if you (yes you, the person reading this) will submit to 50 years of torture in order to avert the “least bad” dust speck momentarily finding its way into the eyes of an unimaginably large number of people.

Why was it not writ­ten “I, Eliezer Yud­kowsky, should choose to sub­mit to 50 years of tor­ture in place of a googol­plex peo­ple get­ting dust specks in their eyes”?

Why re­strict your­self to the com­fort­ing dis­tance of om­ni­science?

Did Miyamoto Musashi ever ex­hort the reader to ask his sword what he should want? Why is this not a case of us­ing a tool as an end in and of it­self rather than as a means to achieve a de­sired end?

Are you ir­ra­tional if your some­thing to pro­tect is your­self...from tor­ture?

Has any­one ever ad­dressed whether or not this ap­plies to the AGI Utility Mon­ster whose ex­pe­ri­en­tial ca­pac­ity would pre­sum­ably ex­ceed the ~7 billion hu­mans who should ra­tio­nally sub­serve Its in­ter­ests (what­ever they may be)?

• I suffer un­der no delu­sion that I’m a morally perfect in­di­vi­d­ual.

You seem to believe that to identify what’s the morally correct path, one must also be willing to follow it. Morality pushes our wills in that direction, but selfishness has its own role to play, and here it pushes elsewhere.

But yes, I am willing to say that I should submit to 50 years of torture in order to save 3^^^3 people from getting dust specks in their eyes. I’ll also openly admit that I am not willing to submit to such. This is not contradictory: “should” is a moral judgment, but being willing to be moral at such high cost is another thing entirely.

• I would not sub­mit to 50 years of tor­ture to avert a dust speck in the eyes of lots of peo­ple.
I sus­pect I also would not sub­mit to 50 years of tor­ture to avert a stranger be­ing sub­jected to 55 years of tor­ture.
It’s not clear to me what, if any­thing, I should in­fer from this.

• That you value your­self more than a stranger. (I don’t think there’s any­thing wrong with that, BTW, so long as this doesn’t mean you’d defect in a PD against them.)

• Sure. Sorry, what I meant was it’s not clear what I should in­fer from this about the rel­a­tive harm­ful­ness of 50 years of tor­ture, 55 years of tor­ture, and Dust Specks.

Mostly, what it seems to im­ply is that “would I choose A over B?” doesn’t nec­es­sar­ily have much to do with the harm­ful­ness to the sys­tem as a whole of A and B.

• Ready the tar and feathers, but I wouldn’t submit myself to even 1 year of torture to avert a stranger being tortured for 50 years if no terrible social repercussions could be expected.

• Yup. I sus­pect that’s true of the over­whelming ma­jor­ity of peo­ple. It’s most likely true of me.

• Why was it not writ­ten “I, Eliezer Yud­kowsky, should choose to sub­mit to 50 years of tor­ture in place of a googol­plex peo­ple get­ting dust specks in their eyes”?

• Because then it’s clearly not the same argument anymore, and would appeal only to people who subscribe to an even narrower form of incredibly altruistic utilitarianism, who I personally suspect don’t even exist, statistically speaking. If the person chosen for torture were random, it would make a bit more sense, but it would essentially be the same argument given the ridiculously high numbers involved.

• What’s worse, steal­ing one cent each from 5,000,000 peo­ple, or steal­ing \$49,999 from one per­son? (Let us fur­ther as­sume that money has some util­ity value.)

If we de­cide we can just add the diminished wealth to­gether, the former is clearly one cent worse: \$50,000 is stolen, as op­posed to \$49,999. But this doesn’t take into ac­count that loss of util­ity grows with each cent lost from the same per­son. Los­ing one cent won’t bother me at all; ev­ery­one else who had a cent stolen would prob­a­bly feel the same way. How­ever, \$49,999 from one per­son is enough to ruin their life: nu­mer­i­cally, less was stolen over­all, but the util­ity loss grows in­cred­ibly as it is con­cen­trated.
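
A minimal sketch of this comparison, under the assumption (not stated in the comment) that everyone starts with the same wealth and that utility is logarithmic in wealth; the starting figure is purely illustrative:

```python
import math

def u(wealth: float) -> float:
    # Assumed concave utility: log of wealth (an illustration, not a claim
    # about anyone's actual utility function).
    return math.log(wealth)

START = 50_000.0  # hypothetical starting wealth for everyone

# 5,000,000 people each lose one cent:
spread_loss = 5_000_000 * (u(START) - u(START - 0.01))

# One person loses $49,999, leaving them with one dollar:
concentrated_loss = u(START) - u(START - 49_999.0)

print(f"spread:       {spread_loss:.2f} utilons")        # about 1.0
print(f"concentrated: {concentrated_loss:.2f} utilons")  # about 10.8
```

Under this toy model the concentrated theft destroys roughly ten times as much utility, even though one cent less money is stolen in total, which is the commenter's point about concentration.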

Another case: is an eye-mote a second for a year (31,556,926 motes) in one person better than 31,556,927 motes spread out evenly among 31,556,927 people? The former case would involve serious loss of utilons, whereas a single mote is quickly fixed and forgotten: qualitatively different from constant irritation. The loss of utilons from dust motes can thus be concentrated and added, but not spread out and then added. (I think this may indicate that time and memory play a factor in this, since they do in the mechanism of suffering.)

In other words, a negligible amount of utility loss cannot be multiplied into something worse than a concentrated, non-negligible utility loss. If none of the people involved in the negligible group suffer individually, they obviously can’t be suffering as a group, either (what would be doing the suffering? A group is not an entity!).

How­ever, I have read re­fu­ta­tions of this that say “well just re­place dust specks with some­thing that does cause suffer­ing.” I have no prob­lem with that; there may be “non-triv­ial” pain and “non-triv­ial” plea­sure that can be added. So in the stubbed-toes ex­am­ple, it might be non-triv­ial, since it is con­cen­trated enough to mat­ter to the in­di­vi­d­ual and cause suffer­ing; and suffer­ing is ad­di­tive.

Per­haps there is such a line in­nately built into hu­man biol­ogy, be­tween “triv­ial” and “non-triv­ial”. Eye-motes can’t ever re­ally de­grade the qual­ity of our lives, so can­not be used in ex­am­ples of this kind. But in the case of one per­son be­ing tor­tured slightly worse than ten peo­ple be­ing tor­tured slightly less, the non-triv­ial suffer­ing of the ten can be con­sid­ered to be ad­di­tive. This also solves this prob­lem.

• loss of util­ity per cent grows ex­po­nen­tially with each cent lost.

On this end of the scale, it grows (I’m not sure if it’s ex­po­nen­tial), but it doesn’t grow in­definitely; even­tu­ally it starts fal­ling.

• A good point. I’ve ed­ited to rephrase.

• What func­tion is that? I thought hu­man util­ity over money was roughly log­a­r­ith­mic, in which case loss of util­ity per cent lost would grow un­til (the­o­ret­i­cally) hit­ting an asymp­tote. (Also, why would it make sense for it to even­tu­ally start fal­ling?)

• I thought hu­man util­ity over money was roughly log­a­r­ith­mic, in which case loss of util­ity per cent lost would grow un­til (the­o­ret­i­cally) hit­ting an asymp­tote.

So you’re say­ing that be­ing broke is in­finite di­su­til­ity. How se­ri­ously have you thought about the re­al­ism of this model?

• Ob­vi­ously I didn’t mean that be­ing broke (or any­thing) is in­finite di­su­til­ity. Am I mis­taken that the util­ity of money is oth­er­wise mod­eled as log­a­r­ith­mic gen­er­ally?

• Ob­vi­ously I didn’t mean that be­ing broke (or any­thing) is in­finite di­su­til­ity.

Then what asymp­tote were you refer­ring to?

• It was in re­sponse to the “in­definitely” in the par­ent com­ment, but I think I was just think­ing of the func­tion and not about how to ap­ply it to hu­mans. So ac­tu­ally your origi­nal re­sponse was pretty much ex­actly cor­rect.
It was a silly thing to say.

I won­der if it’s cor­rect, then, that the marginal di­su­til­ity (ac­cord­ing to what­ever prefer­ences are re­vealed by how peo­ple ac­tu­ally act) of the loss of an­other dol­lar ac­tu­ally does even­tu­ally start de­creas­ing when a per­son is in enough debt. That seems hu­manly plau­si­ble.

• I have no idea what func­tion it is. I also don’t re­ally have a work­ing un­der­stand­ing of what “log­a­r­ith­mic” is. It starts fal­ling be­cause when you’re deal­ing in the thou­sands of dol­lars, the next dol­lar mat­ters less than it did when you were deal­ing in the tens of dol­lars.

• Oh, okay, I think we’re talk­ing about the same func­tion in differ­ent terms. You’re talk­ing in terms of the util­ity func­tion it­self, and I was talk­ing about how much the growth rate falls as the amount of money de­creases from some pos­i­tive start­ing point, since that’s what Hul-Gil seemed to be talk­ing about. (I think that would be hy­per­bolic rather than ex­po­nen­tial, though.)

The util­ity func­tion it­self does grow in­definitely; just re­ally slowly at some point. And at no point is its own growth speed­ing up rather than slow­ing down.
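
For concreteness, here is a sketch of that shape under the pure log model u(w) = ln(w) (an assumption; nobody in the thread committed to exactly this function): the disutility of losing the next dollar keeps growing as wealth shrinks, without ever turning around.

```python
import math

def marginal_disutility(wealth: float, delta: float = 1.0) -> float:
    # Utility lost by dropping from `wealth` to `wealth - delta`,
    # under the assumed log model u(w) = ln(w).
    return math.log(wealth) - math.log(wealth - delta)

# The poorer you are, the more the next lost dollar hurts:
for w in (10_000.0, 1_000.0, 100.0, 10.0, 2.0):
    print(f"wealth {w:>8,.0f}: losing $1 costs {marginal_disutility(w):.5f} utilons")
```

Whether real humans behave like this near zero wealth (or in debt) is exactly the question being debated above; the model itself only says the marginal loss grows monotonically.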

• If that pop­u­la­tion of 400 or 500 peo­ple is all that’s left of Homo Sapi­ens, then it’s ob­vi­ous to me that keep­ing 400 with prob­a­bil­ity 1 is bet­ter than keep­ing 500 with prob­a­bil­ity 0.9. Re­pop­u­lat­ing start­ing with 400 doesn’t seem much harder than re­pop­u­lat­ing start­ing with 500, but re­pop­u­lat­ing start­ing with 0 is ob­vi­ously im­pos­si­ble.
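
This intuition can be made numerical: give the no-survivors outcome a catastrophic penalty instead of merely zero lives saved, and the straight expected-utility comparison itself favors the sure thing. The penalty value below is purely illustrative.

```python
# Illustrative: why extinction risk can flip the expected-value comparison.
EXTINCTION_PENALTY = -1e12   # hypothetical stand-in for "unrecoverable"

def expected_utility(p_success: float, lives_saved: int) -> float:
    # Linear in lives while the species survives; huge penalty otherwise.
    return p_success * lives_saved + (1 - p_success) * EXTINCTION_PENALTY

certain_400 = expected_utility(1.0, 400)   # 400.0
gamble_500 = expected_utility(0.9, 500)    # 450 minus a hundred billion

print(certain_400 > gamble_500)  # True
```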

If we change the num­bers a lit­tle bit, we get a differ­ent and still in­ter­est­ing ex­am­ple. I don’t see an im­por­tant differ­ence be­tween hav­ing 10 billion happy peo­ple and hav­ing 100 billion happy peo­ple. I can’t vi­su­al­ize ei­ther num­ber, and I have no rea­son to be­lieve that hav­ing an­other 90 billion af­ter the first 10 billion gives any­thing I care about.

Mul­ti­ply­ing in­di­vi­d­ual util­ity by num­ber of peo­ple to get to­tal util­ity is a mis­take, IMO. I don’t know what the cor­rect solu­tion is, but that’s not it.

• http://wiki.lesswrong.com/wiki/Shut_up_and_multiply The “Shut up and multiply” article on the wiki (markup troubles...), taken in conjunction with the following out-of-context paragraph, strongly implies to readers of the wiki that this post is about the moral imperative to reproduce:

You know what? This isn’t about your feel­ings. A hu­man life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feel­ings of com­fort or dis­com­fort with a plan. Does com­put­ing the ex­pected util­ity feel too cold-blooded for your taste? Well, that feel­ing isn’t even a feather in the scales, when a life is at stake. Just shut up and mul­ti­ply.

...is this in­ten­tional or un­in­ten­tional sub­text, and if the lat­ter, do you in­tend to re­vise the word to “calcu­late” as some peo­ple quot­ing the post have done or not bother since ap­par­ently no­body but me no­ticed in the first place?

• “I’ve won­dered in the past if per­haps the best thing LW mem­bers could do if the sin­gu­lar­ity is more than 80 (4 gen­er­a­tions) years away was sim­ply to breed like Amish...”

An Amish with a cry­on­ics fa­cil­ity. You know, that’s un­heard of!

• Huh. Dangit, I had ctrl+F’d for “children” and “reproduction” and “pregnancy” in this post and foolishly assumed that was conclusive evidence.

• You know, I have always said that the trol­ley prob­lem in which you push the fat man onto the tracks to save sev­eral lives is im­moral, be­cause you are treat­ing him as a means rather than an end.

I’ve just no­ticed that if I re­frame it in such a way that it’s not so per­sonal, my in­tu­ition changes. For ex­am­ple, sup­pose it were a ques­tion of push­ing a trol­ley con­tain­ing one (un­known) per­son in front of an­other (empty) run­away trol­ley, in or­der to stop the run­away one from hit­ting a third trol­ley con­tain­ing 4 peo­ple. Sud­denly I’m ac­tively kil­ling the guy.

• Push­ing a fat man onto the tracks to save sev­eral lives is gen­er­ally con­sid­ered to be im­moral be­cause you are USING a per­son to achieve some goal.

In your case, you are only us­ing the trol­ley con­tain­ing the man to stop the death of four peo­ple. You are NOT us­ing the man be­cause the trol­ley would work re­gard­less of whether or not he is pre­sent. Thus, it is mere mis­for­tune that he is pre­sent and kil­led—ex­actly as if he were on a sid­ing where you di­verted a train to save ten peo­ple.

• Ben: as I said when I brought up the sand example, Eliezer used dust specks to illustrate the “least bad” bad thing that can happen. If you think that it is not even a bad thing, then of course the point will not apply. In this case you should simply move to the least thing which you consider to be actually bad.

• tcp­kac, if a hun­dred peo­ple ex­pe­rienc­ing 49 years of tor­ture is worse than one per­son ex­pe­rienc­ing 50 years, then yes, you can and must com­pare. Whether you ex­tend this right down to 3^^^3 dust specks is an­other mat­ter. There might not be a for­mal frame­work for some­thing so sub­jec­tive, but there are ob­vi­ous in­con­sis­ten­cies with flat re­fusal to sum, even with some­thing as ab­stract as ‘pain’. Would an AGI need a for­mula for sum­ming dis­com­fort over mul­ti­ple peo­ple/​one per­son? Who gets to write that? Yeesh.

a brief mo­ment of ir­ri­ta­tion, enough to no­tice for a mo­ment, so at least one pain neu­ron is firing

I’m not a neurobiologist—is this how it works? Are there neurons whose specific job it is to deliver ‘pain’ messages? If we’re reducing down to this level, can we actually measure pain? More importantly, even if we can, can we go on to assume that there is an uninterrupted progression in the neurological mechanism, all the way up this scale, from dust mote to torture? For me, it’s not clear whether blinking away a dust mote falls under ‘pain’ or ‘sensation’. Nipple piercing is much clearer, and hence I have no problem saying 3^^^3 Piercings > Torture.

• An AGI pro­ject would pre­sum­ably need a gen­er­ally ac­cepted, wa­ter­tight, ax­iom based, for­mal sys­tem of ethics, whose rules can re­li­ably be ap­plied right up to limit cases. I am guess­ing that that is the rea­son why Eliezer et al are ar­gu­ing from the ba­sis that such an an­i­mal ex­ists.
If it does, please point to it. The FHI has ethics spe­cial­ists on its staff, what do they have to say on the sub­ject ?
Based on the cur­rent dis­cus­sion, such an an­i­mal, at least as far as ‘gen­er­ally ac­cepted’ goes, does not ex­ist. My be­lief is that what we have are more or less con­sen­sual guidelines which ap­ply to situ­a­tions and choices within hu­man ex­pe­rience. Un­known’s ex­am­ples, for in­stance, tend to be ‘mid­dle of the range’ ones. When we get to­wards the limits of ev­ery­day ex­pe­rience, these guidelines break down.
Eliezer has not pro­vided us with a for­mal frame­work within which sum­ming over sin­gle ex­pe­riences for mul­ti­ple peo­ple can be com­pared to sum­ming over mul­ti­ple ex­pe­riences for one per­son. For me it stops there.

• I don’t think this is too difficult to understand. In both situations, the deciders don’t want to think of themselves as possibly responsible for avoidable death. In the first scenario, you don’t want to be the guy who made a gamble and everyone died. In the second, you don’t want to choose for 100 people to die. People make different choices in the two situations because they want to minimize moral culpability.

Is that rational? Strictly speaking, maybe not. Is it human? Absolutely!

• Ra­tional yes, if other peo­ple know of the de­ci­sion. If you never find out the re­sult of the gam­ble, are not held re­spon­si­ble and have your mem­ory wiped, then all con­found­ing in­ter­ests are wiped ex­cept the de­sire for peo­ple not to die. Only then are the ir­ra­tional op­tions ac­tu­ally ir­ra­tional.

• Cale­do­nian, offer­ing an al­ter­na­tive ex­pla­na­tion for the ev­i­dence does not im­ply that it is not ev­i­dence that Eliezer ex­pends some re­sources over­com­ing bias: it sim­ply shows that the ev­i­dence is not con­clu­sive. In fact, ev­i­dence usu­ally can be ex­plained in sev­eral differ­ent ways.

• Ah, but Un­known, know­ing it can be done in the ab­stract isn’t the same as see­ing it done.

So, Paul: Do enough dust specks—which by hy­poth­e­sis, do pro­duce a brief mo­ment of ir­ri­ta­tion, enough to no­tice for a mo­ment, so at least one pain neu­ron is firing—add up to ex­pe­rienc­ing a brief itch on your arm that you have to scratch? Also, do enough cases of feel­ing your foot come down mo­men­tar­ily on a hard peb­ble, add up to a toe stub that hurts for a few sec­onds?

How about listen­ing to loud mu­sic blast­ing from a car out­side your win­dow—would enough in­stances of that ever add up to one case of be­ing forced to watch “Plan Nine from Outer Space” twice in a row?

And can you swear on a stack of copies of “The Ori­gin of Species” that you would have given the same an­swers to all those ques­tions, if I’d asked you be­fore you’d ever heard of the Dust Specks Dilemma?

• Eliezer—no, I don’t think there is. At least, not if the dust specks are distributed over multiple people. Maybe localized in one person—a dust speck every tenth of a second for a sufficiently long period of time might add up to a toe stub.

• The fact that Eliezer has changed his mind sev­eral times on Over­com­ing Bias is ev­i­dence that he ex­pends some re­sources over­com­ing bias; if he didn’t, we would ex­pect ex­actly what you say. It is true that he hasn’t changed his mind of­ten, so this fact (at least by it­self) is not ev­i­dence that he ex­pends many re­sources in this way.

• 1. In this whole series of posts you are silently presupposing that utilitarianism is the only rational system of ethics. Which is strange, because if people have different utility functions, Arrow’s impossibility theorem makes it impossible to arrive at a “rational” (in this blog’s Bayesian-consistent abuse of the term) aggregate utility function. So irrationality is not only rational but the only rational option. Funny what people will sell as overcoming bias.

2. In this particular case the introductory example fails, because 1 killing != −1 saving. Removing a drowning man from the pool is obviously better than merely abstaining from drowning another man in the pool.

3. The feel­ing of su­pe­ri­or­ity over all those bi­ased pro­les is a bias. In fact it is very ob­vi­ously among your main bi­ases and con­se­quently one you should spend a dis­pro­por­tional amount of re­sources on over­com­ing.

• Removing a drowning man from the pool is obviously better than merely abstaining from drowning another man in the pool.

I don’t think it’s ob­vi­ous. Thought ex­per­i­ment: Steve is kil­ling Ann by drown­ing, and Beth is about to drown by ac­ci­dent nearby. I have a cell phone con­nec­tion open to Steve, and I have time to ei­ther con­vince Steve to stop drown­ing Ann, or to con­vince Steve to save Beth but still drown Ann. It is not ob­vi­ous to me that I should choose the lat­ter op­tion.

• Do you mean to as­sert that choos­ing the lat­ter op­tion in your sce­nario and the former op­tion in Salu­ta­tor’s sce­nario is in­con­sis­tent?

If so, you might want to un­pack your think­ing a lit­tle more, as I don’t fol­low it. What you’ve de­scribed in your thought ex­per­i­ment isn’t a choice be­tween res­cu­ing a drown­ing per­son and ab­stain­ing from drown­ing a per­son, and the differ­ence seems po­ten­tially im­por­tant.

• The op­tions I’m choos­ing be­tween are Steve res­cu­ing a drown­ing per­son and Steve ab­stain­ing from drown­ing a per­son. If one of those op­tions is ob­vi­ously bet­ter than the other, then the same re­la­tion­ship should hold when I can choose Steve’s ac­tions rather than my own.

• Ah!
Either I can con­vince Steve to stop drown­ing Ann, or [con­vince Steve to] save Beth.
I get it now.
I had read it as ei­ther I can con­vince Steve to stop drown­ing Ann, or [I can] save Beth.
Thanks for the clar­ifi­ca­tion… I’d been gen­uinely con­fused.

• I’ve ed­ited it to hope­fully make it un­am­bigu­ous—I hope no one reads that as Steve con­vinc­ing him­self.

• Paul, is there a num­ber of dust specks that add up to stub­bing your toe—not smash­ing it or any­thing, but stub­bing it painfully enough that you very definitely no­tice, and it throbs for a few sec­onds be­fore fad­ing?

• Anon wrote: “Any ques­tion of ethics is en­tirely an­swered by ar­bi­trar­ily cho­sen eth­i­cal sys­tem, there­fore there are no “right” or “bet­ter” an­swers.”

Mat­ters of prefer­ence are en­tirely sub­jec­tive, but for any evolved agent they are far from ar­bi­trary, and sub­ject to in­creas­ing agree­ment to the ex­tent that they re­flect in­creas­ingly fun­da­men­tal val­ues in com­mon.

• Some of us would pre­fer to kill them all, re­gard­less.

• I think I’m go­ing to have to write an­other of my own posts on this (hadn’t I already?), when I have time. Which might not be for a while—which might be never—we’ll see.

For now, let me ask you this Eliezer: of­ten, we think that our in­tu­itions about cases provide a re­li­able guide to moral­ity. Without that, there’s a se­ri­ous ques­tion about where our moral prin­ci­ples come from. (I, for one, think that ques­tion has its most se­ri­ous bite right on util­i­tar­ian moral prin­ci­ples… at least Kant, say, had an ar­gu­ment about how the na­ture of moral claims leads to his prin­ci­ples.)

So sup­pose—hy­po­thet­i­cally, and I do mean hy­po­thet­i­cally—that our best ar­gu­ment for the claim “one ought to max­i­mize net welfare” comes by in­duc­tion from our in­tu­itions about in­di­vi­d­ual cases. Could we then le­gi­t­i­mately use that prin­ci­ple to defend the op­po­site of our in­tu­itions about cases like this?

More later, I hope.

• After trying several ideas, I realized that my personal utility function converges, among its other features. And it’s obvious in retrospect. After all, there’s only so much horror I can feel. But while you call this nasty names like “scope insensitivity”, I embrace it. It’s my utility function. It’s not good or bad or wrong or biased, it just is. (Scope insensitivity with regard to probabilities is, of course, still bad.)

I still think that one man should be tor­tured a lot in­stead of many be­ing tor­tured slightly less, be­cause higher in­di­vi­d­ual suffer­ing re­sults in a higher point of con­ver­gence.

This also ex­plains why our minds re­ject “Pas­cal’s mug­ging”.

• what about be­ing tied down with your eyes taped open, and then a hand­ful of sand thrown in your face

Un­known—this is called tor­ture, and as such would reg­ister on my tor­ture scale. Is it as bad as wa­ter­board­ing? No. Do I mea­sure them on a com­pa­rable scale? Yes. Can I, hence, imag­ine a value for N where N(Sand) > (Water­board­ing)? Yes, I can. I stand by my pre­vi­ous as­ser­tion.

How­ever, I’m be­gin­ning to see that this is a prob­lem of in­ter­pre­ta­tion. I am fully on board with Eliezer’s math, I’m happy to shut up and mul­ti­ply lives by prob­a­bil­ities, and I do have gen­uine doubts about whether I’m bas­ing my de­ci­sion on squeamish­ness. I hope not. But cur­rently I see no rea­son to think I am.

Each per­son merely thinks that he wouldn’t mind suffer­ing a speck as an in­di­vi­d­ual in or­der to save some­one from tor­ture.

Each per­son merely thinks? But you would re­tort, con­fi­dently, that they are in fact er­ro­neous and ir­ra­tional, you are ra­tio­nal and cor­rect, and ask that the rods be in­serted posthaste? To be hon­est mate, even if I did per­son­ally be­lieve N dust motes ‘added up to tor­ture’, if all those peo­ple said ‘don’t bother, we’ll take the dust’, I’d do so. If only be­cause the per­ceived au­thor­ity of that many peo­ple as­sert­ing some­thing would (by Eliezer’s own logic) amount to enor­mous ev­i­dence that my de­ci­sion was wrong. And this is vin­di­cated (for peo­ple like me at least) in that if I were one of the 3^^^3 and you were the un­lucky one, I’d urge you to re­con­sider along with ev­ery­one else.

What if they each are will­ing to be tor­tured for 25 years? Is it bet­ter to tor­ture a googol­plex peo­ple for 25 years than one per­son for 50 years?

Th­ese are two differ­ent ques­tions. The first is about peo­ple tel­ling you what they’re will­ing to do. The sec­ond is you de­cid­ing what gets done. The sec­ond is the sce­nario that we’re con­fronted with, and my pre­vi­ous com­ment ad­dresses that ques­tion.

Larry, you’re not right. Two peo­ple get­ting dust motes in their eye is worse than one. Two peo­ple get­ting tor­tured is worse than one.

• Do we put no weight on the fact that if you pol­led the 3^^^3 peo­ple and asked them whether they would all un­dergo one dust speck to save one per­son from 50 years of tor­ture, they’d al­most cer­tainly all say yes?

What if they each are will­ing to be tor­tured for 25 years? Is it bet­ter to tor­ture a googol­plex peo­ple for 25 years than one per­son for 50 years?

• Un­known,

such that a trillion peo­ple suffer­ing for that length of time or that de­gree of pain would always be prefer­able to one per­son suffer­ing for one sec­ond longer or suffer­ing a pain ever so slightly greater.

As I wrote yes­ter­day, a dust mote reg­isters ex­actly zero on my tor­ture scale, and tor­ture reg­isters fun­da­men­tally off the scale (not just off the top, off) on my dust mote scale. Tor­ture can be placed, if not dis­cretely quan­tified, on my tor­ture scale. Dust motes can not. If it’s a short enough space of time that you could con­vince me N dust motes would be worse, I’d say your idea of tor­ture is differ­ent from mine.

• I think this article could have been improved by splitting it into two; one of them to discuss the original problem (is it better to save 400 for sure than to gamble on saving 500 with 90% probability), and the other to discuss the reasons why people pick the other one if you rephrase the question. They’re both interesting, but presenting them at once makes the discussion too confused.

And the sec­ond half… specks of dust in the eye and tor­ture can both be de­scribed as “bad things”. That doesn’t mean they’re the same kind of thing with differ­ent mag­ni­tudes. That was mostly a waste of time to me.

• The com­ments on this post are no bet­ter than those on the Tor­ture vs. Dust Specks post. In other words, sim­ply bring the word “tor­ture” into the dis­cus­sion and peo­ple au­to­mat­i­cally be­come ir­ra­tional. It’s hap­pened to some of the other threads as well, when some­one men­tioned tor­ture.

It strongly sug­gests that not many of the read­ers have made much progress in over­com­ing their bi­ases.

By the way, Eliezer has cor­rected the origi­nal post; anony­mous was cor­rect about the num­bers.

• I would sim­ply ar­gue that a dust speck has 0 di­su­til­ity.

• Pick some other in­con­ve­nience which has a small but non-zero di­su­til­ity and re­peat the ex­er­cise.

• I’m not dis­put­ing the val­idity of the thought pro­cess. I don’t think the ex­am­ple was well cho­sen, how­ever. A dust speck, ig­nor­ing ex­ter­nal­ities, doesn’t af­fect any­thing. Us­ing even a pin­prick would have made the ex­am­ple far bet­ter.

• That’d be Fight­ing the Hy­po­thet­i­cal.

• It’s an ex­tremely hy­po­thet­i­cal situ­a­tion. How­ever, why should it, ig­nor­ing ex­ter­nal­ities as the prob­lem re­quired, be mea­sured at any di­su­til­ity? That dust speck has no im­pact on my life in any way, other than mak­ing me blink. No pain is in­volved.

• Be­cause it’s one of the pa­ram­e­ters of the thought ex­per­i­ment that a dust speck causes a minis­cule amount of di­su­til­ity.

• I think I understand the point of the recent series of posts, but I find them rather unsatisfying. It seems to me that there is a problem with translating emotional situations into probability calculations. This is a very real and interesting problem, but saying “shut up and multiply” is not a good way to approach it. Borrowing from ‘A Technical Explanation’, it’s kind of like the blue tentacle question. When I am asked what I would do when faced with the choice between a googolplex of dust specks or 50 years of torture, my reaction is: But that would never happen! Or, perhaps, I would tell the psychopath who was trying to force me to make such a choice to go f- himself.

• A life barely worth liv­ing is worth liv­ing. I see no press­ing need to dis­agree with the Repug­nant Con­clu­sion it­self.

How­ever, I sus­pect there is a lot of con­fu­sion be­tween “a life barely worth liv­ing” and “a life barely good enough that the per­son won’t com­mit suicide”.

A life barely good enough that the per­son won’t com­mit suicide is well into the nega­tives.

• Not to men­tion the con­fu­sion be­tween “a life barely worth liv­ing” and “a life that has some typ­i­cal num­ber of bad ex­pe­riences in it and barely any good ex­pe­riences”.

• I don’t understand why it’s supposed to be somehow better to have more people, even if they are equally happy. 10 billion happy people is better than 5 billion equally happy people? Why? It makes no intuitive sense to me, I have no innate preference between the two (all else equal), and yet I’m supposed to accept it as a premise.

• It makes some sense in terms of to­tal hap­piness, since 10 billion happy peo­ple would give a higher to­tal hap­piness than 5 billion happy peo­ple.

• Isn’t it usu­ally brought up by peo­ple who want you to re­ject it as a premise, as an ar­gu­ment against he­do­nic pos­i­tive util­i­tar­i­anism?

Personally I do disagree with that premise and more generally with hedonic utilitarianism. My utility function is more like “choice” or “freedom” (an ideal world would be one where everyone can do whatever they want, and in a non-ideal one we should try to optimize to get as close to that as possible), so based on that I have no preference with regard to people who haven’t been born yet, since they’re incapable of choosing whether or not to be alive. (On the other hand, my intuition is that bringing dead people back would be good if it were possible… I suppose that if the dead person didn’t want to die at the moment of death, that would be compatible with my ideas, and I don’t think it’s that far off from my actual, intuitive reasons for feeling that way.)

• But the Repug­nant Con­clu­sion is wrong. Peo­ple who don’t ex­ist have no in­ter­est in ex­ist­ing; they don’t have any in­ter­ests, be­cause they don’t ex­ist. To make the world a bet­ter place means mak­ing it a bet­ter place for peo­ple who already ex­ist. If you add a new per­son to that pool of ‘peo­ple who ex­ist’, then of course mak­ing the world a bet­ter place means mak­ing it a bet­ter place for that per­son as well. But there’s no rea­son to go around adding imag­i­nary ba­bies (as in the ex­am­ple from part one of the linked ar­ti­cle) to that pool for the sake of in­creas­ing to­tal hap­piness. It’s av­er­age hap­piness on a per­sonal level—not to­tal hap­piness—which makes peo­ple happy, and mak­ing peo­ple happy is sort of the whole point of ‘mak­ing the world a bet­ter place’. Or else why bother? To be hon­est, the en­tire Repug­nant Con­clu­sion ar­ti­cle felt a lit­tle silly to me.

1. 400 peo­ple die, with cer­tainty.

2. 90% chance no one dies; 10% chance 500 peo­ple die.

ITYM 1. 100 peo­ple die, with cer­tainty.

• 400 peo­ple die, with cer­tainty.

Should that be 100?

• The prob­lem here is that you don’t KNOW that the prob­a­bil­ity is 90%. What if it’s 80%? or 60%? or 12%? In real life you will only run the ex­per­i­ment once. The prob­a­bil­ities are just a GUESS. The per­son who is mak­ing the guess has no idea what the real prob­a­bil­ities are. And as Mr. Yud­kowsky has pointed out el­se­where, peo­ple con­sis­tently tend to un­der­es­ti­mate the difficulty of a task. They can’t even es­ti­mate with any ac­cu­racy how long it will take them to finish their home­work. If you aren’t in the busi­ness of sav­ing peo­ple’s lives in EXACTLY this same way, on a reg­u­lar ba­sis, the es­ti­mate of 90% is prob­a­bly crap. And so is the es­ti­mate of 100% prob­a­bil­ity of sav­ing 400 lives. All you can re­ally say, is that you see fewer difficul­ties that way, from where you are stand­ing now. It’s a crap shoot, ei­ther way, be­cause, once you get started, no mat­ter which op­tion you choose, difficul­ties you hadn’t an­ti­ci­pated will arise.

This re­minds me of ‘the bridge ex­per­i­ment’, where a test sub­ject is given the op­por­tu­nity to throw a fat per­son off a bridge in front of a train, and thereby save the lives of 5 per­sons trapped on the tracks up ahead. The psy­chol­o­gists be­moaned the lack of ra­tio­nal­ity of the test sub­jects, since most of them wouldn’t throw the fat per­son off the bridge, and thus trade the lives of one per­son, for five. I was like, ‘ARE YOU CRAZY? Do you think one fat per­son would DERAIL A TRAIN? What do you think cow catch­ers are for, fool? What if he BOUNCED a cou­ple of times, and didn’t end up on the rails? It’s pre­pos­ter­ous. The odds are 1000 to 1 against suc­cess. No sane per­son would take that bet.’

The psy­chol­o­gists sup­pos­edly fixed this con­cern by tel­ling the test sub­jects that it was guaran­teed that throw­ing the fat per­son off the bridge would suc­ceed. Didn’t work, be­cause peo­ple STILL wouldn’t buy into their pre­pos­ter­ous plan.

Then the psy­chol­o­gists changed the ex­per­i­ment so that the test sub­ject would just have to throw a switch on the track which would di­vert the train from the track where the five peo­ple were trapped to a track where just one per­son was trapped (still fat by the way). Far more of the test sub­jects said they would flip the switch than had said they would throw some­one off the bridge. The psy­chol­o­gists sug­gested some pre­pos­ter­ous sound­ing rea­son for the differ­ence, I don’t even re­mem­ber what, but it seemed to me that the change was be­cause the plan just seemed a lot more likely to suc­ceed. The test sub­jects DISCOUNTED the as­surances of the psy­chol­o­gists that the ‘throw some­one off the bridge plan’ would suc­ceed. And quite ra­tio­nally too, if you ask me. What ra­tio­nal per­son would rely on the opinion of a psy­chol­o­gist on such a mat­ter?

When the 90%/​500 or 100%/​400 ques­tion was posed, I felt my­self hav­ing ex­actly the same re­ac­tion. I im­me­di­ately felt DUBIOUS that the odds were ac­tu­ally 90%. I im­me­di­ately dis­counted the odds. By quite a bit, in fact. Per­haps that was be­cause of lack of self con­fi­dence, or hard won pes­simism from years of real life ex­pe­rience, but I im­me­di­ately dis­counted the odds. I bet a lot of other peo­ple did too. And I wouldn’t take the bet, for ex­actly that rea­son. I didn’t BELIEVE the odds, as given. I was skep­ti­cal. In­ter­est­ingly enough though, I was less skep­ti­cal of the ‘can’t fail/​100%’ es­ti­mate, than of the 90% es­ti­mate. Maybe I could eas­ily imag­ine a sce­nario where there was no chance of failure at all, but couldn’t eas­ily imag­ine a sce­nario where the odds were, re­li­ably, 90%. Once you start throw­ing around num­bers like 90%, in an im­perfect world, what you’re re­ally say­ing is ‘there is SOME chance of failure’. Es­ti­mat­ing how much chance, would be very much a judge­ment call.

So maybe what you’re look­ing at here isn’t ir­ra­tional­ity, or the in­abil­ity to mul­ti­ply, but rather ra­tio­nal pes­simism about it be­ing as easy as claimed.
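A quick check of the arithmetic behind this skepticism (the 500-life, 400-life, and 90% figures are from the post; the 0.75 discounted estimate is an illustrative guess):

```python
# Expected lives saved by the gamble, versus the certain option.
def expected_saved(p, lives=500):
    return p * lives

certain = 400

# At the stated odds the gamble wins: 0.9 * 500 = 450 > 400.
assert expected_saved(0.9) > certain

# Breakeven probability: p * 500 = 400  =>  p = 0.8.
breakeven = certain / 500
assert breakeven == 0.8

# A skeptic who discounts the stated 90% to below 80% should refuse the gamble.
assert expected_saved(0.75) < certain
```

So the stated 90% only has to be discounted by about ten percentage points before refusing the gamble becomes the expected-value answer as well.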

• You are making the assumption that the feeling caused by having a dust speck in your eye is in the same category as the feeling of being tortured for 50 years.

Would you rather have a googolplex of people drink a glass of water, or have one person tortured for 50 years? Would you rather have a googolplex of people put on their underwear in the morning, or have one person tortured for 50 years? If you put the feeling of a dust speck in the same category as the feelings arising from 50 years of torture, you can put pretty much anything in that category, and you end up preferring one person being tortured for 50 years to almost any physical phenomenon that could happen to a googolplex of people.

And even if it is in the same category? I bet that just having a thought causes some extremely small activity in brain areas related to pain. Multiply that by a large enough number and the total pain value will be greater than the pain value of a person being tortured for 50 years! I would hope that there is no one who would prefer one person being tortured for 50 years to 3^^^3 persons having a thought...

• You are dodg­ing an im­por­tant part of the ques­tion.

The “dust speck” was originally adopted as a convenient label for the smallest imaginable unit of disutility. If I believe that disutility exists at all and that events can be ranked by how much disutility they cause, it seems to follow that there’s some “smallest amount of disutility I’m willing to talk about.” If it’s not a dust speck for you, fine; pick a different example: stubbing your toe, maybe. Or if that’s not bad enough to appear on your radar screen, cutting your toe off. The particular example doesn’t matter.

What­ever par­tic­u­lar small prob­lem you choose, then ask your­self how you com­pare small-prob­lem-to-lots-of-peo­ple with large-prob­lem-to-fewer-peo­ple.

If di­su­til­ities add across peo­ple, then for some num­ber of peo­ple I ar­rive at the coun­ter­in­tu­itive con­clu­sion that 50 years of tor­ture to one per­son is prefer­able to small-prob­lem-to-lots-of-peo­ple. And if I re­sist the temp­ta­tion to flinch, I can ei­ther learn some­thing about my in­tu­itions and how they break down when faced with very large and very small num­bers, or I can en­dorse my in­tu­itions and re­ject the idea that di­su­til­ities add across peo­ple.

• What­ever par­tic­u­lar small prob­lem you choose, then ask your­self how you com­pare small-prob­lem-to-lots-of-peo­ple with large-prob­lem-to-fewer-peo­ple. If di­su­til­ities add across peo­ple, then for some num­ber of peo­ple I ar­rive at the coun­ter­in­tu­itive con­clu­sion that 50 years of tor­ture to one per­son is prefer­able to small-prob­lem-to-lots-of-peo­ple.

It is counterintuitive, and at least for me it’s REALLY counterintuitive. On whether to save 400 people or 500 people with 90% chance, it didn’t take me many seconds to choose the second option, but this feels very different. Now that you put it in terms of units of disutility instead of dust specks it is easier to think about, and on some level it does feel like torture of one person would be the logical choice. And then part of my mind starts screaming that this is wrong.

Thanks for your re­ply though, I’ll have to think about all this.

• I sus­pect it’s re­ally coun­ter­in­tu­itive to most peo­ple. That’s why it gets so much dis­cus­sion, and in par­tic­u­lar why so many peo­ple fight the hy­po­thet­i­cal so hard. The “yeah, that makes sense, but then my brain starts scream­ing” re­ac­tion is pretty com­mon.

And yes, I agree that if we com­pare things that are closer to­gether in scale, our in­tu­itions don’t break down quite so dra­mat­i­cally.

• Altru­ism isn’t the warm fuzzy feel­ing you get from be­ing al­tru­is­tic. If you’re do­ing it for the spiritual benefit, that is noth­ing but self­ish­ness. The pri­mary thing is to help oth­ers, what­ever the means. So shut up and mul­ti­ply!

That’s how you get a warm and fuzzy feel­ing if you’re a con­se­quen­tial­ist. If you’re a de­on­tol­o­gist, you get it by obe­di­ence to the Rule, or of­ten more eas­ily by say­ing “Yay, Rule!”

• Mart­inH:

See the fol­low-up here.

(If a dust speck is zero, you could sub­sti­tute “stubbed toe”.)

In­ci­den­tally, my own an­swer to the tor­ture vs. dust specks ques­tion was to bite the other bul­let and say that, given any two differ­ent in­ten­si­ties of suffer­ing, there is a suffi­ciently long finite du­ra­tion of the greater in­ten­sity such that I’d pick a Nearly In­finite du­ra­tion of the lesser de­gree over it. In other words, yeah, I’d con­demn a Nearly In­finite num­ber of peo­ple to 50 years of slightly less bad tor­ture to spare a large enough finite group from 50 years of slightly worse tor­ture.

In real life, I con­sider my­self lucky that ques­tions like that one are only hy­po­thet­i­cal.

• Unknown: I didn’t deny that they’re comparable, at least in the brute sense of my being able to express a preference. But I did deny that any number of distributed dust specks can ever add up to torture. And the reason I give for that denial is just that distributive problem. (Well, there are other reasons too, but one thing at a time.)

• Paul : “Slap­ping each of 100 peo­ple once each is not the same as slap­ping one per­son 100 times.”

This is ab­solutely true. But no one has said that these two things are equal. The point is that it is pos­si­ble to as­sign each case a value, and these val­ues are com­pa­rable: ei­ther you pre­fer to slap each of 100 peo­ple once, or you pre­fer to slap one per­son 100 times. And once you be­gin as­sign­ing prefer­ences, in the end you must ad­mit that the dust specks, dis­tributed over mul­ti­ple peo­ple, are prefer­able to the tor­ture in one in­di­vi­d­ual. Your only al­ter­na­tives to this will be to con­tra­dict your own prefer­ences, or to ad­mit to some ab­surd prefer­ence such as “I would rather tor­ture a mil­lion peo­ple for 49 years than one per­son for 50.”

• Ben and Mitchell: the prob­lem is that “mean­ingless in­con­ve­nience” and “agony” do not seem to have a com­mon bound­ary. But this is only be­cause there could be many tran­si­tional stages such as “fairly in­con­ve­nient” and “se­ri­ously in­con­ve­nient,” and so on. But sooner or later, you must come to stages which have a com­mon bound­ary. Then the prob­lem I men­tioned will arise: in or­der to main­tain your po­si­tion, you will be forced to main­tain that pain of a cer­tain de­gree, suffered by any num­ber of peo­ple and for any length of time, is worse than a very slightly greater pain suffered by a sin­gle per­son for a very short time. This may not be log­i­cally in­co­her­ent but at least it is not very rea­son­able.

I say “a very slightly greater pain” be­cause it is in­deed ev­i­dent that we ex­pe­rience pain as some­thing like a con­tinuum, where it is always pos­si­ble for it to slowly in­crease or de­crease. Even though it is pos­si­ble for it to in­crease or de­crease by a large amount sud­denly, there is no ne­ces­sity for this to hap­pen.

• Wrong, anon. If there are ob­jec­tive means by which eth­i­cal sys­tems can be eval­u­ated, there can be both bet­ter and right an­swers.

• Caledonian, of course that cannot be demonstrated. But who needs a demonstration? Larry D’anna said, “A googolplex of dusty eyes has the same tiny negative utility as one dusty eye as far as I’m concerned.” If this is the case, do a billion deaths have the same negative utility as one death?

To put it an­other way, ev­ery­one knows that harms are ad­di­tive.

• The as­sump­tion that harms are ad­di­tive is a key part of the demon­stra­tion that harm/​benefit calcu­la­tions can be ra­tio­nal.

So, has it been demon­strated that one can­not be ra­tio­nal with­out mak­ing that as­sump­tion?

• Ben: the poll sce­nario might per­suade me if all the peo­ple ac­tu­ally be­lieved that the situ­a­tion with the dust specks, as a whole, were bet­ter than the tor­ture situ­a­tion. But this isn’t the case, or we couldn’t be hav­ing this dis­cus­sion. Each per­son merely thinks that he wouldn’t mind suffer­ing a speck as an in­di­vi­d­ual in or­der to save some­one from tor­ture.

As for a speck reg­is­ter­ing zero on your tor­ture scale: what about be­ing tied down with your eyes taped open, and then a hand­ful of sand thrown in your face? Does that reg­ister zero too? The point would be to take the min­i­mum which is on the same scale, and pro­ceed from there.

As for me, I don’t have any spe­cific tor­ture scale. I do have a pain scale, and dust specks and tor­ture are both on it.

• It looks like there was an in­fer­en­tial dis­tance prob­lem re­sult­ing from the fact that many ei­ther haven’t read or don’t re­mem­ber the origi­nal tor­ture vs dust specks post. Eliezer may have to ex­plain the cir­cu­lar­ity prob­lem in more de­tail.

• Save 400 lives, with cer­tainty. Save 500 lives, with 90% prob­a­bil­ity; save no lives, 10% prob­a­bil­ity.

I’m surprised how few people are reacting to the implausibility of this thought experiment. When not in statistics class, God rarely gives out priors. Probabilities other than 0+epsilon and 1−epsilon tend to come from human scholarship, which is an often imperfect process. It is hard to imagine a non-contrived situation where you would have as much confidence in the 90/10 outcome as in the certain outcome.

Suppose the “90/10” figure comes from cure rates in a study of 20-year-old men, but your 500 patients are mostly middle-aged. You have the choice of disarming a bomb that will kill 400 people with probability 1−epsilon, or of taking that “90/10” estimate really, really seriously; I know which choice I would make.

• I’m bet­ting 10 cred­i­bil­ity units on Yud­kowsky pub­li­cly ad­mit­ting that he was wrong on this one.

• It will be in­ter­est­ing to see if this is one of the mis­takes Eliezer quietly re­tracts, or one of the mis­takes that he in­sists upon mak­ing over and over no mat­ter what the crit­i­cism.

• GreedyAl­gorithm, this is the con­ver­sa­tion I want to have.

The sen­tence in your ar­gu­ment that I can­not swal­low is this one: “No­tice that if you have in­co­her­ent prefer­ences, af­ter a while, you ex­pect your util­ity to be lower than if you do not have in­co­her­ent prefer­ences.” This is cir­cu­lar, is it not?

You want to es­tab­lish that any de­ci­sion, x, should be made in ac­cor­dance w/​ max­i­mum ex­pected util­ity the­ory (“shut up and calcu­late”). You ask me to con­sider X = {x_i}, the set of many de­ci­sions over my life (“af­ter a while”). You say that the ex­pected value of U(X) is only max­i­mized when the ex­pected value of U(x_i) is max­i­mized for each i. True enough. But why should I want to max­i­mize the ex­pected value of U(X)? That re­quires ev­ery bit as much (and per­haps the same) jus­tifi­ca­tion as max­i­miz­ing the ex­pected value of U(x_i) for each i, which is what you sought to es­tab­lish.

• This whole argument only washes if you assume that things work “normally” (e.g. like they do in the real field, i.e. are subject to the axioms that make addition/subtraction/calculus work). In fact we know that utility doesn’t behave normally when considering multiple agents (as proved by Arrow’s impossibility theorem), so the “correct” answer is that we can’t have a true Pareto-optimal solution to the eye-dust-vs-torture problem. There is no reason why you couldn’t construct a ring/field/group for utility which produced some of the solutions the OP dismisses, and in fact IMO those would be better representations of human utility than a straight normal interpretation.

• I am truly con­fused. This post does not en­dorse ei­ther side.

I just would like to note something about my cognitive process here: in the “step by step” argument, what I seem to be thinking is “rigorously the same torture” and “for more people”. The argument may be sound, but it does not seem to be hitting my brain in a sound way.

• “Most peo­ple choose op­tion 1.” I find this hard to be­lieve. Were they forced to an­swer quickly or un­der cog­ni­tive load, and with­out ac­cess to a calcu­la­tor or pen and pa­per? I would ap­pre­ci­ate it if you could edit the post to provide a cita­tion.

• This threshold thing is interesting. Just to make the idea itself solid, imagine this. You have a type of iron bar that bends completely elastically (no deformation) if a force of less than 100 N is applied to it. Say the bars are more valuable if they have no such deformations. Would you apply 90 N to 5 billion bars, or 110 N to one bar?

With this thought ex­per­i­ment, I reckon the idea is solid­ified and ob­vi­ous, yes? The ques­tion that still re­mains, then, is whether dust specks in eyes is or is not af­fected by some thresh­old.

Though I sup­pose the is­sue could ac­tu­ally be dropped com­pletely, if we now agree that the idea of thresh­old is real. If there is a thresh­old and some­thing is be­low that thresh­old, then the util­ity of do­ing it is in­deed zero, re­gard­less of how many times you do it. If some­thing is above the thresh­old, shut up (or don’t) and mul­ti­ply.
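A minimal sketch of the threshold idea, assuming a simple 0-or-1 disutility per bar (the 100 N elastic limit is from the example above; the unit disutility value is illustrative):

```python
# Threshold disutility: forces below the elastic limit leave no deformation,
# so they contribute zero disutility no matter how many bars are stressed.
ELASTIC_LIMIT_N = 100.0  # newtons, from the iron-bar example

def deformation_disutility(force_newtons, n_bars=1):
    per_bar = 0.0 if force_newtons < ELASTIC_LIMIT_N else 1.0
    return per_bar * n_bars

# 90 N applied to 5 billion bars: total disutility is still zero.
assert deformation_disutility(90, 5_000_000_000) == 0.0

# 110 N applied to a single bar: positive disutility.
assert deformation_disutility(110, 1) == 1.0
```

If dust specks fall below such a threshold, multiplying them by any number leaves the total at zero; if they fall above it, the multiplication goes through as usual.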

• In the torture vs dust specks comparison, it is important not to discard the disutilities of unfairness, nor of moral hazards. One cannot publicly acknowledge the superiority of “one guy tortured” vs. “lots of people mildly inconvenienced” without others, including potentially our politicians, or enemies, deciding that this supports their use of actual torture on actual people “for the greater good”. Accepting torture has a negative utility for many people.

Also we hu­mans value fair­ness, and pre­fer that things be evenly dis­tributed (fair­ness has pos­i­tive util­ity). The di­su­til­ity of even a tiny frac­tion of those peo­ple know­ing that some­one was tor­tured so as to spare them from dust specks, when added to­gether, would prob­a­bly ex­ceed that of the per­son be­ing tor­tured. The dan­ger of “shut up and mul­ti­ply” is that some­one might be mul­ti­ply­ing the wrong things.

Re­ject­ing the prin­ci­ple that we can’t, in gen­eral, sac­ri­fice one per­son for the good of many, also has di­su­til­ity. If we were to ac­cept tor­tur­ing some­one to pre­vent a lot of dust specks, imag­ine how much time would have to be spent ar­gu­ing whether we can take away this other guy’s prop­erty for some greater good (which might fail to de­liver, and might have been sug­gested for some­one’s self-in­ter­est).

• It would be a very different kind of evaluation, but the importance would matter differently if it were the /last/ 500 humans we were talking about, and there was a 90% chance that all would live and a 10% chance that all would die on one pathway, versus a guaranteed 100 dying on the other pathway. But since they are just /some group/ of 500 humans, with presumably other groups in other places, it is worth the investment: gambling in this way pays out in fewer lives lost, on average.


How many units of phys­i­cal pain per speck? How many units of per­ceived mis­treat­ment per speck? How many in­stances of ba­sic hu­man rights in­fringe­ments per speck?

I don’t have a ter­mi­nal value as fol­low­ing: “Peo­ple should never ex­pe­rience an in­finites­i­mally ir­ri­tat­ing dust speck hit­ting their eye”

I do have a ter­mi­nal value as fol­low­ing: “Peo­ple should never go through tor­ture”

So in this case we can make an­other calcu­la­tion which is: Googol­plex of in­stances that are com­pat­i­ble with my ter­mi­nal val­ues vs a sin­gle event that is not com­pat­i­ble with my ter­mi­nal val­ues.

I’d like to add however that if these events are causally connected then dust specks would become the obvious choice. I’m sure there’s a certain probability of getting into a car accident due to blinking, etc.; there are lots of other ways to make essentially the same argument. Anyway, that aspect was not emphasized in either post, so I take it was not intended either.

If the initial option 1 was written as “Save 400 lives with certainty; 100 people die with certainty” it would be less misleading. Because if you interpret option 1 as no one dying, it actually is the correct choice, although it later becomes clear anyway.

• Be­cause if you in­ter­pret the op­tion 1 as no one dying

Such a read­ing would, frankly, be at the very least ex­tremely care­less.

When the jux­ta­po­si­tion is be­tween sav­ing 400 lives or sav­ing 500 lives, it’s ob­vi­ous that an ad­di­tional 100 peo­ple are NOT be­ing saved in the first sce­nario.

• I don’t have a ter­mi­nal value as fol­low­ing: “Peo­ple should never ex­pe­rience an in­finites­i­mally ir­ri­tat­ing dust speck hit­ting their eye” I do have a ter­mi­nal value as fol­low­ing: “Peo­ple should never go through tor­ture”

Are you sure you can iden­tify your ter­mi­nal val­ues as well as that? Most peo­ple can’t.

If so, can you please give a full list of your ter­mi­nal val­ues, or as full such a list as you can make it? Thanks in ad­vance.

• To identify a single value does not require you to identify all your values, which your sardonic comment seems to suggest. I chose that phrasing because it was plausible. In creating this example of terminal values I did not want to get into a full analysis of what’s wrong here; I merely intended to point out that torture is not an obvious option, and to do that with a compact reply. The post seems to suggest convertibility between dust specks and torture; if you can come up with a couple of ways to weigh the situation where convertibility does not follow, it becomes a trivial issue to keep listing. That, in my opinion, is sufficient to conclude that there is no obvious, ultimately, absolutely correct right answer, and that you should proceed with care instead of shrugging and giving the torture verdict. Most of the actual problems with the dilemma do not stem from the number googolplex, but rather from this being a hypothetical setup which seems to eliminate consequences, and consequences are usually a big part of what people perceive as right and wrong. However, you can argue that when examining consequences you will eventually hit some kind of terminal values. So there you have it.

• If you don’t con­sider this par­tic­u­lar one type of di­su­til­ity (dust speck) con­vert­ible into the other (tor­ture), the stan­dard fol­low-up ar­gu­ment is to ask you to iden­tify the small­est kind of di­su­til­ity that might nonethe­less be some­how con­vert­ible.

The typ­i­cal list of ex­am­ples in­clude “a year’s un­just im­pris­on­ment”, “a bro­ken leg”, “a split­ting mi­graine”, “a di­ar­rhea”, “an an­noy­ing hic­cup”, “a pa­per­cut”, “a stubbed toe”.

Would any of these, if tak­ing the place of the “dust speck”, change your po­si­tion so that it’s now in favour of prefer­ring to avert 3^^^3 rep­e­ti­tions of the lesser di­su­til­ity, rather than avert the sin­gle per­son from be­ing tor­tured?

E.g. is it bet­ter to save a sin­gle per­son from be­ing tor­tured for 50 years, or bet­ter to save 3^^^3 peo­ple from suffer­ing a year’s un­just im­pris­on­ment?

• As you can see from both of my above com­ments it’s not the math­e­mat­i­cal as­pect that’s prob­le­matic. You choos­ing the word “di­su­til­ity” means you’ve already ac­cepted these units as con­vert­ible to a sin­gle cur­rency.

• In what man­ner would you pre­fer some­one to de­cide such dilem­mas? Ar­gu­ing that the var­i­ous suffer­ings might not be con­vert­ible at all is more of an ad­di­tional prob­lem, not a solu­tion—not an al­gorithm that in­di­cates how a per­son or an AI should de­cide.

I don’t ex­pect that you think that an AI should ex­plode in such a dilemma, nor that it should pre­fer to save nei­ther po­ten­tial tor­ture vic­tim nor po­ten­tial dust­specked mul­ti­tudes....


I reject the idea that human suffering is a linear function. Once you accept such an idea, it’s not too difficult to say that to avoid minor inconveniences for a sufficiently large number of people we should torture one person for his whole life.

Here is a ques­tion to demon­strate:

Two peo­ple are sched­uled to be tor­tured for no rea­son you know. One is to be tor­tured for two days, the other for three. You know that sub­jects are equally re­sis­tant to tor­ture.

You have the choice:

• Re­duce the time one sub­ject is tor­tured from 3 to 2 days. Other is tor­tured for 2d.

• Re­duce the time one sub­ject is tor­tured from 2 to 1 day. Other is tor­tured for 3d.

Which choice would you make?

I certainly hope that most so-called utilitarians can see the importance of that choice. To grow infatuated with a small cold flame is to blind yourself to the possibility of being wrong in your estimates. Before you say that torture is preferable to dust specks, ask yourself how human suffering scales from specks to torture.

• I reject the idea that human suffering is a linear function. Once you accept such an idea, it’s not too difficult to say that to avoid minor inconveniences for a sufficiently large number of people we should torture one person for his whole life.

To re­ject the sec­ond con­cept, you don’t just need to re­ject the idea of hu­man suffer­ing as a lin­ear func­tion, you need to re­ject the idea of hu­man suffer­ing as quan­tifi­able at all—whether it can be ex­pressed as a lin­ear or ex­po­nen­tial or any other kind of func­tion.

Here is a ques­tion to demon­strate:

Two people are scheduled to be tortured for no reason you know. One is to be tortured for two days, the other for three. You know that subjects are equally resistant to torture. You have the choice: reduce the time one subject is tortured from 3 to 2 days (the other is tortured for 2 days), or reduce the time one subject is tortured from 2 to 1 day (the other is tortured for 3 days). Which choice would you make?

I don’t actually understand the purpose of this question. For me to answer it correctly I’d have to know much more about the lasting effects of torture over 1, 2 or 3 days respectively, to figure out whether “1 day of torture + 3 days of torture” is greater or less disutility than “2 times 2 days of torture”.

None of the util­i­tar­i­ans here, as far as I know, has ever ar­gued that you can just mul­ti­ply the times­pan of tor­ture to find the di­su­til­ity thereof. I think your ar­gu­ment is not ac­tu­ally at­tack­ing any­thing we rec­og­nize as util­i­tar­i­anism.

Before you say that torture is preferable to dust specks, ask yourself how human suffering scales from specks to torture.

I attempted such a scaling in a thread on a different forum. For me it went a bit like this:
20,000 dust specks worse than a single papercut
50 papercuts worse than a 5-minute hiccup
20 hiccups worse than a half-hour headache
50 half-hour headaches worse than an evening of diarrhea
400 diarrheas worse than a broken leg
30 broken legs worse than a month of unjust imprisonment
200 unjust imprisonments worse than a month of torture
100 torture-months worse than a torture-year
10 torture-years worse than torture of 5 years
10 torture-5-years worse than torture of 20 years
5 torture-20-years worse than torture of 50 years

That took me to 120,000,000,000,000,000,000 dust specks (one speck per person) being worse than 50 years of torture. So basically, specks equivalent to the population of 20 billion Earths.

The number still seems a bit too small to me, and I’d currently probably revise upwards some of the steps above (e.g. the factor between diarrheas and broken legs, and perhaps a few other figures there).
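The chain above does multiply out to the stated total; a quick check (factors copied from the list, with the comparisons paraphrased in the comments):

```python
# Each factor converts one unit of the lesser harm into the next harm up,
# following the comment's chain from dust specks to 50 years of torture.
factors = [
    20_000,  # dust specks per papercut
    50,      # papercuts per 5-minute hiccup
    20,      # hiccups per half-hour headache
    50,      # headaches per evening of diarrhea
    400,     # diarrheas per broken leg
    30,      # broken legs per month of imprisonment
    200,     # imprisonment-months per torture-month
    100,     # torture-months per torture-year
    10,      # torture-years per 5 torture-years
    10,      # 5-year stretches per 20 torture-years
    5,       # 20-year stretches per 50 torture-years
]

total_specks = 1
for f in factors:
    total_specks *= f

assert total_specks == 120_000_000_000_000_000_000  # 1.2e20, as stated
```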

• Ah. But you see, I don’t think one can get away with clas­sify­ing any phys­i­cal part of our uni­verse as prin­ci­pally un­quan­tifi­able (Ev­i­dence leads me to be­lieve that mea­sure­ments of pain or dam­age are pos­si­ble for ex­am­ple, though they are not ideally ac­cu­rate). And I do not ar­gue that no judge­ments in favour of mod­er­ate in­di­vi­d­ual dam­age vs. huge spread dam­age will be jus­tified. Just that in spe­cific case of dust­specks ver­sus tor­ture I don’t think most of us should choose tor­ture.

I don’t actually understand the purpose of this question. For me to answer it correctly I’d have to know much more about the lasting effects of torture over 1, 2 or 3 days respectively, to figure out whether “1 day of torture + 3 days of torture” is greater or less disutility than “2 times 2 days of torture”.

The thing about that choice is that one does not have overwhelming evidence available. What would be your best estimate of the disutility function, given the evidence you currently possess? If you personally had to make such a choice, you would be forced to consider at least some disutility function, or admit that you are making a judgement without taking into account the well-being of the subjects. The whole idea behind that question is to point out that some kind of utility function is required, and it is that function that ultimately determines your answer.

As to the cor­rect an­swer, I don’t see how could any­one ever give a perfectly cor­rect one (as there is no way to know what spe­cific effect tor­ture will have on each of the sub­jects in ad­vance, though in fu­ture I ex­pect we will be able to give pretty good es­ti­mates), but if I was forced to make such a choice, I would definitely try to take the op­tion with the least to­tal dam­age to sub­jects. And I do not cur­rently think that 3 days of tor­ture would be less dam­ag­ing than two and that the sec­ond day would do more or equal harm com­pared to the third.

None of the util­i­tar­i­ans here, as far as I know, has ever ar­gued that you can just mul­ti­ply the times­pan of tor­ture to find the di­su­til­ity thereof. I think your ar­gu­ment is not ac­tu­ally at­tack­ing any­thing we rec­og­nize as util­i­tar­i­anism.

I sure hope so. I expect that my meaning was lost in the “so called” part. I am just terrified that some people will be tempted to don the robes of utilitarianism and argue in favour of oppressing some small groups for the benefit of society at large. We may not favour such arguments in politics right now, but are you sure that in the future such a flawed call to pragmatism will not become a danger? I’m even more terrified by the fact that Eliezer posted this here without several explicit warnings about such a danger. Just as knowing about biases can be dangerous, this example is potentially lethal.

So the point I wanted to make is that be­fore con­sid­er­ing a choice in such a dilemma one has to care­fully ex­am­ine his (dis)util­ity func­tions, not just shut up and mul­ti­ply.

• I am just terrified that some people will be tempted to don the robes of utilitarianism and argue in favour of oppressing some small groups for the benefit of society at large.

Oppression of minorities can happen both via consequentialist claims (e.g. Stalin) and via deontological claims (e.g. Islamists). But either way, such societies have proved themselves tyrannical for pretty much everyone, not just the most oppressed minorities. So in those cases it’s not “oppressing the few for the benefit of the many”; it ends up being “oppressing the many for the benefit of the few”. Likewise with slaveholding societies, etc.

A better example of oppressing the few for the benefit of the many is how our own (modern, Western) societies lock away criminals. We oppress prisoners (and frankly, in my view, we oppress them too much and too needlessly) for the supposed benefit of the whole society. You may argue that we oppress them because they deserve it and NOT for the benefit of society—but would you consider the existence of prisons justifiable if it were done just because the prisoners deserve it, and not because society as a whole also benefited by such locking away of prisoners? I, for one, would not.

If you argue that we only lock away the people we believe guilty—that’s not true either. We detain suspects as well (and therefore oppress them in this manner) before their guilt is determined. And we fully expect that at least some of them will be found not guilty. We currently accept this oppression of the (relatively few) innocent for the benefit of the society as a whole.

• Indeed. But prisons can be justified from a pragmatic point of view. Certainly we detain these people for the benefit of the many, but we do not torture them, and lately there is a trend to give them more opportunities to work and create. I should note here that I absolutely abhor the death penalty, so let’s not go off on that tangent.

As for Stalin, I am ashamed to admit that I cannot remember him actually using an appeal to pragmatism to convince someone. Perhaps it was more like giving the audience a safe route out of challenging him after he had made the decision alone. As in: his argument sounds vaguely convincing, so I don’t have to feel guilty about avoiding being the first to dissent and going to the GULag. Can you see how much more convenient it may be for a dictator to give such a line of retreat? Usually dictators are not in a position where there is a real need to convince people of something by arguing, as far as I can see.

What I have in mind is a situation sometime in the not-too-distant future, when appeals to pragmatism will become more common in politics, but the general population is not yet ready to spot skewed utility functions in such a dilemma. So it will indeed become possible to convince the majority of the population to willingly cooperate and oppress the few. Most people really don’t require that much convincing when it comes to a “certain inconvenience for me vs. distant suffering for some strangers that I will probably never see” type of choice anyway. And when such a Master of rationality as Eliezer himself argued in favour of torturing some hapless chap for 50 years just so many people would be spared the inconvenience of blinking once, you can see where this will go. I’m just afraid that while Eliezer tries to instil the certainly useful principle of “shut up and multiply”, he may well be setting some people up to prefer “the many” side of such a dilemma. One cannot be too cautious when teaching aspiring rationalists.

• But pris­ons can be jus­tified from a prag­matic point of view.

What’s the differ­ence be­tween the “prag­matic point of view” (which it seems you jus­tify) and the “benefit of the many” (which I un­der­stand you don’t jus­tify)? This seems to me a mean­ingless dis­tinc­tion.

Cer­tainly we de­tain these peo­ple for the benefit of the many, but we do not tor­ture them

Well, most people don’t perceive enough benefit for society in hurting prisoners more than they currently are being hurt. So that’s rather beside the point, isn’t it? The point is that we detain and oppress the few for the benefit of the many.

and lately there is a trend to give them more op­por­tu­ni­ties to work and cre­ate.

Even assuming I accept that such a trend exists (I’m not sure about it), again we don’t consider such opportunities to be against the benefit of the many. So it’s beside the point.

So it will in­deed be­come pos­si­ble to con­vince the ma­jor­ity of pop­u­la­tion to will­ingly co­op­er­ate and op­press the few.

As I already said, we already cooperate in order to oppress the few. We call those few “prisoners”, whom we’re oppressing for the benefit of the many.

And when such a Master of ra­tio­nal­ity as Eliezer him­self ar­gued in favour of tor­tur­ing some hap­less chap for 50 years just so many peo­ple would be spared an in­con­ve­nience of blink­ing once, you can see where this will go.

No, I’m sorry, but I re­ally REALLY don’t see where it’s sup­posed to be go­ing. In the cur­rent world peo­ple are tor­tured to death for much less rea­son than that. Not even for the small benefit of 3^^^^3 peo­ple, but for no benefit or even for nega­tive benefit.

I’d rather ar­gue with some­one about tor­ture on the terms of ex­pected util­ity and di­su­til­ity for the whole of hu­man­ity, rather than with some­one who just re­peats the mantra “If you op­pose tor­ture, then you’re just a ter­ror­ist-lover who hates our free­doms” or for that mat­ter the op­po­site “If you sup­port tor­ture for any rea­son what­so­ever, even in ex­treme hy­po­thet­i­cal sce­nar­ios, you’re just as bad as the ter­ror­ists”.

And cur­rently it’s the lat­ter prac­tice that seems dom­i­nant in ac­tual dis­cus­sions (and defenses also) of tor­ture, not any util­i­tar­ian tac­tic of as­sign­ing util­ities to ex­pected out­comes.

• What’s the differ­ence be­tween the “prag­matic point of view” (which it seems you jus­tify) and the “benefit of the many” (which I un­der­stand you don’t jus­tify)? This seems to me a mean­ingless dis­tinc­tion.

It seems that way be­cause it is that way. I sim­ply failed to com­mu­ni­cate my idea prop­erly. In fact I men­tioned that

I do not argue that no judgements in favour of moderate individual damage vs. huge spread damage will be justified. Just that in the specific case of dust specks versus torture I don’t think most of us should choose torture.

What I truly want is not to dismiss the “benefit of the many” (nothing wrong with it), but to bring into focus the issue of comparing utility functions, which in this case I think Eliezer messed up.

As I already said, we already cooperate in order to oppress the few. We call those few “prisoners”, whom we’re oppressing for the benefit of the many.

Yes, we do. And it seems that we both prefer to actually talk about such decisions in terms of utility gain or loss. But just because the two of us are being reasonable does not mean that everyone else will be. What worries me is that some people learning about “the Way” from Eliezer’s post may acquire a bit of bias toward “the many” side of such dilemmas. And then, when the issue arises in the future, they will choose the wrong side and perhaps convince many others to take the wrong side.

No, I’m sorry, but I re­ally REALLY don’t see where it’s sup­posed to be go­ing. In the cur­rent world peo­ple are tor­tured to death for much less rea­son than that. Not even for the small benefit of 3^^^^3 peo­ple, but for no benefit or even for nega­tive benefit.

Now this is not certain, but I expect Eliezer to have a huge impact on the future of our species, because issues of thinking and deciding are indeed central to our daily lives. And any inadvertent mistake here or in his book will have noticeable consequences. Someone in the future will take out that book and point to how Eliezer prefers to condemn one person to torture instead of having 3^^^^3 people blink, and the audience may well be convinced that it is better in general to prefer “the many”, because Eliezer will be an authority and their brains will just dump 3^^^^3 into the “many” mental bucket. Better to introduce a few cautionary lines into that post and book now, while there is time to do it.

I’d rather ar­gue with some­one about tor­ture on the terms of ex­pected util­ity and di­su­til­ity for the whole of hu­man­ity, rather than with some­one who just re­peats the mantra “If you op­pose tor­ture, then you’re just a ter­ror­ist-lover who hates our free­doms” or for that mat­ter the op­po­site “If you sup­port tor­ture for any rea­son what­so­ever, even in ex­treme hy­po­thet­i­cal sce­nar­ios, you’re just as bad as the ter­ror­ists”.

So would I. I am not try­ing to ar­gue with you here. As far as I can see we agree on pretty much ev­ery­thing so far. I prob­a­bly just fail to con­vey my ideas most of the time.

• Here’s an­other word prob­lem for you.

There is a dis­ease—painful, but not usu­ally life threat­en­ing—that is rapidly be­com­ing a pan­demic. Med­i­cal sci­ence is not go­ing to be able to cure the dis­ease for the next sev­eral decades, which means that many mil­lions of peo­ple will have to en­dure it, and a few dozen will prob­a­bly die. You can find a cure for the dis­ease, but to do so you’ll have to perform ag­o­niz­ing, ul­ti­mately lethal, ex­per­i­ments on a young and healthy hu­man sub­ject.

Do you do it?

• I note the answer to this seems particularly straightforward if the few dozen who would probably die would also have been young and healthy at the time. Even more convenient if the subject is a volunteer, and/or if the experimenter (possibly with a staff of non-sentient robot record-keepers and essay-compilers, rather than humans?) did the experiments on himself/herself/themself(?).

(I per­son­ally have an ex­tremely strong de­sire to sur­vive eter­nally, but I un­der­stand there are (/​have his­tor­i­cally been) peo­ple who would will­ingly risk death or even die for cer­tain in or­der to save oth­ers. Per­haps if sac­ri­fic­ing my­self was the only way to save my sister, say, though that’s a some­what un­fair situ­a­tion to sug­gest as rele­vant. Again, tempt­ing to just use a less-ego­cen­tric vol­un­teer in­stead if available.) (Re­sults-based rea­son­ing, rather than ideal­is­tic/​cau­tious ac­tion-based rea­son­ing. Par­tic­u­larly given pub­lic back­lash, I can un­der­stand why a gov­ern­men­tal body would choose to keep its hands as clean as pos­si­ble in­stead and al­low a mas­sive tragedy rather than stain­ing their hands with a sin. Hmm.)

• Per­haps if sac­ri­fic­ing my­self was the only way to save my sister, say, though that’s a some­what un­fair situ­a­tion to sug­gest as rele­vant.

As­sume the least-con­ve­nient pos­si­ble world. It’s not like this one is fair ei­ther...

• In­deed. nods

If sac­ri­fice of my­self was nec­es­sary to (hope to?) save the per­son men­tioned, I hope that I would {be con­sis­tent with my cur­rent per­cep­tion of my likely ac­tions} and go through with it, though I do not claim com­plete cer­tainty of my ac­tions.

If those that would die from the hy­po­thet­i­cal dis­ease were the soon-to-die-any­way (very el­derly/​in­firm), I would likely choose to spend my time on more sig­nifi­cant ar­eas of re­search (life ex­ten­sion, more-fatal/​-painful dis­eases).

If all other sig­nifi­cant ar­eas had been dealt with or were be­ing ad­e­quately dealt with, per­haps ren­der­ing the dis­ease the only re­main­ing ail­ment that hu­man­ity suffered from, I might carry out the re­search for the sake of com­plete­ness. I might also wait a few decades de­pend­ing on whether or not it would be fixed even with­out do­ing so.

A problem here is that the more inconvenient I make one decision, the more convenient I make the other. If I jump ahead to hypothetical cases where the choices were completely balanced either way, I might just flip a coin, since I presumably wouldn’t care which one I took.

Then again, the stacking could be chosen such that no matter which I took it would be emotionally devastating… though that, conveniently (hah), comprises such a slim fraction of all possibilities that I gain by assuming there will always be some aspect that would make a difference or which could be exploited in some way: if there were, then I could find it and make a sound decision, and if there weren’t, my position would not in fact change (by the nature of the balanced setup).

Step­ping back and con­sid­er­ing the least-con­ve­nient con­sid­er­a­tion ar­gu­ment, I no­tice that its pri­mary func­tion may be to get peo­ple to ac­cept two op­tions as both con­ceiv­able in differ­ent cir­cum­stances, rather than re­ject­ing one on tech­ni­cal­ities. If I already ac­knowl­edge that I would be in­clined to make a differ­ent choice de­pend­ing on differ­ent cir­cum­stances, am I freed from that ap­pli­ca­tion I won­der?

• I don’t know if I understood your circular argument right, but you are basically saying that if 50 years of torture for one person (50yt1) < a dust speck for a googolplex (ds10^10^100), then 50yt1 > 49.9999999yt10^100 > 49.9999998yt10^200 > … > ds10^10^100.

If this is not what you are saying, then I don’t understand your point and ask you to elucidate it. If it is, then I think there is a serious flaw here: in the 50yt1 scenario, someone is suffering, i.e. feeling pain; in the ds10^10^100 scenario, there is mere annoyance. There therefore has to be a point in that sequence where one can consistently argue X10^Y < X′10^2Y, where X is the last “pain” and X′ is the first mere annoyance, thereby interrupting the chain.

I hope this is un­der­stand­able.

EDIT to avoid double post: I think the kind of reasoning you are using is very, very dangerous if you try a gradual transformation between two things that differ in quality, not just quantity. It is clear that the two extremes of the sequence have a different quality, but you are assuming the only thing that changes is quantity.
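The threshold objection above can be made concrete with a toy model. Everything below is invented for illustration: the 0.999 intensity decay, the 10^10 population multiplier, the 49.9 threshold, and the annoyance cap are hypothetical numbers, not anyone’s actual utility function.

```python
# Toy model of the torture-to-dust-speck chain: each step slightly
# reduces harm intensity while multiplying the number of people.
# All constants here are made up for illustration.

def chain(steps=10, intensity=50.0, count=1, factor=10**10):
    """Yield (intensity, count) pairs stepping down the chain."""
    for _ in range(steps):
        yield intensity, count
        intensity *= 0.999   # slightly milder harm...
        count *= factor      # ...spread over vastly more people

# Under pure multiplication (disutility = intensity * count),
# total disutility grows monotonically down the chain:
totals = [i * c for i, c in chain()]
assert all(a < b for a, b in zip(totals, totals[1:]))

# The commenter's objection: if harms below some threshold are a
# different *kind* (annoyance rather than pain), the aggregation
# rule can change mid-chain. Model that with a cap on annoyances:
THRESHOLD = 49.9
def disutility(intensity, count):
    if intensity < THRESHOLD:          # "mere annoyance"
        return min(intensity * count, 1000.0)  # hypothetical cap
    return intensity * count           # "pain" aggregates linearly

vals = [disutility(i, c) for i, c in chain()]
# Once intensity drops below the threshold, totals stop growing,
# interrupting the chain exactly as the comment suggests:
assert vals[-1] == 1000.0
```

Under pure multiplication the chain is monotone, which is the reasoning the comment attributes to the post; a qualitative pain/annoyance threshold changes the aggregation rule mid-chain, which is where the commenter claims the sequence can be broken.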

• Now that I have read (and commented on) the “Savage axiom” thread (http://lesswrong.com/lw/5te/a_summary_of_savages_foundations_for_probability/), I would like to note here on this thread that there are no computable solutions to the Savage axioms.

Now, of course, in this universe, googolplexes are utterly irrelevant. If an AI could harness every Planck volume of space in the observable universe to each perform one computation per Planck time, all the stars would burn out long before it got anywhere close to 2^1024 computations, which is a long way off from a googolplex. So it seems to me “circular altruism” on this level is of absolutely no consequence.

Of course we might still cling to the “thought ex­per­i­ment” as­pect of it. I don’t see why we should, but even if we do, it doesn’t help: ideal ra­tio­nal­ity, in the Sav­age sense, isn’t even com­putable. No AI, even with un­limited time and space to make up its mind, can be ra­tio­nal, in the sense of always choos­ing the course that max­imises some util­ity func­tion with re­spect to some sub­jec­tive prob­a­bil­ity dis­tri­bu­tion in all situ­a­tions. So some­thing still has to give. Of course there are lots of ways to do this. We can be “ra­tio­nal within ep­silon” if you like, but this ep­silon will mat­ter when con­sid­er­ing these googol­plex cir­cu­lar­ity ar­gu­ments. I’m skep­ti­cal that there is any­thing co­her­ent here at all.
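The physical bound gestured at above can be sanity-checked with rough, standard order-of-magnitude estimates. All constants below are approximate, and the star-burnout timescale in particular is a loose assumption.

```python
import math

# Rough order-of-magnitude estimates (approximate, not exact values):
OBSERVABLE_UNIVERSE_VOLUME = 3.5e80   # cubic metres (approx.)
PLANCK_VOLUME = 4.2e-105              # cubic metres (approx.)
PLANCK_TIME = 5.4e-44                 # seconds (approx.)
STAR_BURNOUT = 1e14 * 3.15e7          # ~10^14 years in seconds (loose assumption)

planck_volumes = OBSERVABLE_UNIVERSE_VOLUME / PLANCK_VOLUME
ticks = STAR_BURNOUT / PLANCK_TIME
total_ops = planck_volumes * ticks    # one computation per volume per tick

print(f"total ops ~ 10^{math.log10(total_ops):.0f}")
print(f"total ops ~ 2^{math.log2(total_ops):.0f}")
# Roughly 2^830 under these assumptions: below 2^1024, and utterly
# dwarfed by a googol (10^100), let alone a googolplex (10^(10^100)).
```

Under these assumptions the total comes out around 2^830, comfortably short of 2^1024 and absurdly far from a googolplex, consistent with the comment’s claim.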

• I, like oth­ers, can do the maths just fine, so what? How does it fol­low that cir­cu­lar prefer­ences over very long chains of re­motely pos­si­ble pairs of choices should cause me to doubt strong moral in­tu­ition? Be­cause I would, un­der care­fully con­trived con­di­tions, lose against the allegedly op­ti­mal solu­tion… As for grand­stand­ing, hah! To pre­sume to call this brand of con­se­quen­tial­ism “ra­tio­nal­ity” is already quite rhetor­i­cal. Never mind warm fuzzies, bare swords, flames, and chim­panzees.

• I enjoyed the read, and it makes a lot of sense. However… it leaves me with a..
Hrm.
Well, I’m no mathematician, but between being a monkey that never multiplies and always feels, and being a robot that always multiplies and never feels, I think I’ll stick with being human and do both.

• I’m following a link from Pharyngula, and I don’t have time to read the comments, so my apologies if I’m repeating something.

I think you’re up against the sorites paradox: you are confusing apples and oranges in comparing torture to a dust speck, and there is no practical way to implement the computation you propose.

People who whine about dust specks in their eyes will get an unsympathetic response from me—I care zero about it. People who have been tortured for a minute will have my utmost concern and sympathy. Somewhere along the line, one turns into the other, but in your example, a googolplex of zeroes is still zero.

Tor­ture is qual­i­ta­tively differ­ent from pain, say the pain of a de­bil­i­tat­ing dis­ease. Tor­ture in­volves the in­ten­tional in­flic­tion of suffer­ing for the sake of the suffer­ing, ex­treme loss of con­trol, the ab­sence of sym­pa­thy and em­pa­thy, ex­treme un­cer­tainty about the fu­ture and so on. The men­tal im­pact of tor­ture is qual­i­ta­tively differ­ent from ac­ci­den­tal pain.

Univer­sal in­formed con­sent and shared risk would seem to my moral gut to be nec­es­sary pre­con­di­tions to make me stom­ach this kind of util­i­tar­ian calcu­lus.

So this large pop­u­la­tion that agrees that the oc­ca­sional vic­tim en­hances the over­all util­ity would share the risk of be­com­ing the vic­tim. In that sce­nario, how many peo­ple would ac­cept the life­time tor­ture lot­tery ticket in ex­change for a life­time free of dust motes? Without know­ing the an­swer to this ques­tion, they can’t es­ti­mate their own risk.

• Cale­do­nian, offer­ing an al­ter­na­tive ex­pla­na­tion for the ev­i­dence does not im­ply that it is not ev­i­dence that Eliezer ex­pends some re­sources over­com­ing bias:

Of course it’s not ev­i­dence for that sce­nario. There are al­ter­na­tive and sim­pler ex­pla­na­tions.

If the data does not per­mit us to dis­t­in­guish be­tween A and ~A, it’s not ev­i­dence for ei­ther state.

• Enough with the abstract. It’s difficult to make a valid equation, since dust = x, torture = y, and x ≠ y. So why don’t you just replace dust in the equation with torture: a really small amount of torture, but still torture. Maybe, say, everybody gets a nipple pierced unwillingly.

• TGGP—how about in­ter­nal con­sis­tency? How about for­mal re­quire­ments, if we be­lieve that moral claims should have a cer­tain form by virtue of their be­ing moral claims? Those two have the po­ten­tial to knock out a lot of can­di­dates…

• TGGP, are you fa­mil­iar with the teach­ings of Je­sus?
Yes, I was raised Christian and I’ve read the Gospels. I don’t think they provide an objective standard of morality, just the Jewish Pharisaic tradition filtered through a Hellenistic lens.

Mat­ters of prefer­ence are en­tirely sub­jec­tive, but for any evolved agent they are far from ar­bi­trary, and sub­ject to in­creas­ing agree­ment to the ex­tent that they re­flect in­creas­ingly fun­da­men­tal val­ues in com­mon.
That is rele­vant to what ethics peo­ple may fa­vor, but not to any truth or ob­jec­tive stan­dard. Agree­ment among peo­ple is the re­sult of sub­jec­tive judg­ment.

• Mitchell, my sen­ti­ments ex­actly. Dust caus­ing car crashes isn’t part of the game as set up here—the idea is that you blink it away in­stantly, hence ‘the least bad thing that can hap­pen to you’.

The only stick­ler in the back of my mind is how I am (un­con­sciously?) cat­e­goris­ing such things as ‘in­con­ve­nience’ or ‘agony’. Where does stub­bing my toe sit? How about cut­ting my­self shav­ing? At what point do I switch to 3^^^3(Event) = Tor­ture?

TGGP, are you fa­mil­iar with the teach­ings of Je­sus?

• Cale­do­nian, what is an ob­jec­tive stan­dard by which an eth­i­cal sys­tem can be eval­u­ated?

• I’ve writ­ten and saved a(nother) re­sponse; if you’d be so kind as to ap­prove it?

• Ben, I think you might not have un­der­stood what I was say­ing about the poll. My point was that each in­di­vi­d­ual is sim­ply say­ing that he does not have a prob­lem with suffer­ing a dust speck to save some­one from tor­ture. But the is­sue isn’t whether one in­di­vi­d­ual should suffer a dust speck to save some­one, but whether the whole group should suffer dust specks for this pur­pose. And it isn’t true that the whole group thinks that the whole group should suffer dust specks for this pur­pose. If it were, there wouldn’t be any dis­agree­ment about this ques­tion, since I and oth­ers ar­gu­ing this po­si­tion would pre­sum­ably be among the group. In other words, your ar­gu­ment from hy­po­thet­i­cal au­thor­ity fails be­cause hu­man opinions are not in fact that con­sis­tent.

Suppose that 1 person per googol (out of the 3^^^3 persons) is threatened with 50 years of torture. Should each member of the group accept a dust speck for each person threatened with torture, thereby burying the whole group in dust?

• Eliezer,

What do specks have to do with circularity? Whereas in your last posts you explained that certain groups of decision problems are mathematically equivalent, independent of the actual decision, here you argue for a particular decision. Note that utility is not necessarily linear in the number of people.

• The value placed on items is really what matters, because we don’t value everything the same. The true question is why we value them differently, or whether we are really just miscalculating the expected value. Every equation has to be learned, from 2+2=4 on, and maybe we are just heading up that learning curve.

• The idea of saving someone’s life has great value to the person who did the saving. They are a hero even if it is only one life. The subsequent individuals diminish in the utility they deliver, because being a hero carries such a great return and only requires saving one person versus everyone. People who choose option 1 are either not doing the math or valuing lives differently between individuals because of the effect it has on them.

• “I think peo­ple would be more com­fortable with your con­clu­sion if you had some way to quan­tify it; right now all we have is your as­ser­tion that the math is in the dust speck’s fa­vor.”

The actual tipping point depends on your particular subjective assessment of relative utility, and where it falls doesn’t matter; what matters is that there is a crossover at some point, and therefore such reasoning about preferences, like San Jose --> San Francisco --> Oakland --> San Jose, is incoherent.

• (I should say that I as­sumed that a bag of de­ci­sions is worth as much as the sum of the util­ities of the in­di­vi­d­ual de­ci­sions.)
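The crossover argument above is the standard money-pump objection to circular preferences, and it can be sketched directly. The cities, the fee, and the agent below are purely illustrative.

```python
# Money pump: an agent with circular preferences
# San Jose < San Francisco < Oakland < San Jose
# will pay a small fee for each "upgrade" and end up exactly where
# it started, strictly poorer. Cities and fee are illustrative.

prefers = {  # prefers[a] == b means the agent will pay to swap a for b
    "San Jose": "San Francisco",
    "San Francisco": "Oakland",
    "Oakland": "San Jose",
}

def pump(start, fee=1, cycles=3):
    """Run the agent around the preference cycle a few times."""
    location, wealth = start, 0
    for _ in range(cycles * len(prefers)):
        location = prefers[location]  # each swap is individually "preferred"
        wealth -= fee                 # ...but costs a little each time
    return location, wealth

loc, wealth = pump("San Jose")
assert loc == "San Jose"   # back where it started
assert wealth == -9        # strictly worse off after 3 full cycles
```

Each individual trade looks like an improvement to the agent, yet the cycle returns it to its starting point strictly poorer, which is the sense in which circular preferences are incoherent.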

• “The pri­mary thing is to help oth­ers, what­ever the means. So shut up and mul­ti­ply!”

Would you submit to torture for 50 years to save countless people? I’m not sure I would, but I think I’m more comfortable with the idea of being self-interested and seeing all things through the prism of self-interest.

Similar problem: if you had this choice—you can die peacefully and experience no afterlife, or literally experience hell for 100 years and be rewarded with an eternity of heaven—would you choose the latter? Calculating which provides the greatest utility, the latter would be preferable, but I’m not sure I would choose it.

• Eliezer, can you ex­plain what you mean by say­ing “it’s the same gam­ble”? If the point is to com­pare two op­tions and choose one, then what mat­ters is their val­ues rel­a­tive to each other. So, 400 cer­tain lives saved is bet­ter than a 90% chance of 500 lives saved and 10% chance of 500 deaths, which is it­self bet­ter than 400 cer­tain deaths.

Perhaps it would help to define the parameters more clearly. Do your first two options have an upper limit of 500 deaths (as the second two options seem to), or is there no limit to the number of deaths that may occur apart from the lucky 400-500?
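One way to answer the question: if we assume a fixed population of 500 at stake in every option (which seems to be the post’s intent), the two framings describe the same outcome distribution, differing only in whether outcomes are counted as lives saved or as deaths. A quick check under that assumption:

```python
TOTAL = 500  # assumption: exactly 500 lives are at stake in every option

# Framing 1: lives saved
saved_option1 = 400                    # certain
saved_option2 = 0.9 * 500 + 0.1 * 0   # expected: 450

# Framing 2: deaths (the same gamble, described as losses)
dead_option1 = 100                     # certain: 500 - 400
dead_option2 = 0.9 * 0 + 0.1 * 500    # expected: 50

# Deaths are just TOTAL minus lives saved, option by option:
assert dead_option1 == TOTAL - saved_option1
assert dead_option2 == TOTAL - saved_option2

# In both framings, option 2 has the better expected outcome:
assert saved_option2 > saved_option1
assert dead_option2 < dead_option1
```

Under the fixed-population reading the framings are arithmetically identical, which is the sense in which the post calls them “the same gamble”; if deaths beyond 500 were possible, the framings would indeed come apart, as the comment suspects.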

• I’m sorry, but I find this line of argument not very useful. If I remember correctly (which I may not be doing), a googolplex is larger than the estimated number of atoms in the universe. Nobody has any idea of what it implies except “really, really big”, so when your concepts get up there, people have to do the math, since the numbers mean nothing. Most of us would agree that having a really, really large number of people bothered just a bit is better than having one person suffer for a long life. That has little to do with math and a lot to do with our perception of suffering and a feeling that each of us has only one life. Worrying about discontinuities in this kind of discussion seems almost puerile.

A more interesting discontinuity that we run into quite frequently is our willingness to make great efforts and sacrifices to save the lives of children, and then to decide that at the age of 18 young men cease to be children and we send them off to war. What happens in our brains when young men turn 18? Sure, these 18-year-olds are all testosterone-fired-up and looking for a fight, but the discontinuity of the moral logic is strange. Have you talked about this at all?

(By the way, one of the sad­dest mu­se­ums in the world is in Salta, Ar­gentina, where they dis­play mum­mies of chil­dren who were made drunk and buried al­ive to pla­cate a now long for­got­ten god, but that is get­ting off the point.)

• No­body has any idea of what it im­plies ex­cept “re­ally, re­ally big”, so when your con­cepts get up there, peo­ple have to do the math, since the num­bers mean noth­ing.

This applies just as much to numbers such as a million and a billion, which people mix up regularly; the problem, though, is that people don’t do the math, despite not understanding the magnitudes of the numbers, and those numbers of people actually are around.

Personally, if I first try to visualize a crowd of a hundred people, and then a crowd of a thousand, the second crowd seems about three times as large. If I start with a thousand, and then try a hundred, this time around the hundred-person crowd seems a lot bigger than it did last time. And the bigger the numbers I try with, the worse it gets, and there is a long way to go to get to 7,000,000,000 (the number of people on Earth). All sorts of biases seem to be at work here, anchoring among them. Result: shut up and multiply!

[Edit: Spel­ling]

• This is further evidenced by the fact that most people don’t know about the long and short scales, and never noticed.

• This is an ex­cel­lent point, but your spel­ling er­rors are dis­tract­ing. You said “av” seven times when you meant “a”, and “an­cor­ing” in the last line should be “an­chor­ing”.

• What hap­pens in our brains when young men turn 18?

They’ve prob­a­bly already had sex once by then, and thus a fair chance to pass on their genes. No­tice that we’re not as ea­ger to send 18-year-old women off to war.

• It de­pends on the ac­tual situ­a­tion and my goal.

Imagine I were a ship captain assigned to rescue a viable sample of a culture from a zone that was about to be genocided. I would be very likely to take the 400 peopleweights (including books or whatever else they valued as much as people) of evacuees, unless someone made a convincing case that the extra 100 people were vital cultural or genetic carriers. For definiteness, imagine my ship is rated to carry up to 400 peopleweight worth of passengers in almost any weather, but 500 people would overload it to the point of sinking during a storm of the sort that the weather experts predict is 10 percent probable during the voyage to safe harbor.

Peo­ple are not dol­lars or bales of cot­ton to be sold at mar­ket. You can’t just count heads and mul­ti­ply that num­ber by utilons per head and say “This an­swer is best, any other an­swer is fool­ish.”

Well ob­vi­ously you can do that, but the main re­ward for do­ing so is the feel­ing that you are smarter than the poor dumb fools who be­lieve that the world is com­plex and situ­a­tion de­pen­dent. That is, you can give your­self a sort of warm fuzzy feel­ing of smug su­pe­ri­or­ity by defeat­ing the straw man you con­structed as your fool­ish com­peti­tor in the In­tel­li­gence Sweep­stakes.

That be­ing said, if there re­ally is no other in­for­ma­tion available, I would take the same choice Eliezer recom­mends; I just deny that it is the only non fool­ish choice.

This applies to lottery tickets as well. A slim chance at escaping economic hell might be worth more than its nominal expected return value to a given individual. 100 million dollars might very well have a personal utility over a billion times the value of one dollar, for example, if that person’s deep goals would be facilitated mightily by the big win and not at all by a single dollar or any reasonable number of dollars they might expect to save over the available time. Also, if any entertainment dollar is not a foolish waste, then a dollar spent on a lottery ticket is worth its expected winning value plus its entertainment value, which varies /profoundly/ from person to person.
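The nonlinear-utility point in the preceding paragraph can be sketched numerically. The jackpot odds and the step-shaped utility function below are made-up assumptions, not real lottery figures:

```python
# Illustrative only: a person whose deep goals are transformed by a
# big win. Odds and the utility function are hypothetical.

TICKET = 1.0
JACKPOT = 100_000_000.0
P_WIN = 1.0 / 300_000_000   # deliberately worse-than-fair odds

def utility(dollars):
    # Step utility: life-changing money is worth vastly more per
    # dollar than pocket change (the "deep goals" case above).
    return 1e10 if dollars >= JACKPOT else dollars

# Expected monetary value: negative, so linearly it's a "bad bet".
ev_money = P_WIN * JACKPOT - TICKET

# Simplified expected-utility change from buying one ticket
# (treating the ticket price as a direct utility cost):
eu_ticket = P_WIN * utility(JACKPOT) - utility(TICKET)

assert ev_money < 0    # loses money on average...
assert eu_ticket > 0   # ...yet gains utility for this person
```

With linear utility the ticket is a losing bet; with this person’s step-shaped utility, the same ticket has positive expected utility, which is the comment’s point about deep goals being facilitated only by a big win.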

I myself prefer to give people $1 lottery tickets instead of $2.95 witty birthday cards. Am I wise or foolish in this? But posts here have branded all lottery purchases as foolish, so I must be a fool. I bow to the collective wisdom here and admit that I am a fool. There is a lot of other evidence that supports this conclusion :)

if you give your­self over to ra­tio­nal­ity with­out hold­ing back, you will find that ra­tio­nal­ity gives to you in re­turn.

I heartily agree, that’s one rea­son I try to avoid trot­ting out ap­plause lights to trig­ger other peo­ple into giv­ing me warm fuzzies.

I am happy for one person to be tortured for 50 years to stave off the dust specks, as long as that person is me. In fact, this pretty much sums up my career in software development. It is not my favorite thing to do, but I endured cubicle hell for many years, partly in exchange for money, but also because of my deep belief that solving annoying little bugs and glitches that might inconvenience many, many people was an activity important enough to override my personal preferences. I could easily have found other combinations of pay and fun that pleased me better, so I have actually been through this dilemma in muted form in real life, and chose to personally suffer to hold off “specks” like poorly designed user interfaces.

I do have great admiration for Eliezer, but he claims to want to be more rational and to welcome criticism intended to promote his progress on The Way, so I thought it would be ok to be critical of this post. It irked me because paragraph four is a straw-man “fool” phrased in the second person, which seems like a sort of pre-emptive ad hominem against any reader of the post foolish enough to disagree with the premise of the writer. This seems like an extremely poor substitute for rational discourse, the sort of nonsense that could cost the writer Quirrell points, and none of us want that. I don’t want to seem hostile, but since I am exactly the sort of fool who disagreed with the premise of paragraph 3, I do feel like I was being flamed a bit, and since I am apparently made of straw, flames make me nervous :)

• I myself prefer to give people $1 lottery tickets instead of $2.95 witty birthday cards. Am I wise or foolish in this? But posts here have branded all lottery purchases as foolish, so I must be a fool.

The fool­ish thing is to con­sider those two op­tions the only choices.

• I my­self pre­fer to give peo­ple \$1 lot­tery tick­ets in­stead of \$2.95 witty birth­day cards. Am I wise or fool­ish in this?

You are fool­ish in this.

Birth­day cards show that you speci­fi­cally thought of some­one’s birth­day and are cel­e­brat­ing it. Giv­ing them some­thing generic, re­gard­less of value, doesn’t serve the same pur­pose as a birth­day card. By your rea­son­ing you could not only sub­sti­tute lot­tery tick­ets for birth­day cards, you could sub­sti­tute lot­tery tick­ets for say­ing the words “happy birth­day” as well, thus never wish­ing them a happy birth­day ei­ther.

Fur­ther­more, since the lot­tery ticket is cheaper than the birth­day card, and ev­ery­one knows this, and (ap­par­ently) this cheap­ness is one of your rea­sons for do­ing this, you are vi­o­lat­ing so­cial ex­pec­ta­tions about when it is ac­cept­able to be ob­vi­ously cheap. (You can still be cheap, but you can’t be ob­vi­ously cheap about it.)

• 50 years of tor­ture is go­ing to ruin some­one’s life. Dust specks and stubbed toes are not go­ing to ruin any­one’s life, which makes the num­ber of peo­ple with dust specks and stubbed toes ir­rele­vant. That’s the thresh­old. You can’t mul­ti­ply one to get to the other.

The amount humanity loses to a dust speck in someone’s eye is exactly 0, unless that person was piloting an aircraft or something and crashes because of it, which—based on my reading of the premise—doesn’t seem to be the case. A stubbed toe might cost more, but even that is only true if you treat humanity as an amorphous, hive-minded mass rather than a group of individuals.

• A stubbed toe can put some­one in a bad mood which af­fects their be­hav­ior un­til it feels bet­ter, and that can put a damper on their whole day.

I in­tu­itively see 3^^^^3 stubbed toes as worse than 50 years of tor­ture, but not 3^^^^3 dust specks, but this is a sce­nario where I feel I should at the very least be highly sus­pi­cious of my in­tu­ition.

• Ruin­ing some­one’s day is still on the other side of the thresh­old from ru­in­ing some­one’s life. Now, if all the stubbed toes were go­ing to hap­pen to the same per­son, that would be differ­ent.

I guess I could say that the line is be­tween be­ing hurt, and be­ing de­stroyed. The point at which I would start to look at the man be­ing tor­tured as a prefer­able op­tion is when the pain be­ing suffered by the googol­plex oth­ers would be bad enough to cause se­ri­ous fi­nan­cial, so­cial, or phys­i­cal dam­age to each of them as in­di­vi­d­u­als. That’s the line.

• If you give a sig­nifi­cant frac­tion of 3^^^^3 peo­ple a bad day, it adds up to more time worth of un­happy ex­pe­rience than a 50 year life that is mis­er­able 100% of the time, more times over than we can pos­si­bly imag­ine.

A single second each in the lives of every person on Earth adds up to only about 200 years of cumulative life experience. That’s not even a drop in the ocean of 3^^^^3 people.
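For what it’s worth, the arithmetic here is easy to check; the population figure below (roughly the world population when this thread was written) is an assumption:

```python
# Rough check: one second from each person on Earth, summed, expressed
# in years of cumulative experience. Population figure is an assumption.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 31.6 million

def cumulative_years(population, seconds_each=1.0):
    """Total experience, in years, if each person contributes `seconds_each` seconds."""
    return population * seconds_each / SECONDS_PER_YEAR

print(round(cumulative_years(6_700_000_000)))  # about 212 years for ~6.7 billion people
```

Either way, the point stands: even thousands of years of cumulative experience is nothing next to 3^^^^3 people.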

Of course, if you give even an infinitesimally tiny fraction of that many people—say a trillion—a single bad day each, that’s probably going to lead to at least a few million ruined relationships and lost jobs. 3^^^^3 stubbed toes would lead to more ruined lives than the number of people who’ve actually ever lived. But even if you assume no spillover effects, it’s still a greater mass of cumulative negative experience than has occurred in the entirety of human history.

• 50 years of tor­ture is go­ing to ruin some­one’s life.

And a dust speck is go­ing to ruin some­one’s frac­tion of a sec­ond. How many frac­tions of a sec­ond do a life make?

• You’re mak­ing it sound like all hu­mans share a sin­gle con­scious­ness and pool their life ex­pe­riences. Every hu­man has a differ­ent life, a differ­ent con­scious­ness. To­tal­ling the value of a sec­ond from any num­ber of hu­mans can never equal the value of a hu­man life­time, be­cause you won’t have caused any se­ri­ous prob­lems for any per­son.

• Define “serious”. Specify one harm X that is just barely not serious, and one Y that is just a little worse, and is serious. Verify that you can find an N such that N·Y > 1 human life, and that there is no N such that N·X > 1 human life.
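The structure this challenge is probing can be made concrete. A “threshold” view amounts to comparing harms lexicographically: a serious component that sums across people, and a minor component that never crosses over no matter how many people it touches. A minimal sketch (the two-component model and all numbers are illustrative assumptions, not anything from the thread):

```python
# A hypothetical "threshold" disutility model: harms are pairs
# (serious, minor), compared lexicographically, so no number N of
# sub-threshold harms ever exceeds one ruined life.

def total_harm(per_person_harm, n):
    serious, minor = per_person_harm
    return (serious * n, minor * n)

LIFE = (1.0, 0.0)   # one ruined life
X = (0.0, 1.0)      # "not serious": zero serious component, in this model
Y = (0.01, 0.0)     # "serious": any nonzero serious component

def worse_than_life(harm):
    return harm > LIFE  # Python tuple comparison is lexicographic

print(worse_than_life(total_harm(Y, 101)))      # True: 101 copies of Y outweigh a life
print(worse_than_life(total_harm(X, 10**100)))  # False: no N of X ever does
```

Whether human welfare really has this lexicographic structure is, of course, exactly what the challenge asks the threshold view to justify.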

• X = los­ing a finger. Y = los­ing a hand.

Los­ing a finger is trau­matic and pro­duces chronic dis­figure­ment and loss of some man­ual dex­ter­ity, but (as long as it isn’t a thumb or in­dex finger) it isn’t go­ing to truly hand­i­cap some­one. Los­ing a hand WILL truly hand­i­cap some­one. I would rather ev­ery­one lose a finger than one per­son lose a hand.

• I would rather ev­ery­one lose a finger than one per­son lose a hand.

I’m pretty sure that if an in­vad­ing alien fleet came and de­manded ev­ery hu­man lose a sin­gle finger, there’d be more than enough peo­ple that’d be will­ing to sac­ri­fice their very lives to pre­vent that trib­ute—and though I’m not sure I’d be as brave as that, I’d most cer­tainly be will­ing to sac­ri­fice my hand in or­der to save a finger of each of 6 billion peo­ple.

• Peo­ple would sac­ri­fice their lives for it. How­ever, would that choice be ra­tio­nal? Espe­cially if we con­sider the like­li­hood that a war with the aliens might re­sult in mas­sive civilian ca­su­alties? Fight­ing is only a good idea if win­ning puts you in a bet­ter po­si­tion than you would oth­er­wise be in.

Be­ing will­ing to sac­ri­fice your hand is no­ble, and I would prob­a­bly do the same thing. But if you’re talk­ing about some­one ELSE’S hand, you need to look at what los­ing a finger re­ally costs in life ex­pe­rience and work­ing abil­ity ver­sus los­ing a hand.

• Ac­tu­ally, let’s make it closer. X = los­ing a finger, Y = los­ing a thumb. My an­swer would still be the same. Miss­ing a finger isn’t a huge set­back. Miss­ing a thumb is.

• To­tal­ling the value of a sec­ond from any num­ber of hu­mans can never equal the value of a hu­man life­time,

I don’t see why not.

, be­cause you won’t have caused any se­ri­ous prob­lems for any per­son.

You’ll have caused an infinitesimal problem to a truly humongous number of people.

Even before I had discovered LessWrong or met the dust-speck-vs-torture problem, I had publicly wondered whether some computer-virus creators (especially those famous viruses that affected millions of people worldwide, hijacking email services, etc.) were even worse in the results of their actions than your average murderer or average rapist. They stole some minutes out of many millions of people’s lives. How many minutes, stolen from how many millions of people, become morally equivalent to murdering a person?
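One can at least bound this question with back-of-the-envelope arithmetic (every figure below, waking hours, lifespan, minutes stolen per victim, is a hypothetical assumption):

```python
# Back-of-the-envelope: how many people, each losing a few minutes,
# sum to one lifetime of waking experience? All figures are assumptions.
WAKING_HOURS_PER_DAY = 16
YEARS_OF_LIFE = 50

LIFE_MINUTES = YEARS_OF_LIFE * 365 * WAKING_HOURS_PER_DAY * 60  # ~17.5 million

def victims_per_life(minutes_stolen_each):
    """Number of victims whose stolen minutes add up to one waking lifetime."""
    return LIFE_MINUTES / minutes_stolen_each

print(round(victims_per_life(10)))  # 1,752,000 victims at ten minutes each
```

Of course, whether such sums are morally additive at all is exactly what the rest of the thread disputes.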

So the issue exists: if dust specks aren’t enough for you, how about breaking a leg of each of 3^^^^3 people? This doesn’t ruin their whole lives, but it may ruin a whole month for them. Does the equation seem different now that we’re talking about a month instead of a millisecond? Would you now prefer to have a single person tortured for 50 years instead of 3^^^^3 people having a leg broken?

• Well, if those 3^^^^3 people being crippled for a month is going to shut down the galactic economy, then torturing someone for fifty years is preferable. If, on the other hand, we’re just talking about the suffering of one person who broke his leg, with 3^^^^3 other people enduring the same thing in ISOLATION (say each of the people with broken legs lives in a different parallel universe, and thus no society has to give more than one month’s worth of worker’s compensation), I would rather have everyone break their legs.

• I see. This is then no longer about not caus­ing “se­ri­ous prob­lems”—be­cause a bro­ken leg is a se­ri­ous prob­lem.

But how far does your argument extend? Let’s increase the amount of individual harm: how about 3^^^^3 people tortured for 3 months, vs. a single person being tortured for 50 years? Which would you rather?

How about 3^^^^3 peo­ple im­pris­oned un­justly for ten years, in rather bad but not tor­tur­ing con­di­tions, vs a sin­gle per­son be­ing tor­tured for 50 years. Which would you rather?

- For the sake of this dis­cus­sion, we in­deed con­sider the cases in­di­vi­d­ual (we can imag­ine each case hap­pen­ing in a par­allel uni­verse, as you sug­gest)

• Three months of torture is enough to cause immense and long-lasting psychological scarring. Ten years taken out of a life changes its entire course. I would rather someone be tortured for fifty years than have either of the above happen to a large number of people.

I think your choice of bro­ken leg is pretty much ex­actly at the thresh­old. I can’t think of any­thing worse than that that wouldn’t stand a good chance of ru­in­ing some­one’s life.

• As already stated some­where above, with or with­out “sa­cred” val­ues hu­mans in­vari­ably be­lieve in thresh­olds where the ex­pected nega­tive util­ity jumps ex­po­nen­tially. If I be­lieve that lengthy tor­ture is a vastly differ­ent event (for the in­di­vi­d­ual in ques­tion, and we clearly aren’t con­sid­er­ing any rip­ples) from a quickly and cleanly am­pu­tated limb, I’ll still ad­just my prefer­ences ac­cord­ingly. I’m only act­ing on my be­liefs about hu­man con­scious­ness. That’s not ir­ra­tional in it­self. There­fore… sorry, tried two in­tu­itive there­fores but nei­ther one checks out. I’ll get back on it.

• Cale­do­nian, of course that can­not be demon­strated.

Of course? It is hardly ob­vi­ous to me that such a thing is be­yond demon­stra­tion, even if we cur­rently do not know.

But who needs a demon­stra­tion?

Peo­ple in­ter­ested in ra­tio­nal think­ing who aren’t idiots. At the very least.

So, which fac­tor rules you out?

• Your conclusion follows very clearly from the research results, but it does not apply to the new situation. Doing the math is a false premise. Few people have personal experience of being tortured, and more importantly, no one who disagrees with you understands what you personally mean by the dust speck. Perhaps if it was sawdust, or getting pool water splashed in your eye, then it would finally register more clearly. Again, you (probably) haven’t been tortured, but you have gone through life without even consciously registering a dust speck in your eye. With a little adjustment above a threshold, many people might switch sides. Pain is not linear.

• This form of reasoning, while correct within a specified context, is dangerously flawed with regard to application within contexts sufficiently complex that outcomes cannot be effectively modeled. This includes much of moral interest to humans. In such cases, as with evolutionary computation, an optimum strategy exploits best-known principles synergistically promoting a maximally coherent set of present values, rather than targeting illusory, realistically unspecifiable consequences. Your “rationality” is correct but incomplete. This speaks as well to the well-known paradoxes of all consequentialist ethics.

• You’ve officially given me the best example of the inherent flaw in the utilitarian model of morality. Normally, I use the example of a man who is the sole provider for an arbitrarily large family murdering an old homeless man. Utilitarianism says he should go free. The murderer’s family, of size X, will all experience disutility from his imprisonment. Call that Y. The homeless man, literally no one will miss. No family members to gain utility from exacting justice. Therefore, since X*Y > 0, the murderer should go back to providing for his family. I do not believe any rational person would consider that just, moral, or even reasonable.

I’m all for ra­tio­nal eval­u­a­tions of prob­lems, but ra­tio­nal­ity does not ap­ply to moral ar­gu­ments. Mo­ral­ity is an emo­tional re­sponse by its very na­ture. Ra­tional ar­gu­ments are fine when we’re com­par­ing large num­bers of peo­ple. A plan that will save 400 lives vs. a plan that has a 90% chance to save 500 lives. That’s not moral­ity, that’s ra­tio­nal­ity. It doesn’t truly be­come about moral­ity un­til it’s per­sonal. If you could save the lives of 3 peo­ple you’ve never met, would you let your­self be tor­tured? Would you tor­ture some­one? Re­gard­less of your an­swer, it is eas­ier said than done...

P.S. I’m not a psy­chol­o­gist, but I imag­ine if you had differ­ent an­swers to tor­tur­ing vs. be­ing tor­tured, that says some­thing about you. Not sure what…

• Normally, I use the example of a man who is the sole provider for an arbitrarily large family murdering an old homeless man. Utilitarianism says he should go free. The murderer’s family, of size X, will all experience disutility from his imprisonment. Call that Y. The homeless man, literally no one will miss. No family members to gain utility from exacting justice. Therefore, since X*Y > 0, the murderer should go back to providing for his family. I do not believe any rational person would consider that just, moral, or even reasonable.

Err...effec­tively le­gal­iz­ing mur­der of large classes of the pop­u­la­tion would tend to in­crease the mur­der rate, cost­ing far more lives in ag­gre­gate, set­ting aside the dire con­se­quences on so­cial or­der and co­op­er­a­tion. You should use an ex­am­ple where the re­pel­lent recom­men­da­tion ac­tu­ally in­creases rather than de­creases hap­piness/​welfare.

• Well, I could qual­ify my ex­am­ple, say­ing surveillance en­sures only peo­ple who provide zero util­ity are al­lowed to be mur­dered, but as I said, the ar­ti­cle makes my point much bet­ter, even if it doesn’t mean to. A sin­gle speck of dust, even an an­noy­ing and slightly painful one, in the eyes of X peo­ple NEVER adds up to 50 years of tor­ture for an in­di­vi­d­ual. It doesn’t mat­ter how large you make X, 7 billion, a googol­plex, or 13^^^^^^^^41. It’s ir­rele­vant.

• Imagine that you find yourself visiting a hypothetical culture that acknowledges two importantly distinct classes of people: masters and slaves. By cultural convention, slaves are understood to have effectively no moral weight; causing their suffering, death, injury, etc. is simply a property crime, analogous to vandalism. Slaves and masters are distinguished solely by a visible heritable trait that you don’t consider in any way relevant to their moral weight as people.

Shortly af­ter your ar­rival, a thou­sand slaves are rounded up and kil­led. You, as a prop­erly emo­tional moral thinker, pre­sum­ably ex­press your dis­may at this, and the na­tives ex­plain that you needn’t worry; it was just a mar­ket cor­rec­tion and the eco­nomics of the situ­a­tion are such that the mas­ters are bet­ter off now. You ex­plain in turn that your dis­may is not eco­nomic in na­ture; it’s be­cause those slaves have moral weight.

They look at you, puz­zled.

How might you go about ex­plain­ing to them that they’re wrong, and slaves re­ally do have moral weight?

Some time later, you re­turn home, and find your­self en­ter­tain­ing a vis­i­tor from an­other realm who is hor­rified by the dis­cov­ery that a mil­lion old au­to­mo­biles have re­cently been de­stroyed. You ex­plain that it’s OK, the ma­te­ri­als are be­ing re­cy­cled to make bet­ter prod­ucts, and he ex­plains in turn that his dis­may is be­cause au­to­mo­biles have moral weight.

How might you go about ex­plain­ing to him that he’s wrong, and cars re­ally don’t have moral weight?

• “You might have been a slave” is imaginable in a way that “you might have been an automobile” is not. See Rawls and Kant.

• Yup. But would they ar­gue as Ja­gan did that “ra­tio­nal­ity does not ap­ply to moral ar­gu­ments. Mo­ral­ity is an emo­tional re­sponse by its very na­ture”? I’m speci­fi­cally in­ter­ested in Ja­gan’s an­swers to my ques­tions, given that as­ser­tion.

• I could qual­ify my ex­am­ple, say­ing surveillance en­sures only peo­ple who provide zero util­ity are al­lowed to be mur­dered,

If some peo­ple’s lives are worth zero util­ity, then by defi­ni­tion they are worth­less. That’s what “zero util­ity” means. Did you mean some­thing else? Be­cause it seems to me that no­body is worth­less to me in real life, and that’s why your ex­am­ple doesn’t work.

A sin­gle speck of dust, even an an­noy­ing and slightly painful one, in the eyes of X peo­ple NEVER adds up to 50 years of tor­ture for an in­di­vi­d­ual. It doesn’t mat­ter how large you make X, 7 billion, a googol­plex, or 13^^^^^^^^41. It’s ir­rele­vant.

And you judge it irrelevant based on what? Scope insensitivity is a known bias in humans, so “instinct” is reliably going to go wrong in this case without mindhacking. Two murders are worse than one murder, and two groups of people with dust specks in their eyes are worse than one such group; at what point does this stop being true?

• The mur­der’s fam­ily, of size X, will all ex­pe­rience di­su­til­ity from his im­pris­on­ment. Call that Y. The home­less man, liter­ally no one will miss. No fam­ily mem­bers to gain util­ity from ex­act­ing jus­tice. There­fore, since X*Y > 0, the mur­derer should go back to pro­vid­ing for his fam­ily.

You’re over­look­ing the di­su­til­ity to the mur­dered man. Ac­tu­ally, what you de­scribe is Pru­dent Pre­da­tion, a fa­mous ob­jec­tion to ego­ism, not util­i­tar­i­anism.

• I think you for­got to finish this:

Ac­tu­ally, what you de­scribe is Pru­dent Pre­da­tion, a fa­mous ob­jec­tion to ego­ism, not

Ex­cel­lent point about the mur­dered man, though.

• I’m all for ra­tio­nal eval­u­a­tions of prob­lems, but ra­tio­nal­ity does not ap­ply to moral ar­gu­ments. Mo­ral­ity is an emo­tional re­sponse by its very na­ture. Ra­tional ar­gu­ments are fine when we’re com­par­ing large num­bers of peo­ple.

I don’t un­der­stand this. Sure, small amounts of­ten have more emo­tional force (“near mode”) than large ones (“far mode”.) But that doesn’t make it right to let your bias hurt peo­ple. OTOH, you said “It doesn’t truly be­come about moral­ity un­til it’s per­sonal”, so maybe you mean some­thing un­usual when you say “moral­ity”.

I’m not a psy­chol­o­gist, but I imag­ine if you had differ­ent an­swers to tor­tur­ing vs. be­ing tor­tured, that says some­thing about you. Not sure what...

Hu­mans are of­ten un­able to con­form perfectly to their de­sires, even when they know what the best choice is. It’s known as “akra­sia”. For ex­am­ple, ad­dicts of­ten want to stop tak­ing the drugs. If you couldn’t bring your­self to make that sac­ri­fice, that doesn’t mean you shouldn’t, or that you be­lieve you shouldn’t. (Not say­ing you think it does, just not­ing for the record.)

• The fact that Eliezer has changed his mind sev­eral times on Over­com­ing Bias is ev­i­dence that he ex­pends some re­sources over­com­ing bias

No, he could sim­ply be bi­ased to­wards main­tain­ing a high sta­tus by ac­cept­ing the dom­i­nant opinions in his so­cial groups. If look­ing bad in oth­ers’ eyes is some­thing you wish to avoid, you’ll re­ject ar­gu­ments that oth­ers re­ject whether you think they’re right or not.

• What leads you to be­lieve he ex­pends any of his re­sources over­com­ing his bi­ases? Past be­hav­ior sug­gests he’s re­peat­ing his er­rors over and over again with­out cor­rec­tion.

• The fact that you felt it de­sir­able to ask that ques­tion means that the metaphor has failed. The mes­sage you at­tempted to send has been over­whelmed by its own noise.

• Any question of ethics is entirely answered by an arbitrarily chosen ethical system; therefore there are no “right” or “better” answers.

• Once again we’ve high­lighted the im­ma­tu­rity of pre­sent-day moral think­ing—the kind that leads in­evitably to Parfit’s Repug­nant Con­clu­sion. But any para­dox is merely a mat­ter of in­suffi­cient con­text; in the big­ger pic­ture all the pieces must fit.

Here we have peo­ple strug­gling over the rel­a­tive moral weight of tor­ture ver­sus dust specks, with­out rec­og­niz­ing that there is no ob­jec­tive mea­sure of moral­ity, but only ob­jec­tive mea­sures of agree­ment on moral val­ues.

The is­sue at hand can be mod­eled co­her­ently in terms of the rele­vant dis­tances (re­gard­less of how highly di­men­sional, or what par­tic­u­lar dis­tance met­ric) be­tween the as­ses­sor’s preferred state and the as­ses­sor’s per­cep­tion of the al­ter­na­tive states. Re­gard­less of the par­tic­u­lar (nec­es­sar­ily sub­jec­tive) model and eval­u­a­tion func­tion, there must be some scalar dis­tance be­tween the two states within the as­ses­sor’s model (since a ra­tio­nal as­ses­sor can have only a sin­gle co­her­ent model of re­al­ity, and the al­ter­na­tive states are not iden­ti­cal.) Fur­ther­more in­tro­duc­ing a mul­ti­plier on the or­der of a googol­plex over­whelms any pos­si­ble scale in any re­al­iz­able model, lead­ing to an effec­tive in­finity, forc­ing one (if one’s rea­son­ing is to be co­her­ent) to view that state as dom­i­nant.

All of this (as pre­sented by Eliezer) is perfectly ra­tio­nal—but merely a spe­cial case and in­ap­pro­pri­ate to de­ci­sion-mak­ing within a com­plex evolv­ing con­text where ac­tual con­se­quences are effec­tively un­pre­dictable.

If one faces a deep and wide chasm im­ped­ing de­sired trade with a neigh­bor­ing tribe, should one ra­tio­nally pro­ceed to achieve the de­sired out­come: an op­ti­mum bridge?

Or should one focus not on perceived outcomes, but rather on most effectively expressing one’s values-complex: i.e., valuing not the bridge, but effective interaction (including trade), and proceeding to exploit best-known principles promoting interaction, for example communications, air transport, replication rather than transport... and maybe even a bridge?

The un­der­ly­ing point is that within a com­plex evolu­tion­ary en­vi­ron­ment, spe­cific out­comes can’t be re­li­ably pre­dicted. There­fore to the ex­tent that the sys­tem (within its en­vi­ron­ment of in­ter­ac­tion) can­not be effec­tively mod­eled, an op­ti­mum strat­egy is one that leads to dis­cov­er­ing the preferred fu­ture through the ex­er­cise of in­creas­ingly sci­en­tific (in­stru­men­tal) prin­ci­ples pro­mot­ing an in­creas­ingly co­her­ent model of evolv­ing val­ues.

In the nar­row case of a com­pletely speci­fied con­text, it’s all the same. In the broader, more com­plex world we ex­pe­rience, it means the differ­ence be­tween co­her­ence and para­dox.

The Repug­nant Con­clu­sion fails (as does all con­se­quen­tial­ist ethics when ex­trap­o­lated) be­cause it pre­sumes to model a moral sce­nario in­cor­po­rat­ing an ob­jec­tive point of view. Same prob­lem here.