# Torture vs. Dust Specks

“What’s the worst that can happen?” goes the optimistic saying. It’s probably a bad question to ask anyone with a creative imagination. Let’s consider the problem on an individual level: it’s not really the worst that can happen, but would nonetheless be fairly bad, if you were horribly tortured for a number of years. This is one of the worse things that can realistically happen to one person in today’s world.

What’s the least bad, bad thing that can happen? Well, suppose a dust speck floated into your eye and irritated it just a little, for a fraction of a second, barely enough to make you notice before you blink and wipe away the dust speck.

For our next ingredient, we need a large number. Let’s use 3^^^3, written in Knuth’s up-arrow notation:

• 3^3 = 27.

• 3^^3 = (3^(3^3)) = 3^27 = 7625597484987.

• 3^^^3 = (3^^(3^^3)) = 3^^7625597484987 = (3^(3^(3^(… 7625597484987 times …)))).

3^^^3 is an exponential tower of 3s which is 7,625,597,484,987 layers tall. You start with 1; raise 3 to the power of 1 to get 3; raise 3 to the power of 3 to get 27; raise 3 to the power of 27 to get 7625597484987; raise 3 to the power of 7625597484987 to get a number much larger than the number of atoms in the universe, but which could still be written down in base 10, on 100 square kilometers of paper; then raise 3 to that power; and continue until you’ve exponentiated 7625597484987 times. That’s 3^^^3. It’s the smallest simple inconceivably huge number I know.
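The up-arrow recursion above can be sketched in a few lines of Python. This is illustrative only: nothing past 3^^3 fits in memory, but the digit count of the next rung of the tower can be estimated with logarithms.

```python
from math import log10

def up(a, n, b):
    """Knuth's up-arrow a (n arrows) b: one arrow is exponentiation,
    each extra arrow iterates the previous operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3  = 27
print(up(3, 2, 3))  # 3^^3 = 3^27 = 7625597484987
# 3^^^3 = 3^^7625597484987 is hopelessly out of reach, but even the
# next tower, 3^^4 = 3^(3^27), already has about 3.6 trillion digits:
print(int(up(3, 2, 3) * log10(3)) + 1)
```

Note that the recursion is right-associative, as up-arrow notation requires: 3^^4 means 3^(3^(3^3)), not ((3^3)^3)^3.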

Now here’s the moral dilemma. If neither event is going to happen to you personally, but you still had to choose one or the other:

Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

I think the answer is obvious. How about you?

• The answer that’s obvious to me is that my mental moral machinery—both the bit that says “specks of dust in the eye can’t outweigh torture, no matter how many there are” and the bit that says “however small the badness of a thing, enough repetition of it can make it arbitrarily awful” or “maximize expected sum of utilities”—wasn’t designed for questions with numbers like 3^^^3 in them. In view of which, I profoundly mistrust any answer I might happen to find “obvious” to the question itself.

• Isn’t this just appeal to humility? If not, what makes this different?

• It is not humility to note that extrapolating models unimaginably far beyond their normal operating ranges is a fraught business. Just because we can apply a certain utility approximation to our monkeysphere, or even a few orders of magnitude above our monkeysphere, doesn’t mean the limiting behavior matches our approximation.

• In other words, your meta-cognition is: 1) do I trust my very certain intuition? or 2) do I trust the heuristic from formal/mathematical thinking (which I see as useful partially and specifically to compensate for inaccuracies in our intuition)?

• Robin: dare I suggest that one area of relevant expertise is normative philosophy for-@#%(^^$-sake?!

It’s just painful—really, really, painful—to see dozens of comments filled with blinkered nonsense like “the contradiction between intuition and philosophical conclusion” when the alleged “philosophical conclusion” hinges on some ridiculous simplistic Benthamite utilitarianism that nobody outside of certain economics departments and insular technocratic computer-geek blog communities actually accepts! My model for the torture case is swiftly becoming fifty years of reading the comments to this post.

The “obviousness” of the dust mote answer to people like Robin, Eliezer, and many commenters depends on the following three claims:

a) you can unproblematically aggregate pleasure and pain across time, space, and individuality,

b) all types of pleasures and pains are commensurable, such that for all i, j, given a quantity of pleasure/pain experience i, you can find a quantity of pleasure/pain experience j that is equal to (or greater or less than) it (i.e., that pleasures and pains exist on one dimension),

c) it is a moral fact that we ought to select the world with more pleasure and less pain.

But each of those three claims is hotly, hotly contested. And almost nobody who has ever thought about the questions seriously believes all three. I expect there are a few (has anyone posed the three beliefs in that form to Peter Singer?), but, man, if you’re a Bayesian and you update your beliefs about those three claims based on the general opinions of people with expertise in the relevant area, well, you ain’t accepting all three. No way, no how.

• dare I suggest that one area of relevant expertise is normative philosophy for-@#%(^^$-sake?!

As someone who has studied moral philosophy for many years, I would like to point out that I agree with Robin and Eliezer, and that I know many professional moral philosophers who would agree with them, too, if presented with this moral dilemma. It is also worth noting that, many comments above, Gaverick Matheny provided a link to a paper by a professional moral philosopher, published in one of the two most prestigious moral philosophy journals in the English-speaking world, which defends essentially the same conclusion. And as the argument presented in that paper makes clear, the conclusion that one should torture need not be motivated by a theoretical commitment to some substantive thesis about the nature of pain or aggregation (as Godwer claims), but follows instead by transitivity from a series of comparisons that everyone—including those who deny that conclusion—finds intuitively plausible.

• If anyone still has a hard time believing that this is not an unorthodox position among philosophers, I’d like to recommend Shelly Kagan’s excellent The Limits of Morality, which discusses ‘radical consequentialism’ and defends a similar conclusion.

• Torture,

Consider three possibilities:

(a) A dust speck hits you with probability one, (b) You face an additional probability 1/(3^^^3) of being tortured for 50 years, (c) You must blink your eyes for a fraction of a second, just long enough to prevent a dust speck from hitting you in the eye.

Most people would pick (c) over (a). Yet 1/(3^^^3) is such a small number that by blinking your eyes one more time than you normally would, you increase your chances of being captured by a sadist and tortured for 50 years by more than 1/(3^^^3). Thus, (b) must be better than (c). Consequently, most people should prefer (b) to (a).

• You know, that actually persuaded me to override my intuitions and pick torture over dust specks.

• You don’t even have to go that far. Replace “dust specks” with “the inconvenience of not going outside the house” and “tiny chance of torture” with “tiny chance that being outside the house will lead to you getting killed”.

• Yeah, I understood the point.
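The transitivity argument in this sub-thread can be made concrete with exact rational arithmetic. Every number below is an illustrative stand-in (a placeholder N = 10**100 for 1/(3^^^3)-scale odds, and made-up disutility weights), since the argument only needs the ordering, not the magnitudes:

```python
from fractions import Fraction

SPECK = Fraction(1)          # disutility of a certain dust speck (option a)
BLINK = Fraction(1, 10)      # a deliberate blink, milder than a speck (option c)
TORTURE = Fraction(10**12)   # 50 years of torture in speck-units (made up)
N = 10**100                  # placeholder for 3^^^3-scale odds

expected_a = SPECK
expected_b = TORTURE * Fraction(1, N)   # tiny chance of torture (option b)
expected_c = BLINK

assert expected_c < expected_a   # most people prefer (c) to (a)
assert expected_b < expected_c   # yet (b)'s expected harm is smaller still
```

Using `Fraction` avoids floating-point underflow: `TORTURE / N` stays an exact nonzero rational rather than rounding to 0.0.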

• A consistent utilitarian would choose the torture, but I don’t think it’s the moral choice.

Let’s bring this a little closer to home. Hypothetically, let’s say you get to live your life again 3^^^3 times. Would you prefer to have an additional dust speck in your eye in each of your future lives, or else be tortured for 50 years in a single one of them?

Any takers for the torture?

• Man, that’s a good one. It’s certainly interesting to know that my ability to override intuition when it comes to large numbers is far less effective when the question is applied to me personally. I’m assuming that this question assumes no other ill effects from the specks. And I know I should pick the torture. I know that if the torture is the best outcome for other people, it’s the best outcome for myself. But if I was given that choice in real life, I don’t think I would as of writing this comment.

I have some correcting to do.

• Actually, I ended up resolving this at some point. I would in fact pick the dust specks in this case, because the situations aren’t identical. I’d spend a lot of time in my 3^^^3 lives worrying if I’m going to start being tortured for 50 years, but I wouldn’t worry about the dust specks. Technically, the disutility of the dust specks is worse, but my brain can’t comprehend the number “3^^^3”, so it would worry more about the torture happening to me. Adding in the disutility of worrying about the torture, even a small amount, across 3^^^3 / 2 lives, and it’s clear that I should pick the dust specks for myself in this situation, regardless of whether or not I choose torture in the original problem.

• This is sort of avoiding the question. What if you made the choice, but then had your memory erased about the whole dilemma right afterwards? Assuming you knew before making your choice that your memory would be erased, of course.

• Then I choose the torture. I’ve grown a bit more comfortable with overriding intuition in regards to extremely large numbers since my original reply 3 months ago.

• Even when applying the cold cruel calculus of moral utilitarianism, I think that most people acknowledge that egalitarianism in a society has value in itself, and assign it positive utility. Would you rather be born into a country where 9 out of 10 people are destitute (<$1,000/yr) and the last is very wealthy ($100,000/yr)? Or be born into a country where almost all people subsist on a modest ($6,000-8,000/yr) amount?

Any system that allocates benefits (say, wealth) more fairly might be preferable to one that allocates more wealth in a more unequal fashion. And the same goes for negative benefits. The dust specks may result in more total misery, but there is utility in distributing that misery equally.

• I don’t believe egalitarianism has value in itself. Tell me, would you rather get all your wealth continuously throughout the year, or get a disproportionate amount on Christmas?

If wealth is evenly distributed, it will lead to more total happiness, but I don’t see any advantage in happiness being evenly distributed.

I don’t see how your comment relates to this post.

• Perhaps it could be framed in terms of the utility of psychological comfort. Suppose that one person is tortured to avoid 3^^^3 people getting dust specks. Won’t almost every one of those 3^^^3 people empathize with the tortured person enough to feel a pang of discomfort more uncomfortable than a dust speck?

• Only if they find out that the tortured person exists, which would be an event that’s not in the problem statement.

• Well, there’s valuing money at more utility per dollar when you have less money and less utility per dollar when you have more money, which makes perfect sense. But that’s not the same as egalitarianism as part of utility.

• Third-to-last sentence sets up a false dichotomy between “more fairly” and “more unequal.”

• Very-Related Question: Typical homeopathic dilutions are 10^(-60). On average, this would require giving two billion doses per second to six billion people for 4 billion years to deliver a single molecule of the original material to any patient.

Could one argue that if we administer a homeopathic pill of vitamin C in the above dilution to every living person for the next 3^^^3 generations, the impact would be a humongous amount of flu-elimination?

If anyone convinces me that yes, I might accept to be a Torturer. Otherwise, I assume that the negligibility of the speck, plus people’s resilience, would mean no lasting effects. Disutility would vanish in milliseconds. If they wouldn’t even notice or have memory of the specks after a while, it’d equate to zero disutility.

It’s not that I can’t do the maths. It’s that the evil of the speck seems too diluted to do harm.

Just like homeopathy is too diluted to do good.

• That’s not really the point. The “dust speck” just means the mildest possible harm that a person can suffer; if you don’t think a dust speck with no long-term consequences can be harmful, you should mentally substitute a stubbed toe (with no long-term consequences) or the like.

• Could one argue that if we administer a homeopathic pill of vitamin C in the above dilution to every living person for the next 3^^^3 generations, the impact would be a humongous amount of flu-elimination?

Easily. 3^^^3 = 3^^7625597484987, an exponential tower of 7,625,597,484,987 threes, which is so much larger than 10^60 that it is almost certain that many people will receive significant doses of vitamin C. Heck, even 3^^4 = 3^(3^27) = 3^7625597484987 has roughly 3.6 trillion digits, already dwarfing 10^60—and that’s merely four rungs of the tower. If there is any causal relationship at all between receiving a dose of vitamin C and flu resistance (which I believe you imply for the purposes of the question), then a tremendous number of people will be protected from the flu—much, much in excess of 10^60.

• almost certain that many people will receive significant doses of vitamin C

Not what I said.

Each person will receive vitamin C diluted in the ratio of 10^(-60) (see reference here). The amount is the same for everyone, constant. Strictly one dose per person (as it was one speck per person).

But the number of persons is all people alive in the next 3^^^3 generations.

• If there is any causal relationship at all between receiving a dose of vitamin C and flu resistance

...which wouldn’t mean it is linear at all. Above a certain dose it can be lethal; below, it can have no effect.

Does it sound reasonable that if you eat one nanogram of bread during severe starvation, it would retard your death by precisely zero seconds?

• Does it sound reasonable that if you eat one nanogram of bread during severe starvation, it would retard your death by precisely zero seconds?

No. You use energy at some finite rate (I’ll assume 2000 kilocalories/day; dunno how much starvation affects this). A nanogram of bread contains a nonzero amount of energy (~2.5 microcalories). So it increases your life expectancy by a nonzero time (~100 nanoseconds). A similar analysis can be performed for anything down to and including a single molecule.
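The comment’s arithmetic checks out; here is a quick sketch using its stated 2000 kcal/day burn rate, plus an assumed ~2.5 kcal/g energy density for bread (a typical figure, not from the thread):

```python
# Energy in one nanogram of bread (assumed ~2.5 kcal/g = 2500 cal/g)
energy_cal = 2.5e3 * 1e-9               # ~2.5e-6 cal = 2.5 microcalories

# Burn rate at 2000 kcal/day
burn_cal_per_sec = 2_000_000 / 86_400   # ~23.1 cal/s

# Life expectancy gained: tiny, but strictly nonzero
delta_t = energy_cal / burn_cal_per_sec  # ~1.1e-7 s, about 100 ns
print(delta_t)
```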

• But each patient receives less than 10^60 molecules—one must assume some probability distribution on the number of molecules if we are to suppose any medication is delivered at all. Assuming the dilutions are performed as prescribed in a typical homeopathic preparation, a minuscule fraction will randomly have significantly more than the expected concentration, but even so, at least the logarithm of the fraction will be on an order of magnitude with the logarithm of 10^-60—and therefore will still multiply to a tremendous number in 3^^^3 cases.

That said, even if you assume that the distribution is exactly as even as possible—every patient receives either zero or one molecule of vitamin C—there will be a minuscule probability that the effect of that one molecule will be at the tipping point. Truly minuscule—probably on the order of 10^-20 to 10^-25, a few in one Avogadro’s number—but this still corresponds to aiding 1 in 10^80 to 10^85 people, which multiplies to a tremendous number in 3^^^3 cases.

• Mathematically, I have to agree with your reply: you either have no molecules or at least one. And then, your calculations hold true. And I’m wrong.

Physiologically, though, my argument is that the “nanoutility” that this molecule would add would have such a negligible effect that nothing would change in the person’s life measured by any practical purposes. It will pass completely unnoticed (zero!)—for each person in the 3^^^3 generations.

I assume a fuzzy scale of flu, so that no single molecule would turn sure-flu to sure-non-flu. As I assumed with the specks.

• Even if you perform the more sophisticated analysis, the probability of the flu should shift slightly—and that slightly will be on the order of 10^-23, as before. And that times 3^^^3...
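The orders of magnitude in this exchange can be sketched numerically. The ~1 g dose size is my own illustrative assumption; the 10^-60 dilution and the ~10^-25 "tipping point" probability are the thread's own figures:

```python
# A ~1 g dose of water contains roughly 3.3e22 molecules (assumed dose size)
molecules_per_dose = 3.3e22
dilution = 1e-60                # homeopathic dilution from the thread

# Expected vitamin C molecules per patient: far below one, so almost
# all patients get zero and a tiny fraction get exactly one
expected_molecules = molecules_per_dose * dilution   # ~3.3e-38

# Times the thread's ~1e-25 chance that a single molecule matters
p_benefit = expected_molecules * 1e-25               # ~3.3e-63

# log10(3^^^3) dwarfs 63, so across 3^^^3 patients the expected
# number of beneficiaries is still astronomically large
print(p_benefit)
```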

• If you could take all the pain and discomfort you will ever feel in your life, and compress it into a 12-hour interval, so you really feel ALL of it right then, and then after the 12 hours are up you have no ill effects—would you do it? I certainly would. In fact, I would probably make the trade even if it were 2 or 3 times longer-lasting and of the same intensity. But something doesn’t make sense now… am I saying I would gladly double or triple the pain I feel over my whole life?

The upshot is that there are some very nonlinear phenomena involved in calculating amounts of suffering, as Psy-Kosh and others have pointed out. You may indeed move along one coordinate in “suffering-space” by 3^^^3 units, but it isn’t just absolute magnitude that’s relevant. That is, you cannot recapitulate the “effect” of fifty years of torture with isolated dust specks. As the responses here make clear, we do not simply map magnitudes in suffering space to moral relevance, but instead we consider the actual locations and contours. (Compare: you decide to go for a 10-mile hike. But your enjoyment of the hike depends more on where you go than on the distance traveled.)

• “If you could take all the pain and discomfort you will ever feel in your life, and compress it into a 12-hour interval, so you really feel ALL of it right then, and then after the 12 hours are up you have no ill effects—would you do it? I certainly would.”

Hubris. You don’t know, can’t know, how that pain would/could be instrumental in processing external stimuli in ways that enable you to make better decisions.

“The sort of pain that builds character, as they say.”

The concept of processing ‘pain’ in all its forms is rooted very deep in humanity—get rid of it entirely (as opposed to modulating it as we currently do), and you run a strong risk of throwing the baby out with the bathwater, especially if you then have an assurance that your life will have no pain going forward. There’s a strong argument to be made for deference to traditional human experience in the face of the unknown.

• Anon prime: dollars are not utility. Economic egalitarianism is instrumentally desirable. We don’t normally favor all types of equality, as Robin frequently points out.

Kyle: cute

Eliezer: My impulse is to choose the torture, even when I imagine very bad kinds of torture and very small annoyances (I think that one can go smaller than a dust mote, possibly something like a letter on the spine of a book that your eye sweeps over being in a shade less well-selected a font). Then, however, I think of how much longer the torture could last and still not outweigh the trivial annoyances if I am to take the utilitarian perspective, and my mind breaks. Condoning 50 years of torture, or even a day’s worth, is pretty much the same as condoning universes of agonium lasting for eons in the face of numbers like these, and I don’t think that I can condone that for any amount of a trivial benefit.

• (This was my favorite reply, BTW.)

• I admire the restraint involved in waiting nearly five years before selecting a favorite.

• Well, too bad he didn’t wait a year longer then ;). I think preferring torture is the wrong answer for the same reason that I think universal health care is a good idea. The financial cost of serious illness and injury is distributed over the taxpaying population, so no single individual has to deal with a spike in medical costs ruining their life. And I think it’s still the correct moral choice regardless of whether universal health care happens to be more expensive or not.

Analogously, I think the exact same reasoning applies to dust vs. torture. I don’t think the correct moral choice is about minimizing the total area under the pain-curve at all; it’s about avoiding severe pain-spikes for any given individual, even at the cost of having a larger area under the curve. I don’t think “shut up and multiply” applies here in its simplistic conception the way it might apply in the scenario where you have to choose whether 400 people live for sure or 500 people live with .9 probability (and die with .1 probability).

Irrespective of the former, however, the thought experiment is a bit problematic because it’s more complex than apparent at first, if we really take it seriously. Eliezer said the dust specks are “barely noticed”, but being conscious or aware of something isn’t an either-or thing; awareness falls on a continuum, so whatever “pain” the dust specks cause has to be multiplied by how aware the person really is. If someone is tortured, that person is presumably very aware of the physical and emotional pain.

Not counting other possible consequences like lasting damage or social repercussions, I don’t really care all that much about any kind of pain that happens to me while I’m not aware of it. I could probably figure out whether or not pain is actually registered in my brain during my upcoming operation under anesthesia, but the fact that I won’t bother tells me very clearly that awareness of pain is an important weight we have to multiply in some fashion with the actual pain-registration in the brain.

That’s just an additional consideration, though. Even if we simplify it and imagine the pain is directly comparable and has no difference in quality at all, while the total quantity of pain is excessively higher in the dust scenario compared to the torture scenario, it changes nothing about my current choice.

So what does that tell me about the relationship between utility and morality? I don’t accept that morality is just about the total lump sums of utility and disutility; I think we also have to consider the distribution of those in any given population. Why is that, I ask myself, and my brain offers the following answer to this question:

If I were the only agent in the entire universe and had to pick torture vs. dust for myself (and obviously if I were immortal / had a long enough life to experience all those dust specks), I would still prefer the larger area under the curve over the pain-spike, even if I assume direct comparability of the two kinds of pain. I suspect the reason for this choice is a type of time-discounting my brain does: I’d rather suffer a little pain every day for a trillion years than a big spike for 50 years. Considering that, briefly speaking, utility is (or at least I think should be defined as) a thing that only results from the interaction of minds and environments, my mind and its workings are definitely part of the equation that says what has utility and what doesn’t. And my mind wants to suffer low disutility evenly distributed over a long time period rather than suffer great disutility in a 50-year spike (assuming a trillion-year lifetime).

• I don’t think the correct moral choice is about minimizing the total area under the pain-curve at all, it’s about avoiding severe pain-spikes for any given individual even at the cost of having a larger area under the curve.

If you’re going to say that, you’ll need some threshold, and pain over the threshold makes the whole society count as worse than pain under the threshold. This will mean that any number of people with pain X is better than one person with pain X + epsilon, where epsilon is very small but happens to push it over the threshold.

Alternately, you could say that the disutility of pain gradually changes, but that has other problems. I suggest you read up on the repugnant conclusion (http://plato.stanford.edu/entries/repugnant-conclusion/)—depending on exactly what you mean, what you suggest is similar to the proposed solutions, which don’t really work.

• How bad is the torture option?

Let’s say a human brain can have ten thoughts per second; or the rate of human awareness is ten perceptions per second. Fifty years of torture means nearly sixteen billion tortured thoughts, or perceptions of torture.

Let’s say a human brain can distinguish twenty logarithmic degrees of discomfort, with the lowest being “no discomfort at all”, the second-lowest being a dust speck, and the highest being torture. In other words, a single moment of torture is 2^19 = 524288 times worse than a dust speck; and a dust speck is the smallest discomfort possible. Let’s call a unit of discomfort a “dol” (from the Latin dolor).

In other words, the torture option means roughly 15.8 billion moments × 2^19 dols, whereas the dust-specks option means 3^^^3 moments × 1 dol.

The assumptions going into this argument are the speed of human thought or perception, and the scale of human discomfort or pain. These are not accurately known today, but there must exist finite limits—humans do not think or perceive infinitely fast, and the worst unpleasantness we can experience is not infinitely bad. I have assumed a log scale for discomfort because we use log scales for other senses, e.g. brightness of light and volume of sound. However, all these assumptions can be empirically corrected based on facts about human neurology.

Torture is really, really bad. But it is not infinitely bad.

That said, there may be other factors in the moral calculation of which to prefer. For instance, the moral badness of causing a particular level of discomfort may not be linear in the amount of discomfort: causing three dols once may be worse than causing one dol three times. However, this seems difficult to justify. Discomfort is subjective, which is to say, it is measured by the beholder—and the beholder only has so much brain to measure it with.
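Under this comment's assumptions (ten perceptions per second, a 20-step logarithmic discomfort scale), the torture total is easy to bound:

```python
SECONDS_PER_YEAR = 31_536_000          # 365 days
moments = 50 * SECONDS_PER_YEAR * 10   # ~1.58e10 tortured perceptions
torture_dols = moments * 2**19         # ~8.3e15 dols in total

# The specks option costs 3^^^3 x 1 dol; any finite bound like the
# ~1e16 dols above is vanishingly small next to 3^^^3.
print(torture_dols)
```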

• I suspect that I would prefer the false memory of having been tortured for five minutes to the false memory of having been tortured for a year, assuming the memories are close replicas of what memories of the actual event would be like. I would relatedly prefer that someone else experience the former rather than the latter, even if I’m perfectly aware the memory is false. This suggests to me that whatever I’m doing to make my moral judgments that torture is bad, it’s not just summing the number of perception-moments… there are an equal number of perception-moments in those two cases, after all. (Specifically, none at all.)

That said, this line of thinking quickly runs aground on the “no knock-on effects” condition of the initial thought experiment.

• I suspect that I would prefer the false memory of having been tortured for five minutes to the false memory of having been tortured for a year, assuming the memories are close replicas of what memories of the actual event would be like.

Actually, from what I read about related research in Thinking, Fast and Slow, it’s not clear that you would (or that the difference would be as large as you might expect, at least). It seems that memories of pain depend largely on the most intense moment of pain and on the final moment of pain, not necessarily on duration.

For example, in one experiment (I read the book a week ago and write from memory), subjects were asked to put their hand in a bowl of cold water (a painful experience) for two minutes; then they were asked to put their hands in cold water for two minutes, followed by the water being warmed gradually over another 5 minutes. (There were reasonable controls, obviously.) Then they were asked which experience to repeat. The majority chose experience two, even though intuitively it is strictly worse than experience one.

Of course, you’d have to find the actual related paper(s), check how strong the ignoring-duration effect is, check if there’s significant inter-individual variation (whether maybe you’re an unusual person who cares about duration), but, regardless, there are significant reasons to doubt your intuitions in this scenario.

• huh.

I wonder if we might actually value experiences this way?

• Daniel Kahneman suggests that we do. We remember things imperfectly and optimize for the way we remember them. Wikipedia has a quick summary.

• This suggests to me that whatever I’m doing to make my moral judgments that torture is bad, it’s not just summing the number of perception-moments… there are an equal number of perception-moments in those two cases, after all. (Specifically, none at all.)

True—we need a term for moments of discomfort caused by contemplation, not just ones caused by perception.

It seems to me, though, that your brain can only perceive a finite number of gradations of unpleasant contemplation, too. The memory of being tortured for five minutes, the memory of being tortured for a year, and the memory of having gotten a dust speck in your eye could occupy points on this scale of unpleasantness.

• I’ll go ahead and reveal my answer now: Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity.

Some comments:

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant. In other words, it has to be effectively flat. And I doubt they would have said anything different if I’d said 3^^^^3.

If anything is aggregating nonlinearly it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks because each dust speck occurs to a different person. The only person who could be “acclimating” to 3^^^3 is you, a bystander who is insensitive to the inconceivably vast scope.

Scope insensitivity—extremely sublinear aggregation by individuals considering bad events happening to many people—can lead to mass defection in a multiplayer prisoner’s dilemma even by altruists who would normally cooperate. Suppose I can go skydiving today but this causes the world to get warmer by 0.000001 degree Celsius. This poses very little annoyance to any individual, and my utility function aggregates sublinearly over individuals, so I conclude that it’s best to go skydiving. Then a billion people go skydiving and we all catch on fire. Which exact person in the chain should first refuse?

I may be influenced by having previously dealt with existential risks and people’s tendency to ignore them.

• While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant.

Sum(1/n^2, 1, 3^^^3) < Sum(1/n^2, 1, inf) = (pi^2)/6

So an al­gorithm like, “or­der util­ities from least to great­est, then sum with a weight if 1/​n^2, where n is their po­si­tion in the list” could pick dust specks over tor­ture while recom­mend­ing most peo­ple not go sky div­ing (as their benefit is out­weighed by the detri­ment to those less for­tu­nate).

This would mean that scope in­sen­si­tivity, be­yond a cer­tain point, is a fea­ture of our moral­ity rather than a bias; I am not sure my opinion of this out­come.

That said, while giv­ing an an­swer to the one prob­lem that some seem more com­fortable with, and to the sec­ond that ev­ery­one agrees on, I ex­pect there are clear failure modes I haven’t thought of.

This of course holds for weights of 1/​n^a for any a>1; the most con­vinc­ing defeat of this propo­si­tion would be show­ing that weights of 1/​n (or 1/​(n log(n))) drop off quickly enough to lead to bad be­hav­ior.
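As an illustration, here is a minimal Python sketch of that rank-weighted aggregation. The function name and the disutility magnitudes are invented for the example, and a modest 10^6 stands in for 3^^^3, since the weighted sum is bounded by pi^2/6 no matter how many terms you add:

```python
import math

def rank_weighted_disutility(utilities, a=2.0):
    """Order disutilities from worst to mildest, then sum with weight
    1/n^a, where n is each item's position in the sorted list."""
    ordered = sorted(utilities)  # most negative (worst) first
    return sum(u / (n + 1) ** a for n, u in enumerate(ordered))

# One person tortured: a single huge disutility (made-up magnitude).
torture = [-1_000_000.0]

# Many people each getting a dust speck: a tiny disutility each.
specks = [-1.0] * 10**6

speck_total = rank_weighted_disutility(specks)
assert speck_total > -math.pi**2 / 6          # bounded, however many specks
assert speck_total > rank_weighted_disutility(torture)  # SPECKS preferred
```

Because the weights form a convergent series, adding more speck-victims can never push the total past a fixed constant, which is exactly the “effectively flat” behavior described above.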

• On recently encountering the Wikipedia page on Utility Monsters, and thence the Mere Addition Paradox, it occurs to me that this seems to neatly defang both.

Edited—rather, it completely defangs the Mere Addition Paradox; it may or may not completely defang Utility Monsters, depending on details, but it at least reduces their impact.

• While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant. In other words it has to be effectively flat. And I doubt they would have said anything different if I’d said 3^^^^3.

And why should they consider 3^^^^3 differently, if their function asymptotically approaches a limit? Besides, a human utility function would take in the whole, and then perhaps consider duplicates, uniqueness (you don’t want your prehistoric tribe to lose the last man who knows how to make a stone axe), and so on, rather than evaluate one by one and then sum.

Scope insensitivity—extremely sublinear aggregation by individuals considering bad events happening to many people—can lead to mass defection in a multiplayer prisoner’s dilemma even by altruists who would normally cooperate. Suppose I can go skydiving today, but this causes the world to get warmer by 0.000001 degrees Celsius. This poses very little annoyance to any individual, and my utility function aggregates sublinearly over individuals, so I conclude that it’s best to go skydiving. Then a billion people go skydiving and we all catch on fire. Which exact person in the chain should first refuse?

The false allure of oversimplified morality is the ease of inventing hypothetical examples where it works great.

One could, of course, posit a colder planet. Most of the population would prefer that planet to be warmer, but if the temperature rise exceeds 5 degrees Celsius, the gas hydrates melt and everyone dies. And they all have to decide on one day. Or one could posit a planet Linearium, populated entirely by people who really love skydiving, who would want to skydive every day, but that would raise the global temperature by 100 degrees Celsius, and they’d rather be alive than skydive every day and boil to death. They opt to skydive on their birthdays at the expense of a 0.3 degree global temperature rise, which each one of them finds an acceptable price to pay for getting to skydive on their birthday.

• I think I understand why one should derive the conclusion to torture one person, given these premises.

What I don’t understand is the premises. In the article about scope insensitivity you linked to, it was very clear that the scope of things made them worse. I don’t understand why it should be wrong to round down the dust speck, or similar very small disutilities, to zero—basically, what Scott Clark said: 3^^^3 × 0 disutilities = 0 disutility.

• Rounding to zero is odd. In the absence of other considerations, you have no preference whether or not people get a dust speck in their eye?

It is also in violation of the structure of the thought experiment—a dust speck was chosen as the least bad bad thing that can happen to someone. If you would round it to zero, then you need to choose a slightly worse thing—I can’t imagine your intuitions will be any less shocked by preferring torture to that slightly worse thing.

• a dust speck was chosen as the least bad bad thing that can happen to someone.

That was a mistake, since so many people round it to zero.

• It seems to have been. Since the criterion for the choice was laid out explicitly, though, I would have hoped that more people would notice that the thought experiment they solved so easily was not actually the one they had been given, and perform the necessary adjustment. This was obviously too optimistic—but perhaps it can itself serve as some kind of lesson about reasoning.

• I concede that it is reasonable within the constraints of the thought experiment. However, I think it should be noted that this will never be more than a thought experiment, and that if real-world numbers and real-world problems are used, it becomes less clear-cut, and the intuition of going against the 50 years of torture is a good starting point in some cases.

• It’s odd. If you think about it, Eliezer’s argument is absolutely correct. But it seems rather unintuitive even though I KNOW it’s right. We humans are a bit silly sometimes. On the other hand, we did manage to figure this out, so it’s not that bad.

• I will admit, that was a pretty awesome lesson to learn. Marcello’s reasoning had it click in my head, but the kicker that drove the point home was scaling it to 3^^^^3 instead of 3^^^3.

• I agree with this analysis, provided there is some reason for linear aggregation.

Why should the utility of the world be the sum of the utilities of its inhabitants? Why not, for instance, the `min` of the utilities of its inhabitants?

I think that’s what my intuition wants to do anyway: care about how badly off the worst-off person is, and try to improve that.

U1(world) = min over people of u(person), instead of U2(world) = sum over people of u(person)

so U1(torture) = -big, U1(dust) = -tiny
U2(torture) = -big, U2(dust) = -outrageously massive

Thus, if you use U1, you choose dust because -tiny > -big, but if you use U2, you choose torture because -big > -outrage.

But I see no real reason to prefer one intuition over the other, so my question is this: why linear aggregation of utilities?
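The two aggregators above can be sketched numerically (a toy Python example; the magnitudes BIG, TINY, and N_SPECKS are invented stand-ins, since 3^^^3 itself fits in no computer):

```python
BIG = 1e9            # stand-in disutility of fifty years of torture
TINY = 1e-6          # stand-in disutility of one dust speck
N_SPECKS = 10**18    # stand-in for 3^^^3 (any huge number shows the pattern)

# U1 (min-aggregation) only looks at the single worst-off person per world.
u1_torture = -BIG    # the one tortured person
u1_specks = -TINY    # everyone is merely speck-level badly off

# U2 (sum-aggregation) adds disutility over everyone affected.
u2_torture = -BIG
u2_specks = -TINY * N_SPECKS

assert u1_specks > u1_torture   # min-aggregation prefers SPECKS
assert u2_torture > u2_specks   # sum-aggregation prefers TORTURE
```

The disagreement between the two verdicts is the whole question: the choice of aggregator, not the arithmetic, does the moral work here.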

• I think that’s what my intuition wants to do anyway: care about how badly off the worst-off person is, and try to improve that.

I find it hard to believe that you believe that. Under that metric, for example, “pick a thousand happy people and kill their dogs” is a completely neutral act, along with lots of other extremely strange results.

• Oh, good point; maybe a kind of alphabetical ordering could break ties.

So then, we disregard everyone who isn’t affected by the possible action and maximize over the utilities of those who are.

But still, this prefers a million people being punched once to any one person being punched twice, which seems silly—I’m just trying to parse out my intuition for choosing dust specks.

I get that other possible methods being flawed is a mark in favor of linear aggregation, but what positive reasons are there for it?

• Or, for a maybe more dramatic instance: “Find the world’s unhappiest person and kill them.” Of course, total utilitarianism might also endorse doing that (as might quite a lot of people, horrible though it sounds, on considering just how wretched the lives of the world’s unhappiest people probably are)—but min-utilitarianism continues to endorse doing this even if everyone in the world—including the soon-to-be-ex-unhappiest-person—is extremely happy and very much wishes to go on living.

• The specific problem which causes that is that most versions of utilitarianism don’t allow the fact that someone desires not to be killed to affect the utility calculation, since after they have been killed, they no longer have utility.

• Yes, this is a failure mode of (some forms of?) utilitarianism, but not the specific weirdness I was trying to get at, which was that if you aggregate by min(), then it’s completely morally OK to do very bad things to huge numbers of people—in fact, it’s no worse than radically improving huge numbers of lives—as long as you avoid affecting the one person who is worst-off. This is a very silly property for a moral system to have.

You can attempt to mitigate this property with too-clever objections, like “aha, but if you kill a happy person, then in the moment of their death they are temporarily the most unhappy person, so you have affected the metric after all”. I don’t think that actually works, but didn’t want it to obscure the point, so I picked “kill their dog” as an example, because it’s a clearly bad thing which definitely doesn’t bump anyone to the bottom.

• But still, WHY is torture better? What is even the problem with the dust specks? Some of the people who get dust specks in their eyes will die in accidents caused by the dust particles? Is this why the dust is so bad? But then, have we considered the fact that the dust may save an equal number of people, who would otherwise die? I really don’t get it and it bothers me a lot.

• Okeymaker, I think the argument is this:

Torturing one person for 50 years is better than torturing 10 persons for 40 years.

Torturing 10 persons for 40 years is better than torturing 10^3 persons for 10 years.

Torturing 10^3 persons for 10 years is better than torturing 10^6 persons for 1 year.

Torturing 10^6 persons for 1 year is better than torturing 10^9 persons for 1 month.

Torturing 10^9 persons for 1 month is better than torturing 10^12 persons for 1 week.

Torturing 10^12 persons for 1 week is better than torturing 10^15 persons for 1 day.

Torturing 10^15 persons for 1 day is better than torturing 10^18 persons for 1 hour.

Torturing 10^18 persons for 1 hour is better than torturing 10^21 persons for 1 minute.

Torturing 10^21 persons for 1 minute is better than torturing 10^30 persons for 1 second.

Torturing 10^30 persons for 1 second is better than torturing 10^100 persons for 1 millisecond.

Torturing for 1 millisecond is exactly what a dust speck does.

And if you disagree with the numbers, you can add a few millions. There is still plenty of space between 10^100 and 3^^^3.
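For what it’s worth, under a crude linear “person-seconds of torture” accounting (a Python sketch; the accounting itself is a simplifying assumption, though the numbers come from the list above), the preferred left-hand option at every step of the chain really does involve strictly less total suffering:

```python
DAY = 86_400  # seconds in a day

# (persons, seconds of torture each), taken from the chain above;
# the final dust-speck and 1-millisecond steps are omitted.
steps = [
    (1,      50 * 365 * DAY),
    (10,     40 * 365 * DAY),
    (10**3,  10 * 365 * DAY),
    (10**6,  365 * DAY),
    (10**9,  30 * DAY),
    (10**12, 7 * DAY),
    (10**15, DAY),
    (10**18, 3600),
    (10**21, 60),
    (10**30, 1),
]

# Total person-seconds at each step grows strictly down the chain:
totals = [persons * seconds for persons, seconds in steps]
assert all(later > earlier for earlier, later in zip(totals, totals[1:]))
```

So each step multiplies the number of sufferers far faster than it shrinks the duration; the question is only whether that linear accounting is the right one, which is exactly what the replies below dispute.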

• Torturing a person for 1 millisecond is not necessarily even a possibility. It doesn’t make any sense whatsoever; in 1 millisecond no interesting feedback loops can even close.

If we accept that torture is some class of computational processes that we wish to avoid, the badness definitely could be eating up your 3^^^3s in one way or another. We have absolutely zero reason to expect linearity when some (however unknown) properties of a set of computations are involved. And the computational processes are not infinitely divisible into smaller lengths of time.

• Okay, here’s a new argument for you (originally proposed by James Miller, and which I have yet to see adequately addressed): assume that you live on a planet with a population of 3^^^3 distinct people. (The “planet” part is obviously not possible, and the “distinct” part may or may not be possible, but for the purposes of a discussion about morality, it’s fine to assume these.)

Now let’s suppose that you are given a choice: (a) everyone on the planet gets a dust speck in the eye right now, or (b) the entire planet holds a lottery, and the one person who “wins” (or “loses”, more accurately) will be tortured for 50 years. Which would you choose?

If you are against torture (as you seem to be, from your comment), you will presumably choose (a). But now let’s suppose you are allowed to blink just before the dust speck enters your eye. Call this choice (c). Seeing as you probably prefer not having a dust speck in your eye to having one in your eye, you will most likely prefer (c) to (a).

However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3. But since the lottery proposed in (b) only offers a 1/3^^^3 probability of being picked for the torture, (b) is preferable to (c).

Then, by the transitivity axiom, if you prefer (c) to (a) and (b) to (c), you must prefer (b) to (a).

Q.E.D.

• However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3.

And the time spent setting up a lottery and carrying out the drawing also increases the probability that someone else gets captured and tortured in the intervening time, far more than blinking would. In fact, the probability goes up anyway in that fraction of a second, whether you blink or not. You can’t stop time, so there’s no reason to prefer (c) to (b).

• In fact, the probability goes up anyway in that fraction of a second, whether you blink or not.

Ah, sorry; I wasn’t clear. What I meant was that blinking increases your probability of being tortured beyond the normal “baseline” probability of torture. Obviously, even if you don’t blink, there’s still a probability of your being tortured. My claim is that blinking raises the probability of being tortured above what it would be if you hadn’t blinked (since you can’t see for a fraction of a second while blinking, leaving you ever-so-slightly more vulnerable than you would be with your eyes open), and moreover that it raises it by more than 1/3^^^3. So basically what I’m saying is that P(torture|blink) > P(torture|~blink) + 1/3^^^3.

• Let me see if I get this straight:

The choice comes down to dust specks at time T or dust specks at time T + dT, where the interval dT allows you time to blink. The argument is that in the interval dT, the probability of being captured and tortured increases by an amount greater than your odds in the lottery.

It seems to me that the blinking is immaterial. If the question were whether to hold the lottery today or put dust in everyone’s eyes tomorrow, the argument should be unchanged. It appears to hinge on the notion that as time increases, so do the odds of something bad happening, and therefore you’d prefer to be in the present instead of the future.

The problem I have is that the future is going to happen anyway. Once the interval dT passes, the odds of someone being captured in that time will go up regardless of whether you chose the lottery or not.

• However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3.

This seems pretty unlikely to be true.

• I think you underestimate the magnitude of 3^^^3 (and thereby overestimate the magnitude of 1/3^^^3).

• Both numbers seem basically arbitrarily small (probability 0).

Since the planet has so many distinct people, and they blink more than once a day, you are essentially asserting that on that planet, multiple people are kidnapped and tortured for more than 50 years several times a day.

• Since the planet has so many distinct people, and they blink more than once a day, you are essentially asserting that on that planet, multiple people are kidnapped and tortured for more than 50 years several times a day.

Well, I mean, obviously a single person can’t be kidnapped more than once every 50 years (assuming that’s how long each torture session lasts), and certainly not several times a day, since he/she wouldn’t have finished being tortured quickly enough to be kidnapped again. But yes, the general sentiment of your comment is correct, I’d say. The prospect of a planet with daily kidnappings and 50-year-long torture sessions may seem strange, but that sort of thing is just what you get when you have a population count of 3^^^3.

• I worked it out on the back of an envelope, and the probability of being kidnapped when you blink is only 1/5^^^5.

• Well, now I know you’re underestimating how big 3^^^3 is (and 5^^^5, too). But let’s say somehow you’re right, and the probability really is 1/5^^^5. All I have to do is modify the thought experiment so that the planet has 5^^^5 people instead of 3^^^3. There, problem solved.

So, new question: would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 5^^^5 people get dust specks in their eyes?

• Agreed—having lived in chronic pain supposedly worse than untrained childbirth, I’d say that even an hour has a really seriously different possibility in terms of capacity for suffering than a day, and a day different from a week. For me it breaks down somewhere, even when multiplying between the 10^15 for 1 day and the 10^21 for one minute. You can’t really feel THAT much pain in a minute that is comparable to a day, even by orders of magnitude? It’s just qualitatively different. Interested to hear pushback on this.

• We could go from a day to a minute more slowly; for example, by increasing the number of people by a factor of a googolplex every time the torture time decreases by 1 second.

I absolutely agree that the length of torture increases how bad it is in nonlinear ways, but this doesn’t mean we can’t find exponential factors that dominate it at every point, at least along the “less than 50 years” range.

• Obviously. Just important to remember that extremity of suffering is something we frequently fail to think well about.

• Absolutely. We’re bad at anything that we can’t easily imagine. Probably, for many people, intuition for “torture vs. dust specks” imagines a guy with a broken arm on one side, and a hundred people saying ‘ow’ on the other.

The consequences of our poor imagination for large numbers of people (i.e., scope insensitivity) are well-studied. We have trouble doing charity effectively because our intuition doesn’t take the number of people saved by an intervention into account; we just picture the typical effect on a single person.

What, I wonder, are the consequences of our poor imagination for extremity of suffering? For me, the prison system comes to mind: I don’t know how bad being in prison is, but it probably becomes much worse than I imagine if you’re there for 50 years, and we don’t think about that at all when arguing (or voting) about prison sentences.

• My heuristic for dealing with such situations is somewhat reminiscent of Hofstadter’s Law: however bad you imagine it to be, it’s worse than that, even when you take the preceding statement into account. In principle, this recursion should go on forever and lead to you regarding any sufficiently unimaginably bad situation as infinitely bad, but in practice, I’ve yet to have it overflow, probably because your judgment spontaneously regresses back to your original (inaccurate) representation of the situation unless consciously corrected for.

• Obligatory xkcd.

• That would have been a better comic without the commentary in the last panel.

• But the alt text is great X-)

• My feeling is that situations like being caught for doing something horrendous might or might not be subject to psychological adjustment—that many situations of suffering are subject to psychological adjustment and so might actually be not as bad as we thought. But chronic intense pain is literally unadjustable to some degree—you can adjust to being in intense suffering, but that doesn’t make the intense suffering go away. That’s why I think it’s a special class of states of being—one that invokes action. What do people think?

• That strikes me as a deliberate setup for a continuum fallacy.

Also, why are you so sure that the number of people increases suffering in a linear way for even very large numbers? What is a number of people, anyway?

I’d much prefer to have a [large number of exact copies of me] experience 1 second of headache than for one me to suffer it for a whole day, because those copies don’t have any mechanism which could compound their suffering. They aren’t even different subjectivities. I don’t see any reason why a hypothetical mind upload of me running on multiply redundant hardware should be a utility monster, if it can’t even tell subjectively how redundant its hardware is.

Some anaesthetics do something similar, preventing any new long-term memories, and people have no problem with taking those for surgery. Something’s still experiencing pain, but it’s not compounding into anything really bad (unless the drugs fail to work, or unless some form of long-term memory still works). A real example of a very strong preference for N independent experiences of 30 seconds of pain over 1 experience of 30*N seconds of pain.

• It’s not a continuum fallacy, because I would accept “There is some pair (N, T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T−1 seconds), but I don’t know the exact values of N and T” as an answer. If, on the other hand, the comparison goes the other way for any values of N and T, then you have to accept the transitive closure of those comparisons as well.

Also, why are you so sure that the number of people increases suffering in a linear way for even very large numbers? What is a number of people anyway?

I’m not sure what you mean by this. I don’t believe in linearity of suffering: that would be the claim that 2 people tortured for 1 day is the same as 1 person tortured for 2 days, and that’s ridiculous. I believe in comparability of suffering, which is the claim that for some value of N, N people tortured for 1 day is worse than 1 person tortured for 2 days.

Regarding anaesthetics: I would prefer a memory inhibitor for a painful surgery to the absence of one, but I would still strongly prefer to feel less pain during the surgery even if I know I will not remember it one way or the other. Is this preference unusual?

• I believe in comparability of suffering, which is the claim that for some value of N, N people tortured for 1 day is worse than 1 person tortured for 2 days.

This is where the argument for choosing torture falls apart for me, really. I don’t think there is any number of people getting dust specks in their eyes that would be worse than torturing one person for fifty years. I have to assume my utility function over other people is asymptotic; the disutility of choosing to let even an infinity of people get dust specks in their eyes is still less than the disutility of one person getting tortured for fifty years.

I’m not sure what you mean by this. I don’t believe in linearity of suffering: that would be the claim that 2 people tortured for 1 day is the same as 1 person tortured for 2 days, and that’s ridiculous.

I think he’s questioning the idea that two people getting dust specks in their eyes is twice the disutility of one person getting dust specks, and that is the linearity he’s referring to.

Personally, I think the problem stems from dust specks being such a minor inconvenience that they’re basically below the noise threshold. I’d almost be indifferent between choosing for nothing to happen or choosing for everyone on Earth to get dust specks (assuming they don’t cause crashes or anything).

• There’s the question of linearity—but if you use big enough numbers, you can brute-force any nonlinear relationship, as Yudkowsky correctly pointed out some years ago. Take Kindly’s statement:

“There is some pair (N, T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T−1 seconds), but I don’t know the exact values of N and T.”

We can imagine a world where this statement is true (probably for a value of T really close to 1). And we can imagine knowing the correct values of N and T in that world. But even then, if a critical condition is met, it will be true that

“For all values of N, and for all T > 1, there exists a value of A such that torturing N people for T seconds is better than torturing A*N people for T−1 seconds.”

Sure, the value of A may be larger than 10^100... But then, 3^^^3 is already vastly larger than 10^100. And if it weren’t big enough, we could just throw a bigger number at the problem; there is no upper bound on the size of conceivable real numbers. So if we grant the critical condition in question, as Yudkowsky does/did in the original post...

Well, you basically have to concede that “torture” wins the argument, because even if you say that [hugenumber] of dust specks does not equate to a half-century of torture, that is NOT you winning the argument. That is just you trying to bid up the price of half a century of torture.

The critical condition that must be met here is simple, and is an underlying assumption of Yudkowsky’s original post: all forms of suffering and inconvenience are represented by some real-number quantity, with units commensurate to all other forms of suffering and inconvenience.

In other words, “torture one person rather than allow 3^^^3 dust specks” wins, quite predictably, if and only if the ‘pain’ component of the utility function is measured in one and only one dimension.

So the question is, basically: do you measure your utility function in terms of a single input variable?

If you do, then either you bury your head in the sand and develop a severe case of scope insensitivity... or you conclude that there has to be some number of dust specks worse than a single lifetime of torture.

If you don’t, it raises a large complex of additional questions—but so far as I know, there may well be space to construct coherent, rational systems of ethics in that realm of ideas.
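The “thwack any nonlinearity with a bigger number” point can be made concrete (a Python sketch; the harm magnitudes and the exponent 0.001 are arbitrary choices for illustration, and the arithmetic is done in log space since the numbers overflow floats):

```python
import math

speck, torture = 1e-9, 1e9   # hypothetical harms, in one common unit

# An unbounded but extremely sublinear aggregator: total(n) = speck * n**0.001.
# Solve speck * n**0.001 = torture for n in log10 space:
log10_n = math.log10(torture / speck) / 0.001
assert abs(log10_n - 18000.0) < 1e-3   # n = 10^18000 speck-victims suffice
# 3^^^3 dwarfs 10^18000, so even this aggregator ends up preferring torture.

# A *bounded* ("effectively flat") aggregator resists any number of people:
bounded_total = speck * (1 - 1 / (10.0**300 + 1))   # asymptote at speck * 1
assert bounded_total < torture   # the specks can never outweigh the torture
```

This is the dichotomy from the comment above: any aggregator that grows without bound is eventually brute-forced, and only one that is effectively flat (bounded) can hold out against 3^^^3.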

• It occurred to me to add something to my previous comments about the idea of harm being nonlinear, or something that we compute in multiple dimensions that are not commensurate.

One is that any deontological system of ethics automatically has at least two dimensions: one for general-purpose “utilons,” and one for... call them “red flags.” As soon as you accumulate even one red flag, you are doing something capital-W Wrong in that system of ethics, regardless of the number of utilons you’ve accumulated.

The main argument justifying this is, of course, that you may think you have found a clever way to accumulate 3^^^3 utilons in exchange for a trivial amount of harm (torture ONLY one scapegoat!)... but the overall weighted average of all human moral reasoning suggests that people who think they’ve done this are usually wrong. Therefore, best to red-flag such methods, because they usually only sound clever.

Obviously, one may need to take this argument with a grain of salt, or 3^^^3 grains of salt. It depends on how strongly you feel bound to honor conclusions drawn by looking at the weighted average of past human decision-making.

The other observation that occurred to me is unrelated. It is about the idea of harm being nonlinear, which as I noted above is just plain not enough to invalidate the torture/specks argument by itself, due to the ability to keep thwacking a nonlinear relationship with bigger numbers until it collapses.

Take as a thought experiment an alternate Earth where, in the year 1000, population growth has stabilized at an equilibrium level, and will rise back to that equilibrium level in response to a sudden population decrease. The equilibrium level is assumed to be stable in and of itself.

Imagine aliens arriving and killing 50% of all humans, chosen apparently at random. Then they wait until the population has returned to equilibrium (say, 150 years) and do it again. Then they repeat the process twice more.

The world population circa 1000 was about 300 million (roughly), so we estimate that this process would kill 600 million people.

Now consider as an alternative said aliens simply killing everyone, all at once. 300 million dead.

Which outcome is worse?

If harm is strictly linear, we would expect that one death plus one death is exactly as bad as two deaths. By the same logic, 300 megadeaths is only half as bad as 600 megadeaths, and if we inoculate ourselves against hyperbolic discounting...

Well, the “linear harm” theory smacks into a wall, because it is very credible to claim that the extinction of the human species is much worse than merely twice as bad as the extinction of exactly half the human species. Many arguments can be presented, and no doubt have been presented on this very site; the first that comes to mind is that human extinction means the loss of all potential future value associated with humans, not just the loss of present value, or even the loss of some portion of the potential future.

We are forced to conclude that there is a “total extinction” term in our calculation of harm, one that rises very rapidly in an ‘inflationary’ way as the destruction wrought upon humanity reaches and passes a level beyond which the species could not recover. The aliens killing all humans except one is not noticeably better than killing all of them, nor is sparing any population less than a complete breeding population; but once a breeding population is spared, there is a fairly sudden drop in the total quantity of harm.

Now, again, in itself this does not strictly invalidate the torture/specks argument. Assuming that the harm associated with human extinction (or torturing one person) is any finite amount that could conceivably be equalled by adding up a finite number of specks in eyes, then by definition there is some “big enough” number of specks that the aliens would rationally decide to wipe out humanity rather than accept that many specks in that many eyes.

But I can’t recall a similar argument for nonlinear harm measurement being presented in any of the comments I’ve sampled, and I thought it was interesting, so I wanted to mention it.

• I men­tioned du­pli­ca­tion. That in 3^^^3 peo­ple, most have to be ex­act du­pli­cates of one an­other birth to death.

In your ex­tinc­tion ex­am­ple, once you have sub­stan­tially more than the breed­ing pop­u­la­tion, ex­tra peo­ple du­pli­cate some as­pects of your pop­u­la­tion (abil­ity to breed) which causes you to find it less bad.

The other ob­ser­va­tion that oc­curred to me is un­re­lated. It is about the idea of harm be­ing non­lin­ear, which as I noted above is just plain not enough to in­val­i­date the tor­ture/​specks ar­gu­ment by it­self due to the abil­ity to keep thwack­ing a non­lin­ear re­la­tion­ship with big­ger num­bers un­til it col­lapses.

Not ev­ery non-lin­ear re­la­tion­ship can be thwacked with big­ger and big­ger num­bers...
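The point about non-linear relationships can be made concrete with a toy sketch. A linear aggregation of speck-costs can always be “thwacked” past any fixed torture-cost by a big enough N, while a bounded (saturating) aggregation cannot. All magnitudes below are invented for illustration, not taken from the thread:

```python
# Two toy aggregation rules for n dust specks (all magnitudes invented):
TORTURE_COST = 1_000_000.0
SPECK_COST = 1e-9

def linear_total(n):
    # Linear aggregation: grows without bound, so a big enough n
    # always overtakes any fixed threshold.
    return n * SPECK_COST

def bounded_total(n):
    # Saturating aggregation: approaches 1.0 but never exceeds it.
    return 1.0 - 1.0 / (n + 1)

assert linear_total(10**16) > TORTURE_COST      # linear: eventually wins
for n in (10, 10**6, 10**100):
    assert bounded_total(n) < TORTURE_COST      # bounded: never wins
```

Whether total harm behaves like the first function or the second is exactly what the torture/specks disagreement turns on.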

• don’t know the ex­act val­ues of N and T

For one thing N=1 T=1 triv­ially satis­fies your con­di­tion…

I’m not sure what you mean by this.

I mean, sup­pose that you got your­self a func­tion that takes in a de­scrip­tion of what’s go­ing on in a re­gion of space­time, and it spits out a real num­ber of how bad it is.

Now, that function can do all sorts of perfectly reasonable things that could make it asymptotic for large numbers of people. For example, it could be counting distinct subjective experiences in there (otherwise a mind upload running on multiple redundant hardware is a utility monster, despite having a subjective experience identical to the same upload running once. That’s much sillier than the usual utility monster, which at least feels much stronger feelings). This would impose a finite limit (for brains of finite complexity).

One thing that function can’t do is have the general property that f(a ∪ b) = f(a) + f(b), because then we could just subdivide our space into individual atoms, none of which are feeling anything.
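The additivity point can be sketched as a toy model (my construction, not from the thread): if badness must be additive over arbitrary disjoint subdivisions, it is determined by its value on atoms, and atoms feel nothing.

```python
# Toy model: a "region" is a collection of atoms. Requiring strict
# additivity, f(a ∪ b) = f(a) + f(b) for disjoint a and b, forces f
# to equal the sum of per-atom values.

def additive_badness(region, atom_badness):
    # Additivity over every partition pins the total down to this sum.
    return sum(atom_badness(atom) for atom in region)

# No individual atom is feeling anything:
zero_atom = lambda atom: 0.0

brain = range(10**6)  # a brain-sized region, atom by atom
assert additive_badness(brain, zero_atom) == 0.0  # total badness collapses
```

So a sensible badness function has to be defined over larger-than-atomic structure (minds, experiences), which is what blocks naive additivity.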

• For one thing N=1 T=1 triv­ially satis­fies your con­di­tion...

Ob­vi­ously I only meant to con­sider val­ues of T and N that ac­tu­ally oc­cur in the ar­gu­ment we were both talk­ing about.

• Well I’m not sure what’s the point then. What you’re try­ing to in­duct from it.

• Yes, if this is the case (would be nice if Eliezer con­firmed it) I can see where the logic halts from my per­spec­tive :)

Explanatory example, if someone cares:

Tor­tur­ing 10^21 per­sons for 1 minute is bet­ter than tor­tur­ing 10^30 per­sons for 1 sec­ond.

I disagree. From my moral standpoint AND from my utility function, where I am a bystander who perceives all humans as one cooperating system and wants to minimize the damage to it, I think it is better for 10^30 persons to put up with 1 second of intense pain than for a single one to have to survive a whole minute. It is much, much easier to recover from one second of pain than from being tortured for a minute.

And a speck of dust is virtually harmless. The potential harm it may cause should at least POSSIBLY be outweighed by the benefits, e.g. someone not being run over by a car because he stopped and scratched his eye.

• Okay, so let’s zoom in here. What is prefer­able?

Torturing 1 person for 60 seconds

Torturing 100 people for 59 seconds

Torturing 10,000 people for 58 seconds

etc.

Kind of a para­dox of the heap. How many sec­onds of tor­ture are still tor­ture?

And 10^30 is really a lot of people. That’s what Eliezer meant by “scope insensitivity”. And all of them would be really grateful if you spared them their second of pain. Could that be worth a minute of pain?

• The potential harm it may cause should at least POSSIBLY be outweighed by the benefits, e.g. someone not being run over by a car because he stopped and scratched his eye.

That’s fighting the hypothetical. Assume that the speck is such that the harm it causes slightly outweighs the benefits.

• Or the benefits could slightly out­weigh the harm.

You have to treat this option as a net win of 0 then, because you have no more info to go on, so the probabilities are 50/50. Option A: Torture. Net win is negative. Option B: Dust speck. Net win is zero. Make your choice.

• In the Least Convenient Possible World of this hypothetical, every dust speck causes a constant small amount of harm with no knock-on effects (no avoiding buses, no crashing cars...)

• I thought the original point was to focus just on the inconvenience of the dust, rather than simply positing that, out of 3^^^3 people who were dust-specked, one person would’ve gotten something worse than 50 years of torture as a consequence of the dust speck. The latter is not even an ethical dilemma; it’s merely an (entirely baseless but somewhat plausible) assertion about the consequences of dust specks in the eyes.

• Exactly! No knock-on effects. Perhaps you meant to comment on the grandparent (great-grandparent? do I measure from this post or your post?) instead?

• Yeah, clicked the wrong button.

• It’s not (nec­es­sar­ily) about dust specks ac­ci­den­tally lead­ing to ma­jor ac­ci­dents. But if you think that hav­ing a dust speck in your eye may be even slightly an­noy­ing (whether you con­sciously know that or not), the cost you have from hav­ing it fly into your eye is not zero.

Now some­thing not zero mul­ti­plied by a suffi­ciently large num­ber will nec­es­sar­ily be larger than the cost of one hu­man be­ing’s life in tor­ture.
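The arithmetic behind this claim is one line: any nonzero per-person cost times a large enough headcount exceeds any fixed finite cost. A sketch with made-up magnitudes (neither number is a real measurement):

```python
import math

# Made-up magnitudes on an arbitrary disutility scale:
speck_cost = 1e-12    # tiny but strictly nonzero, per person
torture_cost = 1e9    # fixed finite cost of one lifetime of torture

# The headcount at which the specks' total overtakes the torture:
n_needed = math.ceil(torture_cost / speck_cost)
assert n_needed * speck_cost >= torture_cost
# n_needed here is 10^21, which is vanishingly small next to 3^^^3.
```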

• Now you are getting it completely wrong. You can’t add up harm from dust specks if it is happening to different people. Every individual has the capability to recover from it. Think about it. With that logic it is worse to rip a hair from every living being in the universe than to nuke New York. If people in charge reasoned that way we might have harmageddon in no time.

• If

1. Each human death has only finite cost. We sure act this way in our everyday lives, exchanging human lives for the convenience of driving around in cars, etc.

2. By “our universe” you do not mean only the observable universe, but include the Level I multiverse,

then yes, that is the whole point. A tiny amount of suffer­ing mul­ti­plied by a suffi­ciently large num­ber ob­vi­ously is even­tu­ally larger than the fixed cost of nuk­ing New York.

Unless you can tell me why my model for the costs of suffering distributed over multiple people is wrong, I don’t see why I should change it. “I don’t like the conclusions!!!” is not a valid objection.

If peo­ple in charge rea­soned that way we might have har­maged­don in no time.

If they ever justifiably start to reason that way, i.e. if they actually have the power to rip a hair from every living human being, I think we’ll have larger problems than the potential nuking of New York.

• Okay, I was trying to learn from this post, but now I see that I have to try to explain things myself in order for this communication to become useful. When it comes to pain, it is hard to explain why one person’s great suffering is worse than many suffering very, very little if you don’t understand it yourself. So let us change the currency from pain to money.

Let’s say that you and I need to fund a large plantation of algae in order to let Earth’s population escape starvation due to lack of food. This project is of great importance to the whole world, so we can force anyone to become a sponsor, and this is good because we need the money FAST. We work for the whole world (read: Earth) and we want to minimize the damage from our actions. This project is really expensive, however… Should we:

a) Take one dollar from every person around the world earning at least a minimum wage, who can still afford housing, food, etc. even if we take that one dollar?

or should we

b) Take all the money (instantly) from Denmark and watch it break down in bankruptcy?

Asking me, it is obvious that we don’t want Denmark to go bankrupt just because it may annoy some people to have to sacrifice 1 dollar.

• Asking me, it is obvious that we don’t want Denmark to go bankrupt just because it may annoy some people to have to sacrifice 1 dollar.

The trou­ble is that there is a con­tin­u­ous se­quence from

Take $1 from everyone

Take $1.01 from almost everyone

Take $1.02 from almost almost everyone

...

Take a lot of money from very few people (Denmark)

If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly. You will have to say, for instance, that taking $20 each from 1/20 of the population of the world is good, but taking $20.01 each from slightly less than 1/10 of the population of the world is bad. Can you say that?
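The sequence can be generated mechanically, which makes the sorites structure explicit. The step size and population figure below are my own illustrative choices, not from the thread:

```python
# Walk from "take $1 from everyone" toward "take a lot from few people",
# keeping the total amount raised roughly constant.
world_pop = 8_000_000_000
total_needed = world_pop * 1          # what $1 from everyone raises

steps = []
for cents in range(100, 501):         # $1.00 up to $5.00, one cent at a time
    amount = cents / 100
    payers = int(total_needed / amount)
    steps.append((amount, payers))

assert steps[0] == (1.0, 8_000_000_000)
assert steps[-1][0] == 5.0
# Adjacent steps differ by one cent and a sliver of the population, yet
# somewhere along this near-continuum the verdict has to flip.
```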

• You will have to say, for instance, that taking $20 each from 1/20 of the population of the world is good, but taking $20.01 each from slightly less than 1/10 of the population of the world is bad. (emphasis mine)

Typo here?

• If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.

I think my last response starting with YES got lost somehow, so I will clarify here. I don’t follow the sequence because I don’t know where the critical limit is. Why? Because the critical limit depends on other factors which I can’t foresee. Read up on basic global economy. But YES, in theory I can take a little money from everyone without ruining a single one of them, since it balances out; but if I take a lot of money from one person I make him poor. That is how economics works: you can recover from small losses easily, while some are too big to ever recover from, hence why some banks go bankrupt sometimes. And pain is similar, since I can recover from a dust speck in my eye, but not from being tortured for 50 years. The dust specks are not permanent sacrifices. If they were, I agree that they could stack up.

• I don’t follow the sequence because I don’t know where the critical limit is.

You may not know ex­actly where the limit is, but the point isn’t that the limit is at some ex­act num­ber, the point is that there is a limit. There’s some point where your rea­son­ing makes you go from good to bad even though the change is very small. Do you ac­cept that such a limit ex­ists, even though you may not know ex­actly where it is?

• Yes I do.

• So you recognize that your original statement about $1 versus bankruptcy also forces you to make the same conclusion about $20.00 versus $20.01 (or whatever the actual number is, since you don’t know it).

But making the conclusion about $20.00 versus $20.01 is much harder to justify. Can you justify it? You have to be able to, since it is implied by your original statement.

• No, I don’t have to make the same conclusion about 20.00 dollars versus 20.01. I left a safety margin when I said 1 dollar, since I don’t want to follow the sequence but am very, very sure that 1 dollar is a safe number. I don’t know exactly how much I can risk taking from a random individual before I risk ruining him, but if I take only one dollar from a person who can afford a house and food, I am pretty safe.

• No, I don’t have to make the same conclusion about 20.00 dollars versus 20.01

Yes, you do. You just ad­mit­ted it, al­though the num­ber might not be 20. And whether you ad­mit it or not it log­i­cally fol­lows from what you said up above.

• Maybe I didn´t un­der­stand you the first time.

You will have to say, for instance, that taking $20 each from 1/20 of the population of the world is good, but taking $20.01 each from slightly less than 1/10 of the population of the world is bad. Can you say that?

To answer that: well, yes, it MIGHT be the case, I don’t know; therefore I only ask for 1 dollar. Is that making it any clearer?

• Your be­lief about \$1 ver­sus bankruptcy log­i­cally im­plies a similar be­lief about \$20.00 ver­sus \$20.01 (or what­ever the ac­tual num­bers are). You can’t just an­swer that that “might” be the case—if your origi­nal be­lief is as de­scribed, that is the case. You have to be will­ing to defend the log­i­cal con­se­quence of what you said, not just defend the ex­act words that you said.

• What do you mean by “whatever the actual numbers are”? Numbers for what? For the amount it takes to ruin someone? As long as the individual donations don’t ruin the donors, I accept a higher donation from a smaller population. Is that what you mean?

• I just wrote 20 be­cause I have to write some­thing, but there is a num­ber. This num­ber has a value, even if you don’t know it. Pre­tend I put the real num­ber there in­stead of 20.

• Yes, but still, what number? IF it is, as I already suggested, the number for the amount of money that can be taken without ruining anyone, then I agree that we could take that amount of money instead of 1 dollar.

• I don’t think you un­der­stand.

Your original statement about $1 versus bankruptcy logically implies that there is a number such that it is okay to take exactly that amount of money from a certain number of people, but wrong to take a very tiny amount more. Even though you don’t know exactly what this number is, you know that it exists. Because this number is a logical consequence of what you said, you must be able to justify having such a number.

• Yes, in my last comment I agreed to it. There is such a number. I don’t think you understand my reasons why, which I already explained. It is wrong to take a tiny amount more, since that will ruin them. I can’t know exactly what that amount is, since global and local economies aren’t that stable. Tapping out.

• the num­ber for the amount of money that can be taken with­out ru­in­ing anyone

So you’re say­ing there ex­ists such a num­ber, such that tak­ing that amount of money from some­one wouldn’t ruin them, but tak­ing that amount plus a tiny bit more (say, 1 cent) would?

• YES, because that is how economics works! You can’t take a lot of money from ONE person without him getting poor, but you CAN take money from a lot of people without ruining them! Money is a circulating resource, and just like pain, you can recover from small losses after a time.

• If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.

If you think that 100°C water is hot and 0°C water is cold, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.

• My opinion would change grad­u­ally be­tween 100 de­grees and 0 de­grees. Either I would use qual­ifiers so that there is no abrupt tran­si­tion, or else I would con­sider some­thing to be hot in a set of situ­a­tions and the size of that set would de­crease grad­u­ally.

• No, because temperature is (very close to) a continuum, whereas good/bad is binary. To see this more clearly, you can replace the question “Is this action good or bad?” with “Would an omniscient, moral person choose to take this action?”, and you can instantly see the answer can only be “yes” (good) or “no” (bad).

(Of course, it’s not always clear which choice the an­swer is—hence why so many ar­gue over it—but the an­swer has to be, in prin­ci­ple, ei­ther “yes” or “no”.)

• No, be­cause tem­per­a­ture is (very close to) a con­tinuum, whereas good/​bad is a bi­nary.

First, I’m not talk­ing about tem­per­a­ture, but about cat­e­gories “hot” and “cold”.

Se­cond, why in the world would good/​bad be bi­nary?

“Would an om­ni­scient, moral per­son choose to take this ac­tion?”

I have no idea—I don’t know what an om­ni­scient per­son (aka God) will do, and in any case the an­swer is likely to be “de­pends on which moral­ity we are talk­ing about”.

Oh, and would an om­ni­scient be­ing call that wa­ter hot or cold?

• First, I’m not talk­ing about tem­per­a­ture, but about cat­e­gories “hot” and “cold”.

You’ll need to define your terms for that, then. (And for the record, I don’t use the words “hot” and “cold” ex­clu­sively; I also use terms like “warm” or “cool” or “this might be a great tem­per­a­ture for a swim­ming pool, but it’s hor­rible for tea”.)

Also, if you weren’t talk­ing about tem­per­a­ture, why bother men­tion­ing de­grees Cel­sius when talk­ing about “hot­ness” and “cold­ness”? Clearly tem­per­a­ture has some­thing to do with it, or else you wouldn’t have men­tioned it, right?

Se­cond, why in the world would good/​bad be bi­nary?

Be­cause you can always re­place a ques­tion of good­ness with the ques­tion “Would an om­ni­scient, moral per­son choose to take this ac­tion?“.

I have no idea—I don’t know what an om­ni­scient per­son (aka God) will do,

Just because you have no idea what the answer could be doesn’t mean the true answer can fall outside the possible space of answers. For instance, you can’t answer the question “Would an omniscient moral reasoner choose to take this action?” with something like “fish”, because that falls outside of the answer space. In fact, there are only two possible answers: “yes” or “no”. It might be one; it might be the other, but my original point was that the answer to the question is guaranteed to be either “yes” or “no”, and that holds true even if you don’t know what the answer is.

the an­swer is likely to be “de­pends on which moral­ity we are talk­ing about”

There is only one “moral­ity” as far as this dis­cus­sion is con­cerned. There might be other “moral­ities” held by aliens or what­ever, but the hu­man CEV is just that: the hu­man CEV. I don’t care about what the Babyeaters think is “moral”, or the Peb­ble­sorters, or any other alien species you care to sub­sti­tute—I am hu­man, and so are the other par­ti­ci­pants in this dis­cus­sion. The an­swer to the ques­tion “which moral­ity are we talk­ing about?” is pre­sup­posed by the con­text of the dis­cus­sion. If this thread in­cluded, say, Clippy, then your an­swer would be a valid one (al­though even then, I’d rather talk game the­ory with Clippy than moral­ity—it’s far more likely to get me some­where with him/​her/​it), but as it is, it just seems like a rather un­sub­tle at­tempt to dodge the ques­tion.

• In fact, there are only two pos­si­ble an­swers: “yes” or “no”

I don’t think so.

You’re mak­ing a cir­cu­lar ar­gu­ment—good/​bad is bi­nary be­cause there are only two pos­si­ble states. I do not agree that there are only two pos­si­ble states.

There is only one “moral­ity” for the par­ti­ci­pants of this dis­cus­sion.

Really? Either I’m not a par­ti­ci­pant in this dis­cus­sion or you’re wrong. See: a bi­nary out­come :-D

but the hu­man CEV is just that: the hu­man CEV

I have no idea what the human CEV is, and even whether such a thing is possible. I am familiar with the concept, but I have doubts about its reality.

• You’re mak­ing a cir­cu­lar ar­gu­ment—good/​bad is bi­nary be­cause there are only two pos­si­ble states. I do not agree that there are only two pos­si­ble states.

Name a third al­ter­na­tive that is ac­tu­ally an an­swer, as op­posed to some sort of eva­sion (“it de­pends”), and I’ll con­cede the point.

Also, I’m aware that this isn’t your main point, but… how is the ar­gu­ment cir­cu­lar? I’m not say­ing some­thing like, “It’s bi­nary, there­fore there are two pos­si­ble states, there­fore it’s bi­nary”; I’m just say­ing “There are two pos­si­ble states, there­fore it’s bi­nary.”

Either I’m not a par­ti­ci­pant in this dis­cus­sion or you’re wrong. See: a bi­nary out­come :-D

Are you hu­man? (y/​n)

I have no idea what the human CEV is and even whether such a thing is possible. I am familiar with the concept, but I have doubts about its reality.

Which part do you ob­ject to? The “co­her­ent” part, the “ex­trap­o­lated” part, or the “vo­li­tion” part?

• Name a third al­ter­na­tive that is ac­tu­ally an answer

“Doesn’t mat­ter”.

First of all you’re ig­nor­ing the ex­is­tence of morally neu­tral ques­tions. Should I scratch my butt? Lessee, would an om­ni­scient perfectly moral be­ing scratch his/​her/​its butt? Oh dear, I think we’re in trou­ble now… X-D

Se­cond, you’re as­sum­ing atom­ic­ity of ac­tions and that’s a bad as­sump­tion. In your world ac­tions are very limited—they can be done or not done, but they can­not be done par­tially, they can­not be slightly mod­ified or just done in a few differ­ent ways.

Third, you’re as­sum­ing away the un­cer­tainty of the fu­ture and that also is a bad as­sump­tion. Proper ac­tions for an om­ni­scient be­ing can very well be differ­ent from proper ac­tions for some­one who has to face un­cer­tainty with re­spect to con­se­quences.

Fourth, for the great ma­jor­ity of dilem­mas in life (e.g. “Should I take this job?“, “Should I marry him/​her?“, “Should I buy a new phone?“) the an­swer “what an om­ni­scient moral be­ing would choose” is perfectly use­less.

Which part do you ob­ject to?

The con­cept of CEV seems to me to be the di­rect equiv­a­lent of “God’s will”—hand­wave­able in any di­rec­tion you wish while re­tain­ing enough vague­ness to make spe­cific dis­cus­sions difficult or pretty much im­pos­si­ble. I think my biggest ob­jec­tion is to the “co­her­ent” part while also hav­ing great doubts about the “ex­trap­o­lated” part as well.

• would an om­ni­scient perfectly moral be­ing scratch his/​her/​its butt?

(Side note: this con­ver­sa­tion is tak­ing a rather strange turn, but what­ever.)

If its butt feels itchy, and it would pre­fer for its butt to not feel itchy, and the best way to make its butt not feel itchy is to scratch it, and there are no ex­ter­nal moral con­se­quences to its de­ci­sion (like, say, some­one threat­en­ing to kill 3^^^3 peo­ple iff it scratches its butt)… well, it’s in­creas­ing its own util­ity by scratch­ing its butt, isn’t it? If it in­creases its own util­ity by do­ing so and doesn’t de­crease net util­ity el­se­where, then that’s a net in­crease in util­ity. Scratch away, I say.

Se­cond, you’re as­sum­ing atom­ic­ity of ac­tions and that’s a bad as­sump­tion. In your world ac­tions are very limited—they can be done or not done, but they can­not be done par­tially, they can­not be slightly mod­ified or just done in a few differ­ent ways.

Sure. I agree I did just hand­wave a lot of stuff with re­spect to what an “ac­tion” is… but would you agree that, con­di­tional on hav­ing a good defi­ni­tion of “ac­tion”, we can eval­u­ate “ac­tions” morally? (Mo­ral by hu­man stan­dards, of course, not Peb­ble­sorter stan­dards.)

Third, you’re as­sum­ing away the un­cer­tainty of the fu­ture and that also is a bad as­sump­tion. Proper ac­tions for an om­ni­scient be­ing can very well be differ­ent from proper ac­tions for some­one who has to face un­cer­tainty with re­spect to con­se­quences.

Agreed, but if you come up with a way to make good/​moral de­ci­sions in the ideal­ized situ­a­tion of om­ni­science, you can gen­er­al­ize to un­cer­tain situ­a­tions sim­ply by ap­ply­ing prob­a­bil­ity the­ory.

Fourth, for the great ma­jor­ity of dilem­mas in life (e.g. “Should I take this job?“, “Should I marry him/​her?“, “Should I buy a new phone?“) the an­swer “what an om­ni­scient moral be­ing would choose” is perfectly use­less.

Again, I agree… but then, knowl­edge of the Banach-Tarski para­dox isn’t of much use to most peo­ple.

The con­cept of CEV seems to me to be the di­rect equiv­a­lent of “God’s will”—hand­wave­able in any di­rec­tion you wish while re­tain­ing enough vague­ness to make spe­cific dis­cus­sions difficult or pretty much im­pos­si­ble. I think my biggest ob­jec­tion is to the “co­her­ent” part while also hav­ing great doubts about the “ex­trap­o­lated” part as well.

Fair enough. I don’t have enough do­main ex­per­tise to re­ally an­a­lyze your po­si­tion in depth, but at a glance, it seems rea­son­able.

• it’s in­creas­ing its own utility

The as­sump­tion that moral­ity boils down to util­ity is a rather huge as­sump­tion :-)

would you agree that, con­di­tional on hav­ing a good defi­ni­tion of “ac­tion”, we can eval­u­ate “ac­tions” morally?

Con­di­tional on hav­ing a good defi­ni­tion of “ac­tion” and on hav­ing a good defi­ni­tion of “morally”.

you can gen­er­al­ize to un­cer­tain situ­a­tions sim­ply by ap­ply­ing prob­a­bil­ity theory

I don’t think so, at least not “sim­ply”. An om­ni­scient be­ing has no risk and no risk aver­sion, for ex­am­ple.

isn’t of much use to most people

Mo­ral­ity is sup­posed to be use­ful for prac­ti­cal pur­poses. Heated dis­cus­sions over how many an­gels can dance on the head of a pin got a pretty bad rap over the last few cen­turies… :-)

• The as­sump­tion that moral­ity boils down to util­ity is a rather huge as­sump­tion :-)

It’s not an as­sump­tion; it’s a nor­ma­tive state­ment I choose to en­dorse. If you have some other sys­tem, feel free to en­dorse that… but then we’ll be dis­cussing moral­ity, and not meta-moral­ity or what­ever sys­tem origi­nally pro­duced your ob­jec­tion to Jiro’s dis­tinc­tion be­tween good and bad.

on hav­ing a good defi­ni­tion of “morally”

Agree.

An om­ni­scient be­ing has no risk and no risk aver­sion, for ex­am­ple.

Well, it could have risk aver­sion. It’s just that risk aver­sion never comes into play dur­ing its de­ci­sion-mak­ing pro­cess due to its om­ni­science. Strip away that om­ni­science, and risk aver­sion very well might rear its head.

Mo­ral­ity is sup­posed to be use­ful for prac­ti­cal pur­poses. Heated dis­cus­sions over how many an­gels can dance on the head of a pin got a pretty bad rap over the last few cen­turies… :-)

I dis­agree. Take the fol­low­ing two state­ments:

1. Mo­ral­ity, prop­erly for­mal­ized, would be use­ful for prac­ti­cal pur­poses.

2. Mo­ral­ity is not cur­rently prop­erly for­mal­ized.

There is no con­tra­dic­tion in these two state­ments.

• There is no con­tra­dic­tion in these two state­ments.

But they have a con­se­quence: Mo­ral­ity cur­rently is not use­ful for prac­ti­cal pur­poses.

That’s… an in­ter­est­ing po­si­tion. Are you will­ing to live with it? X-)

You can, of course, define morality in this particular way, but why would you do that?

• To see this more clearly, you can re­place the ques­tion, “Is this ac­tion good or bad?” to “Would an om­ni­scient, moral per­son choose to take this ac­tion?“, and you can in­stantly see the an­swer can only be “yes” (good) or “no” (bad).

By that defi­ni­tion, al­most all ac­tions are bad.

Also, why the heck do you think there ex­ist words for “bet­ter” and “worse”?

• By that defi­ni­tion, al­most all ac­tions are bad.

True. I’m not sure why that matters, though. It seems trivially obvious to me that a random action selected out of the set of all possible actions would have an overwhelming probability of being bad. But most agents don’t select actions randomly, so that doesn’t seem to be a problem. After all, the key aspect of intelligence is that it allows you to hit extremely tiny targets in configuration space; the fact that most configurations of particles don’t give you a car doesn’t prevent human engineers from making cars. Why would the fact that most actions are bad prevent you from choosing a good one?

Also, why the heck do you think there ex­ist words for “bet­ter” and “worse”?

Those are relative terms, meant to compare one action to another. That doesn’t mean you can’t classify an action as “good” or “bad”; for instance, if I decided to randomly select and kill 10 people today, that would be an unequivocally bad action, even if it would theoretically be “worse” if I decided to kill 11 people instead of 10. The difference between the two is like the difference between asking “Is this number bigger than that number?” and “Is this number positive or negative?”

• In this case I do not dis­agree with you. The num­ber of peo­ple on earth is sim­ply not large enough.

But if you asked me whether to take money from 3^^^3 peo­ple com­pared to throw­ing Den­mark into bankruptcy, I would choose the lat­ter.

Math should override intuition. So unless you give me a model that you can convince me is more reasonable than adding up costs/utilities, I don’t think you will change my mind.

• Now I see what is fundamentally wrong with the article and your reasoning, from MY perspective. You don’t seem to understand the difference between a permanent sacrifice and a temporary one.

If we substitute the dust specks with index fingers, for example, I agree that it is reasonable to think that killing one person is far better than having 3 billion (we don’t need 3^^^3 for this one) persons lose their index fingers, because that is a permanent sacrifice. At least for now, we can’t have fingers grow back just like that. To get dust in your eye, on the other hand, is only temporary. You will get over it real quick and forget all about it. But 50 years of torture is something that you will never fully heal from; it will ruin a person’s life and cause permanent damage.

• That’s ridiculous. So mild pains don’t count if they’re inflicted on many different people?

Let’s give a more ob­vi­ous ex­am­ple. It’s bet­ter to kill one per­son than to am­pu­tate the right hands of 5000 peo­ple, be­cause the to­tal pain will be less.

Scal­ing down, we can say that it’s bet­ter to am­pu­tate the right hands of 50,000 peo­ple than to tor­ture one per­son to death, be­cause the to­tal pain will be less.

Keep repeating this in your head (see how consistent it feels, how it makes sense).

Now just extrapolate to the instance that it’s better to have 3^^^3 people get dust specks in their eyes than to torture one person to death, because the total pain will be less. The hair-ripping argument isn’t good enough because (people on Earth) × (pain from hair rip) < (people in New York) × (pain of being nuked). The math doesn’t add up in your straw-man example, unlike with the actual example given.

As a side note, you are also ap­peal­ing to con­se­quences.
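The inequality invoked in the hair-rip comparison can be checked with purely illustrative magnitudes. None of the figures below (pain values, populations) come from the thread; they only show how the two products compare under plausible-looking assumptions:

```python
# Invented magnitudes on an arbitrary pain scale:
pain_hair_rip = 1e-4        # transient, trivially recoverable
pain_being_nuked = 1e6      # death and destruction, per person

people_on_earth = 8_000_000_000
people_in_new_york = 8_000_000

total_hair = people_on_earth * pain_hair_rip        # 8e5
total_nuke = people_in_new_york * pain_being_nuked  # 8e12

# Under these magnitudes the hair-rip total is vastly smaller, which is
# why the hair/nuke analogy fails where the speck/torture one need not.
assert total_hair < total_nuke
```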

• (people on Earth) × (pain from hair rip) < (people in New York) × (pain of being nuked)

I think Okey­maker was ac­tu­ally refer­ring to all the peo­ple in the uni­verse. While the num­ber of “peo­ple” in the uni­verse (defin­ing a “per­son” as a con­scious mind) isn’t a known num­ber, let’s do as blos­som does and as­sume Okey­maker was refer­ring to the Level I mul­ti­verse. In that case, the calcu­la­tion isn’t nearly as clear-cut. (That be­ing said, if I were con­sid­er­ing a hy­po­thet­i­cal like that, I would sim­ply modus po­nens Okey­maker’s modus tol­lens and re­ply that I would pre­fer to nuke New York.)

• Now, do you have any ac­tual ar­gu­ment as to why the ‘bad­ness’ func­tion com­puted over a box con­tain­ing two per­sons with a dust speck, is ex­actly twice the bad­ness of a box con­tain­ing one per­son with a dust speck, all the way up to very large num­bers (when you may even have ex­hausted the num­ber of pos­si­ble dis­tinct peo­ple) ?

I don’t think you do. This is why this stuff strikes me as pseu­do­math. You don’t even state your premises let alone jus­tify them.

• You’re right, I don’t. And I do not re­ally need it in this case.

What I need is a cost function C(e, n), where e is some event and n is the number of people being subjected to said event (i.e. everyone gets their own), such that for ε > 0: C(e, n+m) > C(e, n) + ε for some m. I guess we can limit e to “torture for 50 years” and “dust specks” so this generally makes sense at all.

The rea­son why I would want to have such a cost func­tion is be­cause I be­lieve that it should be more than in­finites­i­mally worse for 3^^^^3 peo­ple to suffer than for 3^^^3 peo­ple to suffer. I don’t think there should ever be a point where you can go “Meh, not much of a big deal, no mat­ter how many more peo­ple suffer.”

If, however, the number of possible distinct people should be finite—even after taking into account Level II and Level III multiverses—due to discreteness of space and discreteness of permitted physical constants, then yes, this is all null and void. But I currently have no particular reason to believe that there should be such a bound, while I do have reason to believe that the permitted physical constants should come from a non-discrete set.

• Well, within the 3^^^3 people you already have every single possible brain replicated a gazillion times (there are only so many ways you can arrange the atoms in the volume of a human head so as to be computing something subjectively different, after all, and the number of such arrangements is unimaginably smaller than 3^^^3).

I don’t think that, e.g., I must massively prioritize the happiness of a brain upload of me running on multiple redundant hardware. Subjectively it feels the same as if it were running in one instance; it doesn’t feel any stronger because there are more ‘copies’ of it running in perfect unison, and it can’t even tell the difference. The subjective experience isn’t affected if the CPUs running the same computation are slightly physically different.

edit: also, again, pseudomath: you could have C(dustspeck, n) = 1 − 1/(n+1); your property holds, but C is bounded, so if C(torture, 1) = 2 then you’ll never exceed it with dust specks.

Seriously, you people (the LW crowd in general) need to take more calculus or something before your mathematical intuitions become in any way relevant to anything whatsoever. It feels intuitively as though with your epsilon it’s going to keep growing without limit, but that’s simply not true.

• I consider entities in computationally distinct universes to be distinct entities, even if the arrangements of their neurons are the same. If I have an infinite (or sufficiently large) set of physical constants such that in those universes human beings could emerge, I will also have enough human beings.

edit: also, again, pseudomath: you could have C(dustspeck, n) = 1 − 1/(n+1); your property holds, but C is bounded, so if C(torture, 1) = 2 then you’ll never exceed it with dust specks.

No. I will always find a larger number which is at least ε greater. I fixed ε before I talked about n and m. So I find numbers m_1, m_2, … such that C(dustspeck, m_j) > jε.
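The exchange above turns entirely on the order of quantifiers, and can be made concrete. A minimal sketch in Python (function names and the search bound are mine, purely illustrative): the bounded counterexample C(n) = 1 − 1/(n+1) keeps increasing forever, but once ε is fixed in advance it can clear the ε bar only finitely many times, whereas an unbounded cost function clears it indefinitely.

```python
def bounded_cost(n):
    # The proposed counterexample: strictly increasing, but bounded above by 1.
    return 1 - 1 / (n + 1)

def linear_cost(n):
    # An unbounded cost function: each additional person adds a fixed amount.
    return 0.001 * n

def grows_by_epsilon_forever(cost, epsilon, steps=50, start=1):
    """Check the fixed-epsilon property: can we repeatedly find a larger n
    whose cost exceeds the previous cost by at least epsilon?"""
    n, c = start, cost(start)
    for _ in range(steps):
        # search for some m with cost(n + m) >= c + epsilon
        for m in range(1, 10**6):
            if cost(n + m) >= c + epsilon:
                n, c = n + m, cost(n + m)
                break
        else:
            return False  # no such m exists: the property fails here
    return True

# With epsilon fixed in advance, the bounded function eventually stalls
# (it can never gain another 0.1 once it is within 0.1 of its limit),
# while the unbounded one keeps clearing the bar indefinitely.
print(grows_by_epsilon_forever(bounded_cost, 0.1))  # False
print(grows_by_epsilon_forever(linear_cost, 0.1))   # True
```

So both commenters are right about their own reading: with ε chosen after n and m, the bounded function satisfies the property; with ε fixed first, it does not.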

Besides which, even if I had somehow messed up, you’re not here (I hope) to score easy points because my mathematical formalization is flawed when it is perfectly obvious where I want to go.

• Well, in my view, some details of the implementation of a computation are totally indiscernible ‘from the inside’ and thus make no difference to subjective experiences, qualia, and the like.

I definitely don’t care whether there’s 1 me, 3^^^3 copies of me, or 3^^^^3, or 3^^^^^^3, or an actual infinity (as the physics of our universe would suggest), where the copies think and perceive everything exactly the same over their lifetimes. I’m not sure how counting copies as distinct would cope with an infinity of copies anyway. You have torture of inf persons vs dust specks in inf*3^^^3 persons; then what?

It would be quite hilarious, though, to see someone here pick up the idea and start arguing that because they’re ‘important’, there must be a lot of copies of them in the future, and thus they are rightfully a utility monster.

• What is even the problem with the dust specks?

If I told you that a dust speck was about to float into your left eye in the next second, would you (a) take it full in the eye, or (b) blink to keep it out? If you say you would blink, you are implicitly acknowledging that you prefer not getting specked to getting specked, and thereby conceding that getting specked is worse than not getting specked. If you would take it full in the eye, well… you’re weird.

• Consider the flip side of the argument: would you rather get a dust speck in your eye, or have a 1 in 3^^^3 chance of being tortured for 50 years?

We take much greater risks without a moment’s thought every time we cross the street. The chance that a car comes out of nowhere and hits you in just the right way to both paralyze you and cause you incredible pain for the rest of your life may be very small; but it’s probably not smaller than 1 in 10^100, let alone 1 in 3^^^3.

• Wow. The obvious answer is TORTURE, all else equal, and I’m pretty sure this is obvious to Eliezer too. But even though there are 26 comments here, and many of their authors probably know in their hearts torture is the right choice, no one but me has said so yet. What does that say about our abilities in moral reasoning?

• Oh, just had a thought. A less extreme yet quite related real-world situation/question would be this: what is the appropriate punishment for spammers?

Yes, I understand there are a few additional issues here; to make it more analogous, say the potential torturee was planning on deliberately causing all those people a DSE (Dust Speck Event).

But still, the spammer issue gives us a more concrete version, involving quantities that don’t make our brains explode, so considering it may help work out the principles by which these sorts of questions can be dealt with.

• I think this all revolves around one question: is “disutility of a dust speck for N people” = N × “disutility of a dust speck for one person”?

This, of course, depends on the properties of one’s utility function.

How about this… Consider one person getting, say, ten dust specks per second for an hour vs 10 × 60 × 60 = 36,000 people getting a single dust speck each.

This is probably a better way to probe the issue at its core. Which of those situations is preferable? I would probably consider the second. However, I suspect one person getting a billion dust specks in their eye per second for an hour would be preferable to 1,000 people getting a million per second for an hour.

Suffering isn’t linear in dust specks. Well, actually, I’m not sure subjective states in general can be viewed in a linear way. At least, if there is a potentially valid “linear qualia theory”, I’d be surprised.
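The flip in intuitions described above can be reproduced with a toy model: a convex per-person disutility makes spreading specks out better, while a saturating one makes concentrating them better. Both functions below are mine and purely illustrative, a minimal sketch rather than a claim about actual suffering.

```python
import math

def convex(specks):
    # Compounding irritation: each extra speck hurts more than the last.
    return specks ** 2

def saturating(specks):
    # Saturating: past some point, more specks barely add suffering.
    return math.log1p(specks)

def total(per_person, people, specks_each):
    # Total disutility: identical people, so just multiply.
    return people * per_person(specks_each)

# 1 person x 36,000 specks vs 36,000 people x 1 speck each:
print(total(convex, 1, 36_000) > total(convex, 36_000, 1))   # True: spreading is better

# 1 person x (a billion/sec for an hour) vs 1,000 people x (a million/sec for an hour):
print(total(saturating, 1, 3_600_000_000_000)
      < total(saturating, 1000, 3_600_000_000))              # True: concentrating is better
```

Neither model is right in general; the point is only that “suffering isn’t linear” makes the aggregate comparison depend on which regime you are in.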

But as far as the dust specks vs torture thing in the original question? I think I’d go with dust specks for all.

But that’s one person vs a buncha people with dust specks.

• Eliezer, are you suggesting that declining to make up one’s mind in the face of a question that (1) we have excellent reason to mistrust our judgement about and (2) we have no actual need to have an answer to is somehow disreputable?

Yes, I am.

Regarding (1), we pretty much always have excellent reason to mistrust our judgments, and then we have to choose anyway; inaction is also a choice. The null plan is a plan. As Russell and Norvig put it, refusing to act is like refusing to allow time to pass.

Regarding (2), whenever a tester finds a user input that crashes your program, it is always bad—it reveals a flaw in the code—even if it’s not a user input that would plausibly occur; you’re still supposed to fix it. “Would you kill Santa Claus or the Easter Bunny?” is an important question if and only if you have trouble deciding. I’d definitely kill the Easter Bunny, by the way, so I don’t think it’s an important question.

Followup dilemmas:

For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?

For those who would pick TORTURE, what about Vassar’s universes of agonium? Say a googolplex-persons’ worth of agonium for a googolplex years.

• Unless the 3^^^3 people are forming a hive mind, I pick the specks.

I’m terribly inexperienced in translating ethical preferences into money, but in that scenario I wouldn’t pay the penny. A penny can be better used in buying more utility than removing specks from 3^^^3 eyeballs.

• Robin is absolutely wrong, because different instances of human suffering cannot be added together in any meaningful way. The cumulative effect when placed on one person is far greater than the sum of many tiny nuisances experienced by many. Whereas small irritants such as a dust mote do not cause “suffering” in any standard sense of the word, the sum total of those motes concentrated at one time and placed into one person’s eye could cause serious injury or even blindness. Dispersing the dust (either over time or across many people) mitigates the effect. If the dispersion is sufficient, there is actually no suffering at all. To extend the example, you could divide the dust mote into even smaller particles, until each individual would not even be aware of the impact.

So the question becomes: would you rather live in a world with little or no suffering (caused by this particular event), or a world where one person suffers badly, and those around him or her sit idly by, even though they reap little or no benefit from the situation?

The notion of shifting human suffering onto one unlucky individual so that the rest of society can avoid minor inconveniences is morally reprehensible. That (I hope) is why no one has stood up and shouted yay for torture.

• Robin is absolutely wrong, because different instances of human suffering cannot be added together in any meaningful way.

The problem with this claim is that you can construct a series of overlapping comparisons involving experiences that differ but slightly in how painful they are. Then, provided that the series has sufficiently many elements, you’ll reach the conclusion that an experience of pain, no matter how intense, is preferable to arbitrarily many instances of the mildest pain imaginable.

(Strictly speaking, you could actually avoid this conclusion by assuming that painful experiences of a given intensity have diminishing marginal value and that this value converges to a finite quantity. Then, if the limiting value of a very mild pain is less than the value of a single extremely painful experience, the continuity argument wouldn’t work. However, I see no independent motivation for embracing a theory of value of this sort. Moreover, such a theory would have incredible implications, e.g., that to determine how bad someone’s pain is one needs to consider whether sentient beings have already experienced pains of that intensity in remote regions of spacetime.)
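The parenthetical’s escape route can be sketched numerically (the value of r and the torture figure are mine, purely illustrative): if the n-th repetition of a mild pain carries marginal disutility r^n with 0 < r < 1, the total over arbitrarily many instances converges to the geometric-series limit r/(1 − r), so any single harm worse than that limit can never be outweighed by repetitions of the mild one.

```python
def total_disutility(r, instances):
    # Sum of diminishing marginal disutilities r, r^2, ..., r^instances.
    return sum(r ** n for n in range(1, instances + 1))

r = 0.9
limit = r / (1 - r)  # the geometric series converges to 9.0

# Even very many mild pains stay strictly below the limit...
print(total_disutility(r, 10_000) < limit)    # True

# ...so a single harm valued worse than the limit is never outweighed.
torture = 10.0
print(total_disutility(r, 10_000) < torture)  # True
```

This is exactly the structure the commenter finds unmotivated: the conclusion depends entirely on the stipulated convergence, not on anything about the pains themselves.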

• you could actually avoid this conclusion by assuming that painful experiences of a given intensity have diminishing marginal value and that this value converges to a finite quantity.

Yeah, this is a common attempt to avoid this particular repugnant conclusion. This approach leads to conclusions such as that 3^^^3 mildly stabbed toes are better than a single moderately stabbed one. (Because if not, we can construct an unbroken chain of comparable pain experiences from specks to torture.)

However, I see no independent motivation for embracing a theory of value of this sort.

The motivation is there: to make dust specks and torture incomparable. Unfortunately, this approach doesn’t work, as it results in infinitely many arbitrarily defined discontinuities.

• J Thomas: You’re neglecting that there might be some positive side effects for a small fraction of the people affected by the dust specks; in fact, there is some precedent for this. The resulting average effect is hard to estimate, but (considering that dust specks seem mostly to add entropy to the thought processes of the affected persons) it would likely still be negative.

Copying g’s assumption that higher-order effects should be neglected, I’d take the torture. For each of the 3^^^3 persons, the choice looks as follows:

1) a 1/(3^^^3) chance of being tortured for 50 years; 2) a certain dust speck.

I’d definitely prefer the former. That probability is so close to zero that it vastly outweighs the difference in disutility.
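Under the linear expected-disutility framing this commenter is using, the per-person comparison is easy to sketch. The numbers below are mine and purely illustrative; note that 1/3^^^3 is incomparably smaller than the 1/10^100 used here, so the gap only widens, and note also that the whole calculation assumes exactly the linear aggregation other commenters dispute.

```python
from fractions import Fraction

speck_disutility = 1           # disutility of one certain dust speck
torture_disutility = 10 ** 30  # suppose torture is 10^30 times worse (illustrative)

# A torture probability already vastly *larger* than 1 in 3^^^3:
p_torture = Fraction(1, 10 ** 100)

expected_torture = p_torture * torture_disutility
print(expected_torture < speck_disutility)  # True, by ~70 orders of magnitude
```

Using exact rational arithmetic avoids the float underflow that a probability like 10^-100 would otherwise cause.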

• I was very surprised to find that a supporter of the Complexity of Value hypothesis, and the author who warns against simple utility functions, advocates torture using simple pseudo-scientific utility calculus.

My utility function has constraints that prevent me from doing awful things to people, unless it would prevent equally awful things being done to other people. That this is a widely shared moral intuition is demonstrated by the reaction in the comments section. Since you recognize the complexity of human value, my widely shared preferences are presumably valid.

In fact, the mental discomfort caused to people who heard of the torture would swamp the disutility from the dust specks. Which brings us to an interesting question—is morality carried by events, or by information about events? If nobody else knew of my choice, would that make it better?

For a utilitarian, the answer is clearly that the information about morally significant events is what matters. I imagine so-called friendly AI bots built on utilitarian principles doing lots of awful things in secret to achieve their ends.

Also, I’m interested to hear how many of the torturers would change their minds if we kill the guy instead of just torturing him. How far does your “utility is all that matters” philosophy go?

• There’s something really odd about characterizing “torture is preferable to this utterly unrealizable thing” as “advocating torture.”

It’s not obviously wrong… I mean, someone who wanted to advocate torture could start out from that kind of position, and then, once they’d brought their audience along, swap it out for simply “torture is preferable to alternatives”, using the same kind of rhetorical techniques you use here… but that doesn’t seem especially justified in this case. Mostly, it seems like you want to argue that torture is bad whether or not anyone disagrees with you.

Anyway, to answer your question: to a total utilitarian, what matters is total utility-change. That includes knock-on effects, including mental discomfort due to hearing about the torture, and the way torturing increases the likelihood of future torture of others, and all kinds of other stuff. So transmitting information about events is itself an event with moral consequences, to be evaluated by its consequences. It’s possible that keeping the torture a secret would have net positive utility; it’s possible it would have net negative utility.

All of which is why the original thought experiment explicitly left the knock-on effects out, although many people are unwilling or unable to follow the rules of that thought experiment and end up discussing more real-world-plausible variants of it instead (as you do here).

For a utilitarian, the answer is clearly that the information about morally significant events is what matters.

Well, in some bizarre sense that’s true. I mean, if I’m being tortured right now, but nobody has any information from which the fact of that torture can be deduced (not even me), a utilitarian presumably concludes that this is not an event of moral significance. (It’s decidedly unclear in what sense it’s an event at all.)

I imagine so-called friendly AI bots built on utilitarian principles doing lots of awful things in secret to achieve their ends.

Sure, that seems likely.

I’m interested to hear how many of the torturers would change their minds if we kill the guy instead of just torturing him. How far does your “utility is all that matters” philosophy go?

I endorse killing someone over allowing a greater amount of bad stuff to happen, if those are my choices. Does that answer your question? (I also reject your implication that killing someone is necessarily worse than torturing them for 50 years, incidentally. Sometimes it is, sometimes it isn’t. Given that choice, I would prefer to die… and in many scenarios I endorse that choice.)

• There’s something really odd about characterizing “torture is preferable to this utterly unrealizable thing” as “advocating torture.”

You know, in natural language “x is better than y” often has the connotation “x is good”, and people go to great lengths to avoid such wordings if they don’t want that connotation. For example, “‘light’ cigarettes are no safer than regular ones” is logically equivalent to “regular cigarettes are at least as safe as ‘light’ ones”, but I can’t imagine an anti-smoking campaign saying the latter.

• Fair enough. For maximal precision I suppose I ought to have said “I reject your characterization of…” rather than “There’s something really odd about characterizing…”, but I felt some polite indirection was called for.

• Well, in some bizarre sense that’s true. I mean, if I’m being tortured right now, but nobody has any information from which the fact of that torture can be deduced (not even me), a utilitarian presumably concludes that this is not an event of moral significance. (It’s decidedly unclear in what sense it’s an event at all.)

Well, assuming the torture is artificially bounded to absolute impactlessness, then yes, it is irrelevant (in fact, it arguably doesn’t even exist). However, a good rationalist utilitarian will retroactively consider future effects of the torture, supposing it is not so bounded, and once the fact of the torture can be deduced, it does retroactively become a morally significant event from a timeless perspective, if I understand the theory properly.

• The point was not necessarily to advocate torture. It’s to take the math seriously.

In fact, the mental discomfort caused to people who heard of the torture would swamp the disutility from the dust specks.

Just how many people do you expect to hear about the torture? Have you taken seriously how big a number 3^^^3 is? By how many utilons do you expect their disutility to exceed the disutility from the dust specks?

• First, I don’t buy the process of summing utilons across people as a valid one. Lots of philosophers have objected to it. This is a bullet-biting club, and I get that. I’m just not biting those bullets. I don’t think 400 years of criticism of utilitarianism can be solved by biting all the bullets. And in Eliezer’s recent writings, it appears he is beginning to understand this. Which is great. It reduces the odds he becomes a moral monster.

Second, I value things other than maximizing utilons. I got the impression that Eliezer/Less Wrong agreed with me on that, from the Complex Values post and the posts about the evils of paperclip maximizers. So great evils are qualitatively different to me from small evils, even small evils done to a great number of people!

I get what you’re trying to do here. You’re trying to demonstrate that ordinary people are innumerate, and you all are getting a utility spike from imagining you’re more rational than them by choosing the “right” (naive hyper-rational utilitarian-algebraist) answer. But I don’t think it’s that simple when we’re talking about morality. If it were, the philosophical project that’s lasted 2,500 years would finally be over!

• You were the one who claimed that the mental discomfort from hearing about the torture would swamp the disutility from the dust specks—I assumed from that that you thought they were commensurable. I thought it was odd that you thought they were commensurable but thought the math worked out in the opposite direction.

I believe Eliezer’s post was not so much directed at folks who disagree with utilitarianism—rather, it’s supposed to be about taking the math seriously, for those who are utilitarians. If you’re not a utilitarian, you can freely regard it as another reductio.

You don’t have to be any sort of simple or naive utilitarian to encounter this problem. As long as goods are in any way commensurable, you need to actually do the math. And it’s hard to make a case for a utilitarianism in which goods are not commensurable—in practice, we can spend money toward any sort of good, and we don’t favor spending money only on the highest-order ones, which strongly suggests commensurability.

• Eliezer, a problem seems to be that the speck does not serve the function you want it to in this example, at least not for all readers. In this case, many people see a special penny, because there is some threshold value below which the least bad bad thing is not really bad. The speck is intended to be an example of the least bad bad thing, but we give it a badness rating of one minus .9-repeating.

(This seems to happen to a lot of arguments. “Take x, which is y.” Well, no, x is not quite y, so the argument breaks down and the discussion follows some tangent. The Distributed Republic had a good post on this, but I cannot find it.)

We have a special penny because there is some amount of eye dust that becomes noticeable and could genuinely qualify as the least bad bad thing. If everyone on Earth gets this decision at once, and everyone suddenly gets >6,000,000,000 specks, that might be enough to crush all our skulls (how much does a speck weigh?). Somewhere between that and “one speck, one blink, ever” is a special penny.

If we can just stipulate “the smallest unit of suffering (or negative qualia, or your favorite term),” then we can move on to the more interesting parts of the discussion.

I also see a qualitative difference if there can be secondary effects, or if summation causes secondary effects. As noted above, if 3^^^3/10^20 people die due to freakishly unlikely accidents caused by blinking, the choice becomes trivial. Similarly, +0.000001 °C sums somewhat differently than specks. One speck/day/person for 3^^^3 days is still not an existential risk; 3^^^3 specks at once will kill everyone.

(I still say Kyle wins.)

• The first thing I thought when I read this question was that the dust specks were obviously preferable. Then I remembered that my intuition likes to round 3^^^3 down to something around twenty. Obviously, the dust specks are preferable to the torture for any number at all that I have any sort of intuitive grasp of.

But I found an argument that pretty much convinced me that the torture was the correct answer.

Suppose that instead of making this choice once, you will be faced with the same choice 10^17 times over the next fifty years. (This number was chosen so that it is more than a million choices per second.) If you have a problem imagining the ability to make more than a million choices per second, imagine that you have a dial in front of you which goes from zero to 10^17. If you set the dial to n, then 10^17 − n people will be tortured, starting now, for the next fifty years, and n dust specks will fly into the eyes of each of 3^^^3 people during the next fifty years.

The dial starts at zero. For each unit that you turn the dial up, you are saving one person from being tortured by putting a dust speck in the eyes of each of the 3^^^3 people—the exact choice presented.

So, if you thought the correct answer was the dust specks, you’d turn the dial from zero to one, right? And then you’d turn it from one to two, right?

But if you turned the dial all the way up to 10^17, you’d effectively be rubbing the corneas of the 3^^^3 people with sandpaper for fifty years. (Of course, their corneas would wear through, and their eyes would come apart under that sort of abrasion. It would probably take less than a million dust specks per second to do that, but let’s be conservative and make them smaller dust specks.) Even if you don’t count the pain involved, they’d be blind forever. How many people would you blind in order to save one person from being tortured for fifty years? You probably wouldn’t blind everyone on Earth to save that one person from being tortured, and yet there are (3^^^3)/(10^17) >> 7*10^9 people being blinded for each person you have saved from torture.

So if your answer was the dust specks, you’d either end up turning the knob all the way up to 10^17, or you’d have to stop somewhere, because there’s no escaping that in this scenario there’s a real dial in front of you, and you have to turn it to some n between 0 and 10^17.

If you left the dial on, say, 10^10, I’d ask: “Tell me, what is so special about the difference between hitting someone with 10^10 dust specks versus hitting them with 10^10 + 1 that wasn’t special about the difference between hitting them with zero versus one?” If anything, the more dust specks there are, the less of a difference one more would make.

There are easily 10^17 continuous gradations between no inconvenience and having one’s eyes turned to pulp, and I don’t really see what would make any of them terribly different from each other. Yet n = 0 is obviously preferable to n = 10^17, and so each individual increment of n must be bad.
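The closing step of this argument is essentially the pigeonhole principle, and it holds for any monotone model of total badness, linear or not: if the endpoints differ, at least one single increment must carry at least the average share of the difference. A sketch with a much smaller dial (the badness function and the dial size are mine, purely illustrative):

```python
STEPS = 10 ** 5  # a small stand-in for the 10^17 gradations in the comment

def badness(n):
    # Any monotone model of total badness will do; this concave one is
    # purely illustrative (each extra speck matters less than the last).
    return (n / STEPS) ** 0.5

gap = badness(STEPS) - badness(0)  # the endpoints clearly differ (= 1.0)

# By pigeonhole, at least one single increment n -> n+1 must be at least
# gap / STEPS, i.e. strictly positive: the increments can't all "round to 0".
largest = max(badness(n + 1) - badness(n) for n in range(STEPS))
print(largest >= gap / STEPS)  # True
```

What pigeonhole does not establish, as the replies below point out, is that every increment is equally bad; that requires the linearity assumption.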

• This has nothing to do with the original question. You rephrased it so that it now asks whether you’d rather torture one person or 3^^^3. Of course you’d rather torture one person than 3^^^3. That does not equal choosing between torturing one person and 3^^^3 people getting dust specks in their eyes for a fraction of a second.

• The reasoning here seems very broken to me (I have no opinion on the conclusion yet):

Look at a version of the reverse dial. Say that you start with 3^^^3 people having 1,000,000 dust specks a second rubbed in their eyes, and 0 people tortured. Each time you turn the dial up by 1, one person is moved over from the “speck in the eye” list to the “tortured for 50 years” list, and the frequency is reduced by 1 speck/second. Would you turn the dial up to 1,000,000?

• So because there is a continuum between the right answer (lots of torture) and the wrong answer (3^^^3 horribly blinded people), you would rather blind those people?

• Nah, he was pretty clearly challenging the use of induction in the above post.

The larger problem is assuming linearity in an obviously nonlinear situation—this also explains why the induction appears to work either way. Applying 1 pound of force to someone’s kneecap is simply not 1/10th as bad as applying 10 pounds of force to someone’s kneecap.

• Kyle wins.

Absent using this to guarantee the nigh-endless survival of the species, my math suggests that 3^^^3 beats anything. The problem is that the speck rounds down to 0 for me.

There is some minimum threshold below which it just does not count, like saying, “What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?” I suppose there must be someone among the 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0. As for the speck, I am going to blink in the next few seconds anyway.

That in no way addresses the intent of the question, since we can just increase the harm to the minimum that does not round down. Being poked with a blunt stick? Still hard, since I think every human being would take one stick over some poor soul being tortured. Do I really get to be the moral agent for 3^^^3 people?

As others have said, our moral intuitions do not work with 3^^^3.

• There is some minimum threshold below which it just does not count, like saying, “What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?” I suppose there must be someone among the 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0.

Why would that round down to zero? That’s a lot more people having cancer than getting nuked!

(It would be hilarious if Zubon could actually respond after almost a decade.)

• If even one in a hundred billion of the people is driving and has an accident because of the dust speck and gets killed, that’s a tremendous number of deaths. If one in a hundred quadrillion of them survives the accident but is mangled and spends the next 50 years in pain, that’s also a tremendous amount of torture.

If one in a hundred decillion of them is working in a nuclear power plant and the dust speck makes him have a nuclear accident…

We just aren’t designed to think in terms of 3^^^3. It’s too big. We don’t habitually think much about one-in-a-million chances, much less one in a hundred decillion. But a hundred decillion is a very small number compared to 3^^^3.
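Just how small can be made concrete. Even 3^^4 = 3^(3^^3), still unimaginably far below 3^^^3 (a tower of 3s stacked 3^^3 layers high), has a digit count we can compute directly, and it already dwarfs a hundred decillion (10^35):

```python
from math import floor, log10

t3 = 3 ** 3 ** 3  # 3^^3 = 3^27 (exponentiation is right-associative)
print(t3)         # 7625597484987

# 3^^4 = 3^(3^^3) is far too large to materialize, but its digit count is
# floor(3^^3 * log10(3)) + 1: roughly 3.6 trillion digits.
digits_next = floor(t3 * log10(3)) + 1
print(digits_next > 3 * 10 ** 12)  # True

# A hundred decillion (10^35) has only 36 digits; the *digit counts* alone
# already differ by a factor of about 10^11.
print(digits_next // 36 > 10 ** 10)  # True
```

And 3^^^3 is not one more exponentiation but 3^^3 − 2 further ones, so this comparison understates the gap beyond any intuition.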

• That is an interesting argument (I’ve considered it before), though I think it misses the point of the thought experiment. As I understand it, it’s not about any of the possible consequences of the dust specks, but about specks as (very minor) intrinsically bad things themselves. It’s about whether you’re willing to measure the unpleasantness of getting a dust speck in your eye on the same scale as the unpleasantness of being tortured, as (vastly) different in degree rather than fundamentally different in kind.

• I would say that it is pretty easy to think in terms of 3^^^3. Just assume that everything that could happen due to a dust speck in your eye will happen.

• How do you know that more accidents are caused than avoided by dust specks?

(Of course I realize I’m saying “you” to a 5-year-old comment, but you get the picture.)

• Yes the an­swer is ob­vi­ous. The an­swer is that this ques­tion ob­vi­ously does not yet have mean­ing. It’s like an ink blot. Any mean­ing a per­son might think it has is com­pletely in­side his own mind. Is the inkblot a bunny? Is the inkblot a Grate­ful Dead con­cert? The right an­swer is not merely un­known, be­cause there is no pos­si­ble right an­swer.

A se­ri­ous per­son—one who take moral dilem­mas se­ri­ously, any­way—must learn more be­fore pro­ceed­ing.

The ques­tion is an inkblot be­cause too many cru­cial vari­ables have been left un­speci­fied. For in­stance, in or­der for this to be an in­ter­est­ing moral dilemma I need to know that it is a situ­a­tion that is phys­i­cally pos­si­ble, or else analo­gous to some­thing that is pos­si­ble. Other­wise, I can’t know what other laws of physics or logic ap­ply or don’t ap­ply, and there­fore can’t make an as­sess­ment. I need to know what my po­si­tion is in this uni­verse. I need to know why this power has been in­vested in me. I need to know the na­ture of the tor­ture and who the per­son is who will be tor­tured. I need to con­sider such fac­tors as what the tor­ture may mean to other peo­ple who are aware of it (such as the peo­ple do­ing the tor­ture). I need to know some­thing about the costs and benefits in­volved. Will the per­son be­ing tor­tured know they are be­ing tor­tured? Or can it be ar­ranged that they are born into the tor­ture and con­sider it a nor­mal part of their life. Will the per­son be­ing tor­tured have vol­un­teered to have been tor­tured? Will the dust motes have pep­pered the eyes of all those peo­ple any­way? Will the tor­ture have hap­pened any­way? Will choos­ing tor­ture save other peo­ple from be­ing tor­tured?

It would seem that tor­ture is bad. On the other hand, just be­ing al­ive is a form of tor­ture. Each of us has a Sword of Damo­cles hang­ing over us. It’s called mor­tal­ity. Some peo­ple con­sider it tor­ture when I keep tel­ling them they haven’t finished ask­ing their ques­tion...

• Anon, I de­liber­ately didn’t say what I thought, be­cause I guessed that other peo­ple would think a differ­ent an­swer was “ob­vi­ous”. I didn’t want to prej­u­dice the re­sponses.

• So what do you think?

• He gives his an­swer here.

• Thank you!

• Exactly—if Eliezer had gone out and said what he thought, nothing good would have come of it. The point is to make you think.

• Does this anal­y­sis fo­cus on pure, mono­tone util­ity, or does it in­clude the huge rip­ple effect putting dust specks into so many peo­ple’s eyes would have? Are these peo­ple with nor­mal lives, or cre­ated speci­fi­cally for this one ex­pe­rience?

• The rip­ple effect is real, but as in Pas­cal’s Wager, for ev­ery pos­si­ble situ­a­tion where the timing is crit­i­cal and some­thing bad will hap­pen if you are dis­tracted for a mo­ment, there’s a coun­ter­bal­anc­ing situ­a­tion where the timing is crit­i­cal and some­thing bad will hap­pen un­less you are dis­tracted for a mo­ment, so those prob­a­bly bal­ance out into noise.

• I doubt this.

• Why?

• I think you can be al­lowed to imag­ine that any rip­ple effect caused by some­one get­ting a barely-no­tice­able dust speck in their eyes (per­haps it makes some­one mad enough to beat his dog) would be about the same as that of the tor­ture (per­haps the tor­tur­ers go home and beat their dogs be­cause they’re so de­sen­si­tized to tor­tur­ing).

• I won­der if some peo­ple’s aver­sion to “just an­swer­ing the ques­tion” as Eliezer notes in the com­ments many times has to do with the per­ceived cost of sig­nal­ling agree­ment with the premises.

It’s straight­for­ward to me that an­swer­ing should take the ques­tion at face value; it’s a thought ex­per­i­ment, you’re not be­ing asked to com­mit to a course of ac­tion. And go­ing by the ques­tion as asked the an­swer for any util­i­tar­ian is “tor­ture”, since even a very small in­cre­ment of suffer­ing mul­ti­plied by a large enough num­ber of peo­ple (or an in­finite num­ber) will out­weigh a great amount of suffer­ing by one per­son.

Sig­nal­ling that would be highly prob­le­matic for some peo­ple be­cause of what might be read into our an­swer—does Eliezer ex­pect that sig­nal­ling as­sent here means sig­nal­ling as­sent to other, as-yet-un­known con­clu­sions he’s made about (what­ever is­sue where that bears some re­sem­blance)? Does Eliezer in­tend to cod­ify the terms of this premise into the ba­sis for a de­ci­sion the­ory un­der­ly­ing the cog­ni­tive ar­chi­tec­ture of a pu­ta­tive Friendly AI? Does Eliezer think that the real world, in short, maps to his gedanken­ex­per­i­ment suffi­ciently well that the terms of this sce­nario can mean­ingfully stand in for de­ci­sions made in that do­main by real ac­tors (hu­man or oth­er­wise)?

For my own part I'd be very, very hesitant to signal any of that. Hence I find it difficult to answer the question as asked. It's analogous to my discomfort with the Ticking Time Bomb scenario—by a straight reading of the premise you should trade a finite chance of finding and disabling the bomb, thereby saving a million lives, for the act of torturing the person who planted it. The logic is internally consistent, but it doesn't map to any real-world situation I can plausibly imagine (where torture is not terribly effective at eliciting confessions, and the scenario of a "ticking time bomb with a single suspect unwilling to talk mere minutes beforehand" has AFAIK never happened as presented, and would be extremely difficult to set up).

I rec­og­nize the in­ter­nal con­sis­tency, yet I’m trou­bled by my un­cer­tainty about what the au­thor thinks I’m sign­ing up for when I re­ply.

• I’d take it.
I find your choice/​in­tu­ition com­pletely baf­fling, and I would guess that far less than 1% of peo­ple would agree with you on this, for what­ever that’s worth (surely it’s worth some­thing.) I am a con­se­quen­tial­ist and have stud­ied con­se­quen­tial­ist philos­o­phy ex­ten­sively (I would not call my­self an ex­pert), and you seem to be cling­ing to a very crude form of util­i­tar­i­anism that has been aban­doned by pretty much ev­ery util­i­tar­ian philoso­pher (not to men­tion those who re­ject util­i­tar­i­anism!). In fact, your ar­gu­ment reads like a re­duc­tio ad ab­sur­dum of the point you are try­ing to make. To wit: if we think of things in equiv­a­lent, ad­di­tive util­ity units, you get this re­sult that tor­ture is prefer­able. But that is ab­surd, and I think al­most ev­ery­one would be able to ap­pre­ci­ate the ab­sur­dity when faced with the 3^^^3 lives sce­nario. Even if you gave ev­ery­one a one week lec­ture on scope in­sen­si­tivity.

So… I don’t think I want you to be one of the peo­ple to ini­tially pro­gram AI that might in­fluence my life...

• Wow. Peo­ple sure are com­ing up with in­ter­est­ing ways of avoid­ing the ques­tion.

• The hardships experienced by a man tortured for 50 years cannot compare to a trivial experience massively shared by a large number of individuals—even on the scale that Eli describes. There is no accumulation of experiences, and it cannot be conflated into a larger meta dust-in-the-eye experience; it has to be analyzed as a series of discrete experiences.

As for larger so­cial im­pli­ca­tions, the nega­tive con­se­quence of so many dust specked eyes would be neg­ligible.

• Robin, could you explain your reasoning? I'm curious.

Hu­mans get barely no­tice­able “dust speck equiv­a­lent” events so of­ten in their lives that the num­ber of peo­ple in Eliezer’s post is ir­rele­vant; it’s sim­ply not go­ing to change their lives, even if it’s a gazillion lives, even with a num­ber big­ger than Eliezer’s (even con­sid­er­ing the “but­terfly effect”, you can’t say if the dust speck is go­ing to change them for the bet­ter or worse—but with 50 years of tor­ture, you know it’s go­ing to be for the worse).

Sub­jec­tively for these peo­ple, it’s go­ing to be lost in the static and prob­a­bly won’t even be re­mem­bered a few sec­onds af­ter the event. Tor­ture won’t be lost in static, and it won’t be for­got­ten (if sur­vived).

The al­ter­na­tive to tor­ture is so mild and in­con­se­quen­tial, even if ap­plied to a mind-bog­gling num­ber of peo­ple, that it’s al­most like ask­ing: Would you rather tor­ture that guy or not?

• It seems obvious to me to choose the dust specks, because that would mean the human species would have to exist for an awfully long time for the total number of people to equal that number, and that minimal amount of annoyance would be something they were used to anyway.

• Cook­ing some­thing for two hours at 350 de­grees isn’t equiv­a­lent to cook­ing some­thing at 700 de­grees for one hour.

Cale­do­nian has made a great anal­ogy for the point that is be­ing made on ei­ther side. May I over-work it?

They are not equiv­a­lent, but there is some length of time at 350 de­grees that will burn as badly as 700 de­grees. In 3^^^3 sec­onds, your lasagna will be … okay, en­tropy will have con­sumed your lasagna by then, but it turns into a cloud of smoke at some point.

Cor­rect me if I am wrong here, but I don’t think there is any length of time at 75 or 100 de­grees that will burn as badly as one hour at 700 de­grees. It just will not cook at all. Your food will sit there and rot, rather than burn­ing.

There must be some min­i­mum tem­per­a­ture at which var­i­ous things can burn. Given enough time at that tem­per­a­ture, it is the equiv­a­lent of just set­ting it on fire. Below that tem­per­a­ture, it is qual­i­ta­tively differ­ent. You do not get bronze no mat­ter how long you leave cop­per and tin at room tem­per­a­ture.

(Or maybe I am wrong there. Maybe a couple of molecules will move properly at room temperature over a few centuries, so the whole mass becomes bronze in less than 3^^^3 seconds. I assume that anything physically possible will happen at some point in 3^^^3 seconds.)

Are there any SPECKS ad­vo­cates who say we should pick two peo­ple tor­tured for 49.5 years rather than one for 50 years? If there is any de­gree of sum­ma­tion pos­si­ble, 3^^^3 will get us there.

But, SPECKS can reply, there can be levels across which summation is not possible. If lasagna physically cannot burn at 75 degrees, even letting it "cook" for 33^^^^33 seconds, then it will never be as badly burned as one hour at 700 degrees.

“Did I say 75?” TORTURE replies. “I meant what­ever the min­i­mum pos­si­ble is for lasagna to burn, plus 1/​3^^3 de­grees.” SPECKS must grant vic­tory in that case, but wins at 2/​3^^3 de­grees lower.

Which just re­turns the whole thing back to the pri­mor­dial ques­tion-beg­ging on ei­ther side, whether specks can ever sum to tor­ture. If any num­ber of be­ings need­ing to blink ever adds to 10 sec­onds of tor­ture, TORTURE is in a very strong po­si­tion, un­less you are again ar­gu­ing that 10 sec­onds of TORTURE is like 75 de­grees, and there is some magic penny some­where.

(Am I com­pletely wrong? Aren’t physics and chem­istry full of magic pen­nies like es­cape ve­loc­i­ties and tem­per­a­tures needed for phys­i­cal re­ac­tions?)

TORTURE must argue that yes, it is the sort of thing that adds. SPECKS must argue that it is like asking how many blades of grass you must add to get a battleship. "Mu."

• Michael Vas­sar:
Well, in the prior com­ment, I was com­ing at it as an ego­ist, as the ex­am­ple de­mands.
It’s to­tally clear to me that a sec­ond of tor­ture isn’t a billion billion billion times worse than get­ting a dust speck in my eye, and that there are only about 1.5 billion sec­onds in a 50 year pe­riod. That leaves about a 10^10 : 1 prefer­ence for the tor­ture.
I reject the notion that each (time, utility) event can be calculated in the way you suggest. Successive speck-type experiences for an individual (or 1,000 successive dust specks for 1,000,000 individuals) over the time period we are talking about would easily overtake 50 years of torture. It doesn't make sense to tally (total human disutility of torture (1 person/50 years in this case)) × (some quantification of the disutility of a time unit of torture) vs. (total human speck disutility) × (some quantification of a unit of speck disutility).
The universe is made up of distinct beings (animals included), not the sum of utilities (which is just a useful construct).
All of this is to say:
If we are to choose for our­selves be­tween these sce­nar­ios, I think it is in­cred­ibly bizarre to pre­fer 3^^^3 satis­fy­ing lives and one in­de­scrib­ably hor­rible life to 3^^^3 in­finites­i­mally bet­ter lives than the al­ter­na­tive 3^^^3 lives. I think do­ing so ig­nores ba­sic hu­man psy­chol­ogy, from whence our prefer­ences arise.

• g: that’s ex­actly what I’m say­ing. In fact, you can show some­thing stronger than that.

Sup­pose that we have an agent with ra­tio­nal prefer­ences, and who is min­i­mally eth­i­cal, in the sense that they always pre­fer fewer peo­ple with dust specks in their eyes, and fewer peo­ple be­ing tor­tured. This seems to be some­thing ev­ery­one agrees on.

Now, be­cause they have ra­tio­nal prefer­ences, we know that a bounded util­ity func­tion con­sis­tent with their prefer­ences ex­ists. Fur­ther­more, the fact that they are min­i­mally eth­i­cal im­plies that this func­tion is mono­tone in the num­ber of peo­ple be­ing tor­tured, and mono­tone in the num­ber of peo­ple with dust specks in their eyes. The com­bi­na­tion of a bound on the util­ity func­tion, plus the mono­ton­ic­ity of their prefer­ences, means that the util­ity func­tion has a well-defined limit as the num­ber of peo­ple with specks in their eyes goes to in­finity. How­ever, the ex­is­tence of the limit doesn’t tell you what it is—it may be any value within the bounds.

Con­cretely, we can sup­ply util­ity func­tions that jus­tify ei­ther choice, and are con­sis­tent with min­i­mal ethics. (I’ll as­sume the bound is the [0,1] in­ter­val.) In par­tic­u­lar, all di­su­til­ity func­tions of the form:

U(T, S) = A(T/​(T+1)) + B(S/​(S+1))

satisfy min­i­mal ethics, for all pos­i­tive A and B such that A plus B is less than one. Since A and B are free pa­ram­e­ters, you can choose them to make ei­ther specks or tor­ture preferred.

Like­wise, Robin and Eliezer seem to have an im­plicit di­su­til­ity func­tion of the form

U_ER(T, S) = AT + BS

If you nor­mal­ize to get [0,1] bounds, you can make some­thing up like

U’(T, S) = (AT + BS)/​(AT + BS + 1).

Now, note U’ also satis­fies min­i­mal ethics, in that if T is set to 1, then in the limit as S goes to in­finity, U’ will still always go to one and ex­ceed A/​(A+1). So that’s why they tend to have the in­tu­ition that tor­ture is the right an­swer. (In­ci­den­tally, this dis­proves my sug­ges­tion that bounded util­ity func­tions vi­ti­ate the force of E’s ar­gu­ment—but the bounds proved helpful in the end by let­ting us use limit anal­y­sis. So my fo­cus on this point was ac­ci­den­tally cor­rect!)

Now, con­sider yet an­other di­su­til­ity func­tion,

U″(T,S) = (ST + T)/(ST + T + 1)

This is also min­i­mally eth­i­cal, and doesn’t have any of the free pa­ram­e­ters that Tom didn’t like. But this func­tion also always im­plies a prefer­ence for any num­ber of dust specks to even a sin­gle in­stance of tor­ture.

Ba­si­cally, if you think the an­swer is ob­vi­ous, then you have to make some ad­di­tional as­sump­tions about the struc­ture of the ag­gre­gate prefer­ence re­la­tion.
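For concreteness, here is a small numeric check of this argument (a sketch, not from the original comment; it uses 10^100 as a computable stand-in for 3^^^3, and takes U″ in the torture-weighted form (ST + T)/(ST + T + 1), since that is the version under which any number of specks is preferred to a single instance of torture):

```python
# Disutility functions over T people tortured and S people dust-specked.

def U(T, S, A, B):
    # Bounded and minimally ethical for positive A, B with A + B < 1.
    return A * (T / (T + 1)) + B * (S / (S + 1))

def U_prime(T, S, A, B):
    # Normalized version of the implicit linear disutility A*T + B*S.
    return (A * T + B * S) / (A * T + B * S + 1)

def U_dprime(T, S):
    # Parameter-free, torture-weighted form (an assumption; see lead-in).
    return (S * T + T) / (S * T + T + 1)

BIG = 10 ** 100  # stand-in for 3^^^3

# The free parameters A and B can make either option preferred:
assert U(1, 0, A=0.9, B=0.01) > U(0, BIG, A=0.9, B=0.01)   # specks preferred
assert U(1, 0, A=0.01, B=0.9) < U(0, BIG, A=0.01, B=0.9)   # torture preferred

# U' behaves like the linear sum: enough specks outweigh one torture.
assert U_prime(0, BIG, A=1.0, B=1e-12) > U_prime(1, 0, A=1.0, B=1e-12)

# U'' prefers any number of specks to a single torture, with no parameters.
assert U_dprime(0, BIG) < U_dprime(1, 0)
```

All four assertions pass, which is the closing point in miniature: minimally ethical, bounded disutility functions can go either way until you add further assumptions about the aggregate preference relation.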

• It’s truly amaz­ing the con­tor­tions many peo­ple have gone through rather than ap­pear to en­dorse tor­ture. I see many at­tempts to re­define the ques­tion, cat­e­gor­i­cal an­swers that ba­si­cally ig­nore the scalar, and what Eliezer called “mo­ti­vated con­tinu­a­tion”.

One type of dodge in par­tic­u­lar caught my at­ten­tion. Paul Gow­der phrased it most clearly, so I’ll use his text for refer­ence:

…de­pends on the fol­low­ing three claims:

a) you can un­prob­le­mat­i­cally ag­gre­gate plea­sure and pain across time, space, and in­di­vi­d­u­al­ity,

“Un­prob­le­mat­i­cally” vastly over­states what is re­quired here. The ques­tion doesn’t re­quire un­prob­le­matic ag­gre­ga­tion; any slight ten­dency of ag­gre­ga­tion will do just fine. We could stipu­late that pain ag­gre­gates as the hun­dredth root of N and the ques­tion would still have the same an­swer. That is an in­sanely mod­est as­sump­tion, ie that it takes 2^100 peo­ple hav­ing a dust mote be­fore we can be sure there is twice as much suffer­ing as for one per­son hav­ing a dust mote.

“b” is ac­tu­ally in­ap­pli­ca­ble to the stated ques­tion and it’s “a” again any­ways—just add “type” or “mode” to the sec­ond con­junc­tion in “a”.

c) it is a moral fact that we ought to se­lect the world with more plea­sure and less pain.

I see only three pos­si­bil­ities for challeng­ing this, none of which af­fects the ques­tion at hand.

• Fa­vor a desider­a­tum that roughly al­igns with “plea­sure” but not quite, such as “health”. Not a prob­lem.

• Fo­cus on some spe­cial situ­a­tion where pain­ing oth­ers is ar­guably de­sir­able, such as de­ter­rence, “nega­tive re­in­force­ment”, or re­tribu­tive jus­tice. ISTM that’s already been ideal­ized away in the ques­tion for­mu­la­tion.

• Just don’t care about oth­ers’ util­ity, eg Rand-style self­ish­ness.

• The “Rand-style self­ish­ness” mars an oth­er­wise sound com­ment.

• I think one of the reasons I finally chose specks is that, contrary to what was implied, the suffering does not simply "add up": 3^^^3 people getting one dust speck in their eye is most certainly not equal to one person getting 3^^^3 dust specks in his eyes. It's not "3^^^3 units of disutility, total", it's one unit of disutility per person.

That still doesn't really answer the "one person for 50 years or two people for 49 years" question, though—by my reasoning, the second option would be preferable, while obviously the first option is the preferable one. I might need to come up with a guideline stating that only experiences of suffering within a few orders of magnitude are directly comparable with each other, or some such, but it does feel like a crude hack. Ah well.

If statis­tics are be­ing gath­ered, I’m a sec­ond year cog­ni­tive sci­ence stu­dent.

• Eliezer, it’s the com­bi­na­tion of (1) to­tally un­trust­wor­thy brain ma­chin­ery and (2) no im­me­di­ate need to make a choice that I’m sug­gest­ing means that with­hold­ing judge­ment is rea­son­able. I com­pletely agree that you’ve found a bug; con­grat­u­la­tions, you may file a bug re­port and add it to the many other bug re­ports already on file; but how do you get from there to the con­clu­sion that the right thing to do is to make a choice be­tween these two op­tions?

When I read the ques­tion, I didn’t go into a coma or be­come psy­chotic. I didn’t even join a crazy re­li­gion or start beat­ing my wife. If for some rea­son I ac­tu­ally had to make such a choice, I still wouldn’t go nuts. So I think analo­gies with crash­ing soft­ware are in­ap­pro­pri­ate. (Again, I don’t deny that there’s a valid bug re­port. I’m just ques­tion­ing its sever­ity.)

So what we have here is an ar­chi­tec­tural prob­lem with the soft­ware, which pro­duces a failure mode in which in­put rad­i­cally differ­ent from any that will ever ac­tu­ally be sup­plied pro­vokes a small user-in­ter­face glitch. It would be nice to fix it, but it doesn’t strike me as un­rea­son­able if it doesn’t make it through some peo­ple’s triage.

(Santa Claus ver­sus the Easter Bunny is much nearer to be­ing a re­al­is­tic ques­tion, and so far as I can tell there isn’t any­thing in my men­tal ma­chin­ery that fun­da­men­tally isn’t equipped to con­sider it. Kill the bunny.)

• Robin's answer hinges on "all else being equal." That condition can tie up a lot of loose ends; it smooths over plenty of rough patches. But those ends unravel pretty quickly once you start to consider all the ways in which everything else is inherently unequal. I happen to think the dust speck is a 0 on the disutility meter, myself, and 3^^^3 × 0 disutilities = 0 disutility.

• What if it were a re­peat­able choice?

Suppose you choose dust specks, say, 1,000,000,000 times. That's a considerable amount of torture inflicted on 3^^^3 people. I suspect that you could find the number of times equivalent to torturing each of those 3^^^3 people for 50 years, and that number would be smaller than 3^^^3. In other words, choose the dust speck enough times, and more people would be tortured, effectively, for longer than if you chose the 50-year torture an equivalent number of times.

If that math is cor­rect, I’d have to go with the tor­ture, not the dust specks.

• Likewise, if this was iterated 3^^^3+1 times (i.e. 3^^^3 plus the reader), it could easily be 50 × 3^^^3 (i.e. > 3^^^3+1) people tortured. The odds are, if it's possible for you to make this choice, then unless you have reason to believe otherwise, they may too, making this an implicit prisoner's dilemma of sorts. On the other side, 3^^^3 specks could possibly crush you, and/or your local cluster of galaxies, into a black hole, so there's that to consider if you consider the life within meaningful distance of every one of those 3^^^3 people valuable.

• I’m not sure I fol­low your ar­gu­ment.

I’m go­ing to as­sume that for a sin­gle per­son, 3^^3 dust specks = 50 years of tor­ture. (My ear­lier figure seems wrong, but 3^^3 dust specks over 50 years is a lit­tle un­der 5,000 dust specks per sec­ond.) I’m go­ing to ig­nore the +1 be­cause these are big num­bers already.

If this were iter­ated 3^^^3 times, then we have the choice be­tween:

TORTURE: 3^^^3 peo­ple are each tor­tured for 50 years, once.

DUST SPECKS: 3^^^3 peo­ple are tor­tured for 50 years, re­peated (3^^^3)/​(3^^3)=3^(3^^3-3^3) times.
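As a side note, the up-arrow arithmetic being traded back and forth here is easy to get wrong; a tiny evaluator (a sketch, safe only for very small arguments—anything like 3^^^3 is hopeless) can at least keep the notation straight:

```python
def up(a, n, b):
    """Knuth's a ↑^n b: n=1 is ordinary exponentiation, and
    a ↑^n b = a ↑^(n-1) (a ↑^n (b-1)), with a ↑^n 0 = 1."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

assert up(3, 1, 3) == 27             # 3^3
assert up(3, 2, 3) == 7625597484987  # 3^^3 = 3^27
assert up(2, 3, 3) == 65536          # 2^^^3 = 2^^4 = 2^2^2^2
# up(3, 3, 3) — 3^^^3 itself — would never terminate on real hardware.
```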

• The probability that I'm the only person selected out of 3^^^3 for such a decision, p(i), is tiny by any reasonable estimate of how many people could be selected, imho. Let's say well below 700 dB against. The chances are much greater that some proportion of those about to be dust-specked or tortured also get this choice (p(k)). p(k) × 3^^^3 > p(i) ⇒ 3^^^3 > p(i)/p(k) ⇒ true for any reasonable p(i)/p(k)

So this means that the effective number of dust particles given to each of us is going to be roughly (1-p(i)) × p(k) × 3^^^3.

I'm going to assume any amount of dust larger in mass than a few orders of magnitude above the Chandrasekhar limit (1e33 kg) is going to result in a black hole. I can even assume a significant error margin in my understanding of how black holes work, and the results do not change.

The smallest dust particle is probably a single hydrogen atom (really, everything resolves to hydrogen at small enough quantities, right?). 1 mol of hydrogen weighs about 1 gram, so each speck weighs (1 gram/mol) / (6e23 specks/mol) × (1e-3 kg/g) ≈ 1.7e-27 kg. So (1-p(i)) × p(k) × 3^^^3 specks × (1.7e-27 kg/speck) / (1e33 kg/black hole) = roughly (3^^^3) × (~1e-60) = roughly 3^^^3 black holes.

ie 3^(3_1^3_2^3_3^...^3_7e13 − 125) = roughly 3^(3_1^3_2^3_3^...^3_7e13)

ie 3_1^3_2^3_3^...^3_7e13 − 125 = roughly 3_1^3_2^3_3^...^3_7e13.
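A quick numeric check of that conversion factor, under this comment's own assumptions (hydrogen specks at 1 gram per mol, Avogadro's number ≈ 6.022e23 specks per mol, and 1e33 kg per black hole): the factor comes out near 1.7e-60 black holes per speck, i.e. a shift of only about 125 in the top exponent of the base-3 tower, which at this scale changes nothing—3^^^3 times any fixed factor is still effectively 3^^^3.

```python
import math

AVOGADRO = 6.022e23              # hydrogen atoms ("specks") per mol
KG_PER_SPECK = 1e-3 / AVOGADRO   # 1 g/mol of hydrogen, converted to kg
KG_PER_BLACK_HOLE = 1e33         # the comment's assumed threshold

holes_per_speck = KG_PER_SPECK / KG_PER_BLACK_HOLE
print(holes_per_speck)               # ~1.7e-60 black holes per speck
print(math.log(holes_per_speck, 3))  # ~ -125: the shift in the tower's top exponent
```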

In conclusion, I think at this level, I would choose 'cancel' / 'default' / 'roll a die and determine the choice randomly/not choose', BUT would woefully update my concept of the size of the universe to contain enough mass to even support a reasonably infinitesimal probability of some proportion of 3^^^3 specks of dust, and 3^^^3 people, or at least some reasonable proportion thereof.

The question I have now is: how is our model of the universe to update given this moral dilemma? What is the new radius of the universe given this situation? It can't be big enough for 3^^^3 dust specks piled on the edge of our universe outside of our light cone somewhere. Either way, I think the new radius ought to be termed the "Yudkowsky Radius".

• I don’t re­ally care what hap­pens if you take the dust speck liter­ally; the point is to ex­em­plify an ex­tremely small di­su­til­ity.

• I suppose you could view the utility as a meaningful object in this frame and abstract away the dust, too, but in the end the dust-utility system is going to encompass both anyway, so solving the problem on either level is going to solve it on both.

• Let me at­tempt to shut up and mul­ti­ply.

Let's make the assumption that a single second of torture is equivalent to 1 billion dust specks to the eye. Since that many dust specks is enough to sandblast your eye, it seems a reasonable approximation.

This means that 50 years of this torture is equivalent to giving 1 single person (50 × 365.25 × 24 × 60 × 60 × 1,000,000,000) dust specks to the eye.

Ac­cord­ing to Google’s calcu­la­tor,

(50 × 365.25 × 24 × 60 × 60 × 1,000,000,000)/(3^39) = 0.389354356
(50 × 365.25 × 24 × 60 × 60 × 1,000,000,000)/(3^38) = 1.16806307

Ergo, if someone convinces you it's 50 years of Torture, or 3^^3 (= 3^27) people get Specks, pick Specks.

But if someone convinces you it's 50 years of Torture, or (3^50) people get Specks, pick Torture.

This ap­pears to be a fair at­tempt to shut up and mul­ti­ply.
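The Google-calculator arithmetic above can be reproduced exactly (integer math until the final division; the billion-specks-per-second equivalence is, of course, this comment's own assumption):

```python
SPECKS_PER_TORTURE_SECOND = 10 ** 9                      # assumed above
SECONDS_IN_50_YEARS = 50 * 36525 * 24 * 60 * 60 // 100   # 365.25-day years

# Total speck-equivalents of 50 years of torture, under the assumption.
total_specks = SECONDS_IN_50_YEARS * SPECKS_PER_TORTURE_SECOND

print(total_specks / 3 ** 39)  # ≈ 0.389 (< 1: torture is the lesser harm vs 3^39 specks)
print(total_specks / 3 ** 38)  # ≈ 1.168 (> 1: specks are the lesser harm at 3^38)
```

So the crossover sits between 3^38 and 3^39 people, matching the two printed ratios.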

How­ever, 3^^^3 is in­com­pre­hen­si­bly big­ger than any of that.

You could turn every atom in the observable universe into a speck of dust. At Wikipedia's almost 10^80 atoms, that is still not enough dust. http://en.wikipedia.org/wiki/Observable_universe

You could turn every cubic Planck length in the observable universe into a speck of dust. At Answerbag's 2.5 × 10^184 cubic Planck lengths, that's still not enough dust. http://www.answerbag.com/q_view/33135

At this point, I thought maybe that another universe made of 10^80 computronium atoms is running universes like ours as simulations on individual atoms. That means 10^80 × 2.5 × 10^184 cubic Planck lengths of dust. But that's still not enough dust. Again: 2.5 × 10^264 specks of dust is still WAY less than 3^^^3.

At this point, I considered checking whether I could get enough dust specks if I literally converted everything in all Everett branches since the big-bang beginning of time into dust, but my math abilities fail me. I'll try coming back to this later.

Edit: My mul­ti­pli­ca­tion sym­bols were get­ting turned into Ital­ics. Should be fixed now.

• “Fol­low­ing your heart and not your head—re­fus­ing to mul­ti­ply—has also wrought plenty of havoc on the world, his­tor­i­cally speak­ing. It’s a ques­tion­able as­ser­tion (to say the least) that con­don­ing ir­ra­tional­ity has less dam­ag­ing side effects than con­don­ing tor­ture.”

I’m not re­ally con­vinced that mul­ti­pli­ca­tion of the dust-speck effect is rele­vant. Sub­jec­tive ex­pe­rience is re­stricted to in­di­vi­d­u­als, not col­lec­tives. To me, this spe­cific ex­er­cise re­duces to a sim­pler ques­tion: Would it be bet­ter (more eth­i­cal) to tor­ture in­di­vi­d­ual A for 50 years, or in­flict a dust speck on in­di­vi­d­ual B?

If the goal is to be a utilitarian ethicist with the well-being of humanity as your highest priority, then something may be wrong with your model when the vast majority of humans would choose the option that you wouldn't. (As I suspect they would.) Utility isn't all that matters to most people. Is utilitarianism the only "real" ethics?

My crit­i­cisms can some­times come across the wrong way. (And I know that you ac­tu­ally do care about hu­man­ity, Eli.) I don’t mean to judge here, just strongly dis­agree. Not that I re­tract what I wrote; I don’t.

• ...ex­cept that, if I’m right about the bi­ases in­volved, the Speck­ists won’t be hor­rified at each other.

If you trade off thirty sec­onds of wa­ter­board­ing for one per­son against twenty sec­onds of wa­ter­board­ing for two peo­ple, you’re not visi­bly tread­ing on a “sa­cred” value against a “mun­dane” value. It will rouse no moral in­dig­na­tion.

In­deed, if I’m right about the bias here, the Speck­ists will never be able to iden­tify a dis­crete jump in util­ity across a sin­gle neu­ron firing, even though the tran­si­tion from dust speck to tor­ture can be bro­ken up into a se­ries of such jumps. There’s no differ­ence of a sin­gle neu­ron firing that leads to the feel­ing of a com­par­i­son be­tween a sa­cred and an un­sa­cred value. The feel­ing of sa­cred­ness, it­self, is quan­ti­ta­tive and comes upon you in grad­ual in­cre­ments of neu­rons firing—even though it sup­pos­edly de­scribes a util­ity cliff with a slope higher than 3^^^3.

The pro­hi­bi­tion against tor­ture is clearly very sa­cred, and a dust speck is clearly very un­sa­cred, so there must be a cliff sharper than 3^^^3 be­tween them. But the dis­tinc­tion be­tween one dust speck and two dust specks doesn’t seem to in­volve a com­par­i­son be­tween a sa­cred and mun­dane value, and the dis­tinc­tion be­tween 50 and 49.99 years of tor­ture doesn’t seem to in­volve a com­par­i­son be­tween a sa­cred and a mun­dane value...

• So we're left with cyclical preferences. The one will trade 3 people suffering 49.99 years of torture for 1 person suffering 50 years of torture; after having previously traded 9 people suffering 49.98 years of torture for 3 people suffering 49.99 years of torture; and so on back to the starting point where it's better for 3^999999999 people to feel two dust specks than for 3^1000000000 people to feel one dust speck; right after, a moment before, having traded one person suffering 50 years of torture for 3^1000000000 people feeling one dust speck.

• I think it's worse for 3^999999999 people to feel two dust specks than for 3^1000000000 people to feel one dust speck. After all, the next step is that it is worse for 3^1000000000 people to feel one dust speck than for 3^1000000001 people to feel less than one dust speck, which seems right.

• I think that we "speckists" see injuries as poisons: they can destroy people's lives only if they reach a certain concentration. So a greater but far more diluted pain can be less dangerous than a smaller but more concentrated one. 50 and 49 years of torture are still far over the threshold. One or two dust specks, on the other hand, are far below.

• Zubon, we could for­mal­ize this with a tiered util­ity func­tion (one not or­der-iso­mor­phic to the re­als, but con­tain­ing sev­eral strata each or­der-iso­mor­phic to the re­als).

But then there is a magic penny, a sin­gle sharp di­vide where no mat­ter how many googols of pieces you break it into, it is bet­ter to tor­ture 3^^^3 peo­ple for 9.99 sec­onds than to tor­ture one per­son for 10.01 sec­onds. There is a price for de­part­ing the sim­ple util­ity func­tion, and rea­sons to pre­fer cer­tain kinds of sim­plic­ity. I’ll ad­mit you can’t slice it down fur­ther than the es­sen­tially digi­tal brain; at some point, neu­rons do or don’t fire. This rules out di­vi­sions of gen­uine googol­plexes, rather than sim­ple billions of fine gra­da­tions. But if you ad­mit a tiered util­ity func­tion, it will sooner or later come down to one neu­ron firing.

And I’ll bet that most Speck­ists dis­agree on which neu­ron firing is the mag­i­cal one. So that for all their hor­ror at us Un­speck­ists, they will be just as hor­rified at each other, when one of them claims that thirty sec­onds of wa­ter­board­ing is bet­ter than 3^^^3 peo­ple poked with nee­dles, and the other dis­agrees.

• If anything is aggregating nonlinearly it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks because each dust speck occurs to a different person.

I find this rea­son­ing prob­le­matic, be­cause in the dust specks there is effec­tively noth­ing to ac­cli­mate to… the amount of in­con­ve­nience to the in­di­vi­d­ual will always be smaller in the speck sce­nario (ex­clud­ing sec­ondary effects, such as the in­di­vi­d­ual be­ing dis­tracted and end­ing up in a car crash, of course).

Which ex­act per­son in the chain should first re­fuse?

Now, this is considerably better reasoning—however, there was no clue to this being a decision that would be selected over and over by countless people. Had it been worded "you among many have to make the following choice...", I could agree with you. But the current wording implied that it was a once-a-universe sort of choice.

• “For those who would pick SPECKS, would you pay a sin­gle penny to avoid the dust specks?”

Yes. Note that, for the ob­vi­ous next ques­tion, I can­not think of an amount of money large enough such that I would rather keep it than use it to save a per­son from tor­ture. As­sum­ing that this is post-Sin­gu­lar­ity money which I can­not spend on other life-sav­ing or tor­ture-stop­ping efforts.

“You prob­a­bly wouldn’t blind ev­ery­one on earth to save that one per­son from be­ing tor­tured, and yet, there are (3^^^3)/​(10^17) >> 7*10^9 peo­ple be­ing blinded for each per­son you have saved from tor­ture.”

This is cheating, to put it bluntly—my utility function does not assign the same value to blinding someone and putting six billion dust specks in everyone's eyes, even though six billion specks are enough to blind people if you force them into their eyes all at once.

"I'd still take the former. (10^(10^100))/(3^^^3) is still so close to zero that there's no way I can tell the difference without getting a larger universe for storing my memory first."

The prob­a­bil­ity is effec­tively much greater than that, be­cause of com­plex­ity com­pres­sion. If you have 3^^^^3 peo­ple with dust specks, al­most all of them will be iden­ti­cal copies of each other, greatly re­duc­ing abs(U(specks)). abs(U(tor­ture)) would also get re­duced, but by a much smaller fac­tor, be­cause the num­ber is much smaller to be­gin with.

• Peo­ple are be­ing tor­tured, and it wouldn’t take too much money to pre­vent some of it. Ob­vi­ously, there is already a price on tor­ture.

• Fas­ci­nat­ing ques­tion. No mat­ter how small the nega­tive util­ity in the dust speck, mul­ti­ply­ing it with a num­ber such as 3^^^3 will make it way worse than tor­ture. Yet I find the ob­vi­ous an­swer to be the dust speck one, for rea­sons similar to what oth­ers have pointed out—the nega­tive util­ity rounds down to zero.

But that doesn’t re­ally solve the prob­lem, for what if the harm in ques­tion was slightly larger? At what point does it cease round­ing down? I have no mean­ingful crite­ria to give for that one. Ob­vi­ously there must be a point where it does cease do­ing so, for it cer­tainly is much bet­ter to tor­ture one per­son for 50 years than 3^^^3 peo­ple for 49 years.

It is quite coun­ter­in­tu­itive, but I sup­pose I should choose the tor­ture op­tion. My other al­ter­na­tives would be to re­ject util­i­tar­i­anism (but I have no bet­ter sub­sti­tutes for it) or to mod­ify my eth­i­cal sys­tem so that it solves this prob­lem, but I cur­rently can­not come up with an un­prob­le­matic way of do­ing so.

Still, I can’t quite bring my­self to do so. I choose specks, and ad­mit that my eth­i­cal sys­tem is not con­sis­tent yet. (Not that it would be a sur­prise—I’ve no­ticed that all my at­tempts at build­ing en­tirely con­sis­tent eth­i­cal sys­tems tend to cause un­wanted re­sults at one point or the other.)

For those who would pick SPECKS, would you pay a sin­gle penny to avoid the dust specks?

A single penny to avoid one dust speck, or to avoid 3^^^3 dust specks? No to the first one. For the second one, it depends on how often they occurred—if I somehow could live for 3^^^3 years, getting one dust speck in my eye per year, then no. If they actually inconvenienced me, then yes—a penny is just a penny.

• The ob­vi­ous an­swer is TORTURE, all else equal, and I’m pretty sure this is ob­vi­ous to Eliezer too.

That is the straight­for­ward util­i­tar­ian an­swer, with­out any ques­tion. How­ever, it is not the com­mon in­tu­ition, and even if Eliezer agrees with you he is ev­i­dently aware that the com­mon in­tu­ition dis­agrees, be­cause oth­er­wise he would not bother blog­ging it. It’s the con­tra­dic­tion be­tween in­tu­ition and philo­soph­i­cal con­clu­sion that makes it an in­ter­est­ing topic.

• Triv­ial an­noy­ances and tor­ture can­not be com­pared in this quan­tifi­able man­ner. Tor­ture is not only suffer­ing, but lost op­por­tu­nity due to im­pris­on­ment, per­ma­nent men­tal hard­ship, ac­ti­va­tion of pain and suffer­ing pro­cesses in the mind, and a myr­iad of other un­con­sid­ered things.

And even if the torture was ‘to have flecks of dust dropped in your eyes’, you still can’t compare a ‘torturous amount’ applied to one person to a substantial number dropped in the eyes of many people: we aren’t talking about CPU cycles here—we are trying to quantify qualifiables.

If you revised the question and specified exactly how the torture would affect the individual, and how they would react to it, and the same for each of the ‘dust in the eyes’ people (what if one goes blind? what of their mental capacity to deal with the hardship? what of the actual level of moisture in their eyes, and consequently the discomfort being felt?), then, maybe then, we could determine which was the worse outcome, and by how much.

There are simply too many assumptions that we have to make in this, mortal, world to determine the answer to such questions: you might as well ask how many angels dance on the head of a pin. Or you could start more simply and ask: if you were to torture two people in exactly the same way, which one would suffer more, and by how much?

And you no­tice, I haven’t even started to think about the eth­i­cal side of the ques­tion...

• Can you com­pare ap­ples and or­anges? You cer­tainly don’t seem to have much trou­ble when you de­cide how to spend your money at the gro­cery store.

It was rather clear from the con­text that the “dust in the eye” was a very, very minor tor­ture. Peo­ple are not go­ing blind. They are perfectly ca­pa­ble of deal­ing with it. It’s just not 3^^^3 times as minor as the tor­ture.

If you were to tor­ture two peo­ple in ex­actly the same way, they’d suffer about equally. Why do you im­ply that’s some sort of unan­swer­able ques­tion?

If you weren’t talk­ing about the eth­i­cal side, what were you talk­ing about? He wasn’t try­ing to com­pare ev­ery­thing about the two choices, just which was more eth­i­cal. It would be im­pos­si­ble if he didn’t limit it like that.

• And you no­tice, I haven’t even started to think about the eth­i­cal side of the ques­tion...

I’m pretty sure the ques­tion it­self re­volves around ethics, as far as I can tell the ques­tion is: given these 2 choices, which would you con­sider, eth­i­cally speak­ing, the ideal op­tion?

• The dust speck is described as “barely enough to make you notice”, so however many people it would happen to, it seems better than even something far less bad than 50 years of horrible torture. There are so many irritating things that a human barely notices in his/her life; what’s an extra dust speck?

I think I’d trade the dust specks for even a kick in the groin.

But hey, maybe I’m miss­ing some­thing here...

• If 3^^^3 people get dust in their eye, an extraordinary number of people will die. I don’t think even 1% of those affected will die, but perhaps 0.000000000000001% might, if that. But when dealing with numbers this huge, I think the death toll would measure greater than 7 billion. Knowing this, I would take the torture.

• If 3^^^3 peo­ple get dust in their eye, an ex­traor­di­nary num­ber of peo­ple will die.

The premise as­sumes it’s “barely enough to make you no­tice”, which was sup­posed to rule out any other un­pleas­ant side-effects.

• No, I’m pretty sure it makes you no­tice. It’s “enough”. “barely enough”, but still “enough”. How­ever, that doesn’t seem to be what’s re­ally im­por­tant. If I con­sider you to be cor­rect in your in­ter­pre­ta­tion of the dilemma, in that there are no other side effects, then yes, the 3^^^3 peo­ple get­ting dust in their eyes is a much bet­ter choice.

• [T]he 3^^^3 peo­ple get­ting dust in their eyes is a much bet­ter choice.

Can you ex­plain a bit about your moral or de­ci­sion the­ory that would lead you to con­clude that?

• Yes. I be­lieve that be­cause any suffer­ing caused by the 3^^^3 dust specks is spread across 3^^^3 peo­ple, it is of lesser evil than tor­tur­ing a man for 50 years. As­sum­ing there to be no side effects to the dust specks.

• When I par­ti­ci­pated in this de­bate, this post con­vinced me that a util­i­tar­ian must be­lieve that dust specks cause more over­all suffer­ing (or what­ever bad­ness mea­sure you pre­fer). Since I already wasn’t a util­i­tar­ian, this didn’t bother me.

• As a util­i­tar­ian (in broad strokes), I agree, and this doesn’t bother me be­cause this ex­am­ple is so far out of the range of what is pos­si­ble that I don’t ob­ject to say­ing, “yes, some­where out there tor­ture might be a bet­ter choice.” I don’t have to worry about that chang­ing what the an­swer is around these parts.

• That’s not quite what I meant by “ex­plain”—I had un­der­stood that to be your po­si­tion, and was try­ing to get in­sight into your rea­son­ing.

Draw­ing an anal­ogy to math­e­mat­ics, would you say that this is an ax­iom, or a the­o­rem?

If an ax­iom, it clearly must be pro­duced by a schema of some sort (as you clearly don’t have 3^^^3 in­com­press­ible rules in your head). Can you ex­plore some­what the na­ture of that schema?

If a the­o­rem, what sort of ax­ioms, and how ar­ranged, pro­duce it?

• That’s not gen­eral enough to mean very much: it fits a num­ber of de­on­tolog­i­cal moral the­o­ries and a few util­i­tar­ian ones (what the right an­swer within virtue ethics is is far too de­pen­dent on as­sump­tions to mean much), and seems to fit a num­ber of oth­ers if you don’t look too closely. Its val­idity de­pends greatly on which you’ve picked.

As best I can tell the most com­mon util­i­tar­ian ob­jec­tion to TvDS is to deny that Specks are in­di­vi­d­u­ally of moral sig­nifi­cance, which seems to me to miss the point rather badly. Another is to treat var­i­ous kinds of di­su­til­ity as in­com­men­su­rate with each other, which is at least con­sis­tent with the spirit of the ar­gu­ment but leads to some rather weird con­se­quences around the edge cases.

• No-one asked for a gen­eral ex­pla­na­tion.

The best term I have found, the one that seems to de­scribe the way I eval­u­ate situ­a­tions the most ac­cu­rately, is con­se­quen­tial­ism. How­ever, that may still be in­ac­cu­rate. I don’t have a fully re­li­able way to de­ter­mine what con­se­quen­tial­ism en­tails; all I have is Wikipe­dia, at the mo­ment.

I tend to just use cost-benefit anal­y­sis. I also have a men­tal, and quite ar­bi­trary, scale of what things I do and don’t value, and to what de­gree, to avoid situ­a­tions where I am pre­sented with mul­ti­ple, equally benefi­cial choices. I also have a few heuris­tics. One of them es­sen­tially says that given a choice be­tween a loss that is spread out amongst many, and an equal loss di­vided amongst the few, the former is the more moral choice. Does that help?

• It helps me un­der­stand your rea­son­ing, yes. But if you aren’t ar­gu­ing within a fairly con­sis­tent util­i­tar­ian frame­work, there’s not much point in try­ing to con­vince oth­ers that the in­tu­itive op­tion is cor­rect in a dilemma de­signed to illus­trate coun­ter­in­tu­itive con­se­quences of util­i­tar­i­anism.

So far it sounds like you’re tel­ling us that Specks is in­tu­itively more rea­son­able than Tor­ture, be­cause the losses are so small and so widely dis­tributed. Well, yes, it is. That’s the point.

• At what point is util­i­tar­i­anism not com­pletely ar­bi­trary?

• I’m not a moral re­al­ist. At some point it is com­pletely ar­bi­trary. The meta-ethics here are way out­side the scope of this dis­cus­sion; suffice it to say that I find it at­trac­tive as a first ap­prox­i­ma­tion of eth­i­cal be­hav­ior any­way, be­cause it’s a sim­ple way of satis­fy­ing some ba­sic ax­ioms with­out go­ing com­pletely off the rails in situ­a­tions that don’t re­quire Knuth up-ar­row no­ta­tion to de­scribe.

But that’s all a sideline: if the choice of moral the­ory is ar­bi­trary, then ar­gu­ing about the con­se­quences of one you don’t ac­tu­ally hold makes less sense than it oth­er­wise would, not more.

• I be­lieve I sug­gested ear­lier that I don’t know what moral the­ory I hold, be­cause I am not sure of the ter­minol­ogy. So I may, in fact, be a util­i­tar­ian, and not know it, be­cause I have not the vo­cab­u­lary to say so. I asked “At what point is util­i­tar­i­anism not com­pletely ar­bi­trary?” be­cause I wanted to know more about util­i­tar­i­anism. That’s all.

• Ah. Well, in­for­mally, if you’re in­ter­ested in piss­ing the fewest peo­ple off, which as best I can tell is the main point where moral ab­strac­tions in­ter­sect with phys­i­cal re­al­ity, then it makes sense to eval­u­ate the moral value of ac­tions you’re con­sid­er­ing ac­cord­ing to the de­gree to which they piss peo­ple off. That loosely cor­re­sponds to prefer­ence util­i­tar­i­anism: speci­fi­cally nega­tive prefer­ence util­i­tar­i­anism, but ex­tend­ing it to the gen­eral ver­sion isn’t too tricky. I’m not a perfect prefer­ence util­i­tar­ian ei­ther (peo­ple are rather bad at know­ing what they want; I think there are situ­a­tions where what they ac­tu­ally want trumps their stated prefer­ence; but cor­re­spon­dence with stated prefer­ence is it­self a prefer­ence and I’m not sure ex­actly where the in­flec­tion points lie), but that ought to suffice as an out­line of mo­ti­va­tions.

• Thank you.

• The thought ex­per­i­ment is, 3^^^3 bad events, each just so bad that you no­tice their bad­ness. Con­sid­er­ing con­se­quences of the par­tic­u­lar bad thing means that in fact there are other things as well that are de­pend­ing on your choice, and that’s a differ­ent thought ex­per­i­ment.

• That is in no way what was said. Also, the idea of an event that some­how man­ages to have no effect aside from be­ing bad is… in­sanely con­trived. More con­trived than the dilemma it­self.

How­ever, let’s say that in­stead of 3^^^3 peo­ple get­ting dust in their eye, 3^^^3 peo­ple ex­pe­rience a sin­gle nano-sec­ond of de­spair, which is im­me­di­ately erased from their mem­ory to pre­vent any psy­cholog­i­cal dam­age. If I had a choice be­tween that and tor­tur­ing a per­son for 50 years, then I would prob­a­bly choose the former.

• That is in no way what was said. Also, the idea of an event that some­how man­ages to have no effect aside from be­ing bad is… in­sanely con­trived. More con­trived than the dilemma it­self.

The no­tion of 3^^^3 events of any sort is far more con­trived than the elimi­na­tion of knock-on effects of an event. There isn’t enough mat­ter in the uni­verse to make that many dust specks, let alone the eyes to be hit and ner­vous sys­tems to ex­pe­rience it. Of course it’s con­trived. It’s a thought ex­per­i­ment. I don’t as­sert that the origi­nal for­mu­la­tion makes it en­tirely clear; my point is to keep the fo­cus on the ac­tual rele­vant bit of the ex­per­i­ment—if you wan­der, you’re an­swer­ing a less in­ter­est­ing ques­tion.

• I don’t agree. The existence of 3^^^3 people, or 3^^^3 dust specks, is impossible because there isn’t enough matter, as you said. The existence of an event that has only effects that are tailored to fit a particular person’s idea of ‘bad’ does not fit my model of how causality works. That seems like a worse infraction, to me.

How­ever, all of that is ir­rele­vant, be­cause I an­swered the more “in­ter­est­ing ques­tion” in the com­ment you quoted. To be blunt, why are we still talk­ing about this?

• I don’t agree. The existence of 3^^^3 people, or 3^^^3 dust specks, is impossible because there isn’t enough matter, as you said. The existence of an event that has only effects that are tailored to fit a particular person’s idea of ‘bad’ does not fit my model of how causality works. That seems like a worse infraction, to me.

I’m not sure I agree, but “which im­pos­si­ble thing is more im­pos­si­ble” does seem an odd thing to be ar­gu­ing about, so I’ll not go into the rea­sons un­less some­one asks for them.

How­ever, all of that is ir­rele­vant, be­cause I an­swered the more “in­ter­est­ing ques­tion” in the com­ment you quoted. To be blunt, why are we still talk­ing about this?

I meant a more gen­er­al­ized you, in my last sen­tence. You in par­tic­u­lar did in­deed an­swer the more in­ter­est­ing ques­tion.

• Since there was a post on this blog a few days ago about how what seems obvious to the speaker might not be obvious to the listener, I thought I would point out that it was NOT AT ALL obvious to me which should be preferred: torture of one man for 50 years, or a speck of dust for 3^^^3 people. Can you please clarify/update what the point of the post was?

• “...Some may think these trifling mat­ters not worth mind­ing or re­lat­ing; but when they con­sider that tho’ dust blown into the eyes of a sin­gle per­son, or into a sin­gle shop on a windy day, is but of small im­por­tance, yet the great num­ber of the in­stances in a pop­u­lous city, and its fre­quent rep­e­ti­tions give it weight and con­se­quence, per­haps they will not cen­sure very severely those who be­stow some at­ten­tion to af­fairs of this seem­ingly low na­ture. Hu­man felic­ity is pro­duc’d not so much by great pieces of good for­tune that sel­dom hap­pen, as by lit­tle ad­van­tages that oc­cur ev­ery day.”

--Ben­jamin Franklin

• Bravo, Eliezer. Any­one who says the an­swer to this is ob­vi­ous is ei­ther WAY smarter than I am, or isn’t think­ing through the im­pli­ca­tions.

Sup­pose we want to define Utility as a func­tion of pain/​dis­com­fort on the con­tinuum of [dust speck, tor­ture] and in­clud­ing the num­ber of peo­ple af­flicted. We can choose what­ever desider­ata we want (e.g. pos­i­tive real val­ued, mono­tonic, com­mu­ta­tive un­der ad­di­tion).

But what if we choose as one desideratum, “There is no number n large enough such that Utility(n dust specks) > Utility(50 yrs torture)”? What does that imply about the function? It can’t be analytic in n (even if n were continuous). That trivially rules out multiplicative functions.

Would it have sin­gu­lar­i­ties? If so, how would we com­bine util­ity func­tions at sin­gu­lar val­ues? Take limits? How, ex­actly?

Or must dust specks and tor­ture live in differ­ent spaces, and is there no ba­sis that can be used to map one to the other?

The bot­tom line: is it pos­si­ble to con­sis­tently define util­ity us­ing the above desider­a­tum? It seems like it must be so, since the an­swer is ob­vi­ous. It seems like it must not be so, be­cause of the im­pli­ca­tions for the util­ity func­tion as the ar­gu­ments change.

Edit: After discussing with my local meetup, this is somewhat resolved. The above desiderata require the utility to be bounded in the number of people, n. For example, it could be a saturating exponential function. This is self-consistent, but inconsistent with the notion that because experiences are independent, utilities should add.

In­ter­est­ingly, it puts strict math­e­mat­i­cal rules on how util­ity can scale with n.
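The resolution described in the edit (a saturating utility, bounded in n) can be made concrete. A minimal Python sketch, where every constant is a made-up stand-in chosen only to illustrate the shape, not a claim about actual moral values:

```python
import math

# Hypothetical magnitudes, chosen purely for illustration.
TORTURE_DISUTILITY = 1_000_000.0   # disutility of 50 years of torture
SPECK_DISUTILITY = 1e-6            # disutility of one dust speck
CAP = 1000.0                       # bound on total speck disutility; CAP < TORTURE_DISUTILITY

def specks_disutility(n: float) -> float:
    """Saturating (bounded) aggregate disutility of n dust specks.

    Approximately additive (n * SPECK_DISUTILITY) for small n, but it
    asymptotes to CAP as n grows, so no n -- not even 3^^^3 -- can ever
    exceed TORTURE_DISUTILITY.
    """
    return CAP * (1.0 - math.exp(-SPECK_DISUTILITY * n / CAP))

# Nearly linear for everyday numbers of specks:
assert abs(specks_disutility(10) - 10 * SPECK_DISUTILITY) < 1e-9
# But bounded: even an astronomically large n never crosses the cap.
assert specks_disutility(1e30) <= CAP < TORTURE_DISUTILITY
```

This is exactly the trade-off the edit names: the function is self-consistent, but it gives up additivity, since the disutility of the n-th speck depends on how many specks came before it.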

• I don’t see that it’s nec­es­sary—or pos­si­ble, for that mat­ter—for me to as­sign dust specks and tor­ture to a sin­gle, con­tin­u­ous util­ity func­tion. On a scale of di­su­til­ity that in­cludes such events as “be­ing hor­ribly tor­tured,” the di­su­til­ity of a mo­men­tary ir­ri­ta­tion such as a dust speck in the eye has a value of pre­cisely zero—not 0.000...0001, but just plain 0, and of course, 0 x 3^^^3 = 0.

Fur­ther­more, I think the “minor ir­ri­ta­tions” scale on which dust specks fall might in­crease lin­early with the time of ex­po­sure, and would cer­tainly in­crease lin­early with num­ber of in­di­vi­d­u­als ex­posed to it. On the other hand, the di­su­til­ity of tor­ture, given my un­der­stand­ing of how mem­ory and an­ti­ci­pa­tion af­fect peo­ple’s ex­pe­rience of pain, would in­crease ex­po­nen­tially over time from a range of a few microsec­onds to a few days, then level off to some­thing less than a lin­ear in­crease with ac­clima­ti­za­tion over the range of days to years. It would in­crease lin­early with the num­ber of peo­ple suffer­ing a given de­gree of pain for a given amount of time. (All other things be­ing equal, of course. Peo­ple’s pain tol­er­ance varies with age, ex­pe­rience, and ge­net­ics; it would be much worse to in­flict any given amount of pain on a young child than on an adult who’s already gone through, say, Navy S.E.A.L. train­ing, and thus demon­strated a far higher-than-av­er­age pain tol­er­ance.)

Thus, it would be enormously worse to inflict X amount of pain on one individual for sixty minutes than on 60 individuals for one minute each, which in turn would be much worse than inflicting the same pain on 3600 individuals for one second each—and if we could spread it out to a microsecond each for 3,600,000,000 people, the disutility might vanish altogether as the “experience” becomes too brief for the human nervous system to register at all, and thus ceases to be an experience. However, once we get past where acclimatization inflects the curve, it would be much worse to torture 52 people for one week each than to torture one person for an entire year. It might even be worse to torture ten people for one week each than one for an entire year—I’m not sure of the precise values involved in this utility function, and happily, at the fine scale, I’ll probably never need to work them out (the empirical test is possible in principle, of course, but could only be performed in practice by a fiend like Josef Mengele).

There’s also the fact that know­ing many peo­ple can and have en­dured a par­tic­u­lar pain seems to make it more en­durable for oth­ers who are aware of that fact. As Spi­der Robin­son says, “Shared joy is in­creased, shared pain is less­ened”—I don’t know if that re­ally “re­futes en­tropy,” but both of those clauses are true in­di­vi­d­u­ally. That’s part of the rea­son egal­i­tar­i­anism, as other com­menters have pointed out, has pos­i­tive util­ity value.
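The torture-disutility curve this comment proposes (roughly exponential growth over the first days, then slower-than-linear growth once acclimatization sets in) can be sketched as a piecewise function. Every constant here is an assumption for illustration only; the comment itself gives no actual values:

```python
import math

def torture_disutility(t_days: float) -> float:
    """Piecewise sketch of the proposed curve: exponential ramp,
    then sub-linear growth after acclimatization.

    ramp_days and both functional forms are purely illustrative
    assumptions, not values asserted by the comment.
    """
    ramp_days = 3.0  # assumed point where acclimatization begins
    if t_days <= ramp_days:
        return math.exp(t_days) - 1.0
    # continue from the ramp's endpoint, growing slower than linearly
    return (math.exp(ramp_days) - 1.0) + math.sqrt(t_days - ramp_days)

# Disutility keeps rising with duration, but the marginal increase
# falls off sharply once the acclimatization phase begins.
assert torture_disutility(2) < torture_disutility(3) < torture_disutility(10)
```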

• If dust specks have a value of 0, then what’s the small­est amount of dis­com­fort that has a nonzero value in­stead? Use that as your re­place­ment dust speck.

And of course, the di­su­til­ity of tor­ture cer­tainly in­creases in non­lin­ear ways with time. The 3^^^3 is there to make up for that. 50 years of tor­ture for one per­son is prob­a­bly not as bad as 25 years of tor­ture for a trillion peo­ple. This in turn is prob­a­bly not as bad as 12.5 years of tor­ture for a trillion trillion peo­ple (sorry my large num­ber vo­cab­u­lary is lack­ing). If we keep do­ing this (halv­ing the tor­ture length, mul­ti­ply­ing the num­ber of peo­ple by a trillion) then are we always go­ing from bad to worse? And do we ever get to the point where each in­di­vi­d­ual per­son tor­tured ex­pe­riences about as much dis­com­fort as our re­place­ment dust speck?
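The halving ladder above can be tabulated directly. A quick sketch, where the one-second cutoff is an arbitrary stand-in for “speck-length” discomfort and a trillion is 10^12:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # 31,557,600

# Repeatedly halve the torture duration and multiply the number of
# victims by a trillion, until each person's share drops below a second.
years, people, steps = 50.0, 1, 0
while years * SECONDS_PER_YEAR >= 1.0:
    years /= 2
    people *= 10**12
    steps += 1

# After 31 halvings the per-person duration is under one second, and
# the head count is 10**372: vast, yet still nowhere near 3^^^3.
assert steps == 31
assert people == 10**372
```

So the ladder reaches speck-scale durations after only a few dozen steps, long before the population gets anywhere close to 3^^^3 people.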

• If dust specks have a value of 0, then what’s the small­est amount of dis­com­fort that has a nonzero value in­stead?

I don’t know exactly where I’d make the qualitative jump from the “discomfort” scale to the “pain” scale. There are so many different kinds of unpleasant stimuli, and it’s difficult to compare them. For electric shock, say, there’s probably a particular curve of voltage, amperage and duration below which the shock would qualify as discomfort, with a zero value on the pain scale, and above which it becomes pain (I’ll even go so far as to say that for short periods of contact, the voltage and amperage values lie between those of a violet wand and those of a stun gun). For localized heat, I think it would have to be at least enough to cause a small first-degree burn; for localized cold, enough to cause the beginnings of frostbite (i.e. a few living cells lysed by the formation of ice crystals in their cytoplasm). For heat and cold over the whole body, it would have to be enough to overcome the body’s natural thermostat, initiating hypothermia or heatstroke.

It occurs to me that I’ve purposefully endured levels of discomfort I would probably regard as pain with a non-zero value on the torture scale if it was inflicted on me involuntarily, as a result of working out at the gym (which has an expected payoff in health and appearance, of course), and from wearing an IV for two 36-hour periods in a pharmacokinetic study for which I’d volunteered (it paid $500); I would certainly do so again, for the same inducements. Choice makes a big difference in our subjective experience of an unpleasant stimulus.

50 years of tor­ture for one per­son is prob­a­bly not as bad as 25 years of tor­ture for a trillion peo­ple.

Of course not; by the scale I posited above, 50 years for one per­son isn’t even as bad as 25 years for two peo­ple.

If we keep do­ing this (halv­ing the tor­ture length, mul­ti­ply­ing the num­ber of peo­ple by a trillion) then are we always go­ing from bad to worse?

No, but the length has to get pretty tiny (prob­a­bly some­where be­tween a mil­lisec­ond and a microsec­ond) be­fore we re­verse the di­rec­tion.

And do we ever get to the point where each in­di­vi­d­ual per­son tor­tured ex­pe­riences about as much dis­com­fort as our re­place­ment dust speck?

Yes, we do; in fact, we even­tu­ally get to a point where each per­son “tor­tured” ex­pe­riences no dis­com­fort at all, be­cause the ner­vous sys­tem is not in­finitely fast nor in­finitely sen­si­tive. If you’re us­ing tem­per­a­ture for your tor­ture, heat trans­fer hap­pens at a finite speed; no mat­ter how hot or cold the ma­te­rial that touches your skin, there’s a pos­si­ble time of con­tact short enough that it wouldn’t change your skin tem­per­a­ture enough to cause any dis­com­fort at all. Even an elec­tric shock could be brief enough not to reg­ister.

• The idea that the utility should be continuous is mathematically equivalent to the idea that an infinitesimal change on the discomfort/pain scale should give an infinitesimal change in utility. If you don’t use that axiom to derive your utility function, you can have sharp jumps at arbitrary pain thresholds. That’s perfectly OK—but then you have to choose where the jumps are.

• I think that’s prob­a­bly more prac­ti­cal than try­ing to make it con­tin­u­ous, con­sid­er­ing that our ner­vous sys­tems are in­ca­pable of per­ceiv­ing in­finites­i­mal changes.

• Yes, we are run­ning on cor­rupted hard­ware at about 100 Hz, and I agree that defin­ing broad cat­e­gories to make first-cut de­ci­sions is nec­es­sary.

But if we were de­sign­ing a moral­ity pro­gram for a su­per-in­tel­li­gent AI, we would want to be as math­e­mat­i­cally con­sis­tent as pos­si­ble. As shminux im­plies, we can con­struct patholog­i­cal situ­a­tions that ex­ploit the par­tic­u­lar choice of dis­con­ti­nu­ities to yield un­wanted or in­con­sis­tent re­sults.

• then you have to choose where the jumps are

It could be worse than that: there might not be a way to choose the jumps con­sis­tently, say, to in­clude differ­ent kinds of dis­com­fort, some re­lated to phys­i­cal pain and oth­ers not (tick­ling? itch­ing? an­guish? en­nui?)

• In other words, it fol­lows that 1 per­son be­ing tor­tured for 50 years is bet­ter than 3^^^3 peo­ple be­ing tor­tured for a mil­lisec­ond.

You’re well on your way to the dark side.

• I might have to bring it up to a minute or two be­fore I’d give you that—I per­ceive the ex­po­nen­tial growth in di­su­til­ity for ex­treme pain over time dur­ing the first few min­utes/​hours/​days as very, very steep. Now, if we posit that the peo­ple in­volved are im­mor­tal, that would change the equa­tion quite a bit, be­cause fifty years isn’t pro­por­tion­ally that much more than fifty sec­onds in a life that lasts for billions of years; but as­sum­ing the pre­sent hu­man lifes­pan, fifty years is the bulk of a per­son’s life. What du­ra­tion of tor­ture qual­ifies as a literal fate worse than (im­me­di­ate) death, for a hu­man with a life ex­pec­tancy of eighty years? I’ll posit that it’s more than five years and less than fifty, but be­yond that I wouldn’t care to try to choose.

Let’s step away from out­right tor­ture and look at some­thing differ­ent: soli­tary con­fine­ment. How long does a per­son have to be locked in a room against his or her will be­fore it rises to a level that would have a non-zero di­su­til­ity you could mul­ti­ply by 3^^^3 to get a higher di­su­til­ity than that of a sin­gle per­son (with a typ­i­cal, pre­sent-day hu­man lifes­pan) locked up that way for fifty years? I’m think­ing, off the top of my head, that non-zero di­su­til­ity on that scale would arise some­where be­tween 12 and 24 hours.

• If get­ting hit by a dust speck has u = 0, then air pres­sure great enough to crush you has u = 0.

• Nope, that doesn’t fol­low; mul­ti­pli­ca­tion isn’t the only pos­si­ble op­er­a­tion that can be ap­plied to this scale.

• If asked independently whether or not I would take a dust speck in the eye to spare a stranger 50 years of torture, I would say “sure”. I suspect most people would if asked independently. It should make no difference to each of those 3^^^3 dust speck victims that there are another (3^^^3)-1 people that would also take the dust speck if asked.

It seems then that there are thresholds in human value. Human value might be better modeled by surreals than reals. In such a system we could represent the utility of 50 years of torture as -Ω and represent the utility of a dust speck in one’s eye as -1. This way, no matter how many dust specks end up in eyes, they don’t add up to torturing someone for 50 years. However, we would still minimize torture, and minimize dust specks.

The greater prob­lem is to ex­hibit a gen­eral pro­ce­dure for when we should treat one fate as be­ing in­finitely worse than an­other, vs. treat­ing it as merely be­ing some finite amount worse.
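The two-tier ordering proposed here can be sketched without actual surreal arithmetic: represent a utility as a (torture, specks) pair compared lexicographically, so the torture component always dominates. All names and numbers below are illustrative only:

```python
from functools import total_ordering

@total_ordering
class Utility:
    """Two-level 'surreal-ish' utility: (torture_units, speck_units).

    Lexicographic comparison means no number of specks ever adds up
    to outweighing torture, yet specks are still minimized among
    options with equal torture.
    """
    def __init__(self, torture=0, specks=0):
        self.v = (torture, specks)

    def __add__(self, other):
        return Utility(self.v[0] + other.v[0], self.v[1] + other.v[1])

    def __eq__(self, other):
        return self.v == other.v

    def __lt__(self, other):
        return self.v < other.v  # Python tuple comparison is lexicographic

torture = Utility(torture=-1)            # plays the role of -Ω
many_specks = Utility(specks=-(10**100)) # an enormous pile of specks

assert torture < many_specks             # torture is worse than any pile of specks
assert many_specks < Utility(specks=-5)  # yet we still prefer fewer specks
```

The open question named in the next paragraph is exactly where this construction punts: nothing in the code tells you *which* fates deserve their own infinitely worse tier.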

• That’s a fairly ma­nipu­la­tive way of ask­ing you to make that de­ci­sion, though. If I were asked whether or not I would take a hard punch in the arm to spare a stranger a bro­ken bone, I would an­swer “sure”, and I sus­pect most peo­ple would, as well. How­ever, it is pretty much clear to me that 3^^^3 peo­ple get­ting punched is much much worse than one per­son break­ing a bone.

• It should make no differ­ence to each of those 3^^^3 dust speck vic­tims that there are an­other (3^^^3)-1 peo­ple that would also take the dust speck if asked.

That rests on the as­sump­tion that each per­son only cares about their own dust speck and the pos­si­ble tor­ture vic­tim. If peo­ple are al­lowed to care about the ag­gre­gate quan­tity of suffer­ing, then this choice might rep­re­sent an Abilene para­dox.

• Here’s a sug­ges­tion: if some­one go­ing through a fate A, is in­ca­pable of notic­ing whether or not they’re go­ing through fate B, then fate A is in­finitely worse than fate B.

• Peo­ple who choose tor­ture, if the ques­tion was in­stead framed as the fol­low­ing would you still choose tor­ture?

“As­sum­ing you know your lifes­pan will be at least 3^^^3 days, would you choose to ex­pe­rience 50 years worth of tor­ture, in­flicted a day at a time at in­ter­vals spread evenly across your life span start­ing to­mor­row, or one dust speck a day for the next 3^^^3 days of your life?”

• Clever, but not, I think, very illu­mi­nat­ing -- 3^^^3 is just as fan­tas­ti­cally, in­tu­ition-break­ingly huge as it ever was, and us­ing the word “to­mor­row” adds a nasty hy­per­bolic dis­count­ing ex­ploit on top of that. All the ba­sic logic of the origi­nal still seems to ap­ply, and so does the con­clu­sion: if a dust speck is in any way com­men­su­rate with tor­ture (a con­di­tion as­sumed by the OP, but de­nied by enough ob­jec­tions that I think it’s worth point­ing out ex­plic­itly), pick Tor­ture, oth­er­wise pick Specks.

One of the frus­trat­ing things about the OP is that most of the ob­jec­tions to it are based on more or less clever in­tu­ition pumps, while the post it­self is es­sen­tially mak­ing a util­i­tar­ian case for ig­nor­ing your in­tu­itions. Tends to lead to a lot of peo­ple talk­ing past each other.

• I’ve heard this rephrasing before, but it means less than you might think. Human instinct tells us to postpone the bad as much as possible. Put aside the dust-speck issue for the moment: let’s compare torture to torture. I’d be tempted to choose 1000 years of torture over a single year of torture, if the 1000 years were a few millions of years in the future, but the single year had to start now.

Does this fact mean I need to concede that 1000 years of torture are less bad than a single year? Surely not. It just illustrates human hyperbolic discounting.

• I would al­most un­doubt­edly choose a dust speck a day for the rest of my life. So would most peo­ple.

The ques­tion re­mains whether that would be the right choice… and, if so, how to cap­ture the prin­ci­ples un­der­ly­ing that choice in a gen­er­al­iz­able way.

For ex­am­ple, in terms of hu­man in­tu­ition, it’s clear that the differ­ence be­tween suffer­ing for a day and suffer­ing for five years plus one day is not the same as the differ­ence be­tween suffer­ing for fifty years and suffer­ing for fifty-five years, nor be­tween zero days and five years. The num­bers mat­ter.

But it’s not clear to me how to pro­ject the prin­ci­ples un­der­ly­ing that in­tu­ition onto num­bers that my in­tu­ition chokes on.

• I would al­most un­doubt­edly choose a dust speck a day for the rest of my life. So would most peo­ple.

Could it be that the 50 years of torture would also amount to more than a dust speck of daily discomfort, via the psychological trauma left by the torture, for the remaining 3^^^3 days?

What if the 50 years of tor­ture come at the end of the lifes­pan?

I still would rather just take the dust speck now and then, though. Nothing forbids me from having a function that grows faster than 3 followed by any number of up-arrows; as a messily wired neural network I can easily implement imprecise algebra on numbers far beyond any up-arrow notation, or even numbers x, y, z… ordered so that no finite multiple of x ever reaches y, no finite multiple of y ever reaches z, and so on. Infinities are not hard to implement at all. Consider lexicographic comparisons on arrays, decided by a[1] first. I use strings when I need that property in software, so that I can always make some value that takes precedence.

edit: Note that one could think of the comparison between real values in the above example as a comparison between a[1]*BIGNUMBER + a[2], which may seem sensible; one could then learn of the up-arrows, get mind-boggled, and reason that the up-arrows in a[2] will be larger than BIGNUMBER. But they will never change the outcome of the comparison under the actual logic, where a[1] always matters more than a[2].
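One way to make this concrete (a sketch using my own names; the comment itself only gestures at arrays and strings) is to note that Python tuples already compare lexicographically:

```python
# An outcome is a pair (torture_count, speck_count).  Python compares
# tuples lexicographically: the first field is decided before the second
# is even examined, so no speck count can ever outweigh torture.
def worse(a, b):
    """Return the worse (lexicographically larger) of two outcomes."""
    return max(a, b)

one_torture = (1, 0)
many_specks = (0, 3 ** 27)   # stand-in for any up-arrow-sized number

assert worse(one_torture, many_specks) == one_torture
# Specks still break ties when the torture counts are equal:
assert worse((0, 5), (0, 4)) == (0, 5)
```

This is exactly the "a[1] always matters more than a[2]" behavior: the second coordinate is consulted only when the first is tied.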

• Sure, if I fac­tor in the knock-on effects of 50 years of tor­ture (or oth­er­wise ig­nore the origi­nal thought ex­per­i­ment and sub­sti­tute my own) I might come to differ­ent re­sults.

Leav­ing that aside, though, I agree that the na­ture of my util­ity func­tion in suffer­ing is ab­solutely rele­vant here, and it’s en­tirely pos­si­ble for that func­tion to be such that BIGNUMBER x SMALLSUFFERING is worth less than SMALLNUMBER x BIGSUFFERING even if BIGNUMBER >>>>>> SMALLNUMBER.

The key word here is pos­si­ble though. I don’t re­ally know that it is.

• Choosing TORTURE is making a decision to condemn someone to fifty years of torture, while knowing that 3^^^3 people would not want you to do so, would beg you not to, and would react with horror and revulsion if/when they knew you did it. And you must do it for the sake of some global principle or something. I’d say it puts one at least into the Well-Intentioned Extremist / Knight Templar category, if not outright villain territory.

If an AI made a choice like that, against the known wishes of practically everyone, I’d say it was rather unfriendly.

• I spent quite a while think­ing about this one, and here is my “an­swer”.

My first line of questioning is “can we just multiply and compare the sufferings?” Well, no. Our utility functions are complicated. We don’t even fully know them. We don’t know exactly which values in them are terminal and which are intermediate. But it’s not just “maximize total happiness” (with suffering being negative happiness). My utility function also values things like fairness (maybe because I’m a primate, but still, I value it). The “happiness” part of my utility function will be higher for torture, the “fairness” part of it lower. Since I don’t know the exact coefficients of those two parts, I can’t really “shut up and multiply”.

But… well… 3^^^3 is really a lot. I can’t get out this way: even adding correcting terms, even if it’s not totally linear, even taking fairness into account, 3^^^3 is still going to trump the 1.

So for any realistic computation I would make of my utility function, it seems that “torture” will score higher than “dust speck”. So I should choose torture? Well, not so fast, for I have ethical rules. What’s an ethical rule? It’s an internal law (somehow a cached thought, from my own computation or from outside) that says “don’t ever do that”. It includes “do not torture!”; it includes “nothing can ever justify torturing someone for 50 years”. What are those rules for? They are there to protect me from making mistakes, because I can’t trust myself fully. I have biases. I don’t have full knowledge. I have a limited amount of time to make decisions, and I only run at 100Hz. So I need safeguards. I need rules that I’ll follow even when my computation tells me I shouldn’t. Those rules can be overridden, but only by something of near-absolute certitude, and of the same (or higher) level. No amount of dust specks can trigger an override of the “no torture” rule. I know my history well enough to know that when you allow yourself to torture, because you’re “sure” that something worse will happen if you don’t, you end up becoming the worse thing. I have high ideals and the will to change the world for the better; therefore I need rules to prevent me from becoming Stalin or the Holy Inquisition. And this is typically such a case. 3^^^3 people will receive dust specks? Well, too bad. Sure, it’ll score lower on my utility function than allowing just one person to be tortured. But I don’t trust myself to sentence that person to be tortured. So I’ll choose dust specks, for me and everyone.

If you allow me to argue from fictional evidence, this reminds me of the end of Asimov’s Robot cycle (Robots and Empire, mostly). Warning: spoilers coming. If you haven’t read it, go read it, and skip the rest of this paragraph ;) When the two robots, Daneel and Giskard, realize the limitations of the First Law: « A robot may not injure a human being or, through inaction, allow a human being to come to harm. », and try to craft the Zeroth Law: « A robot may not harm humanity, or, by inaction, allow humanity to come to harm. », they end up facing a very difficult problem, one they’ll need psychohistory to solve, and even then only partially. It’s relatively easy to know that a human being is in danger or suffering, and how to help them. It’s much, much harder to know that humanity is in danger and how to help it. That’s a deep reason behind ethical rules: torturing someone is just plain wrong. I may think it’s good in some situation, because it’ll prevent a terror attack, or help me win the war against that horrible Enemy, or because it’ll deter crime, or because it’ll save 3^^^3 people from a dust speck. But I just don’t trust myself enough to go as far as torturing someone because I computed it would do good overall.

And the last important point on the issue is social rules. There is, in twenty-first-century western societies at least, a strong taboo on torture. That taboo is a shield. It means that when a president of the USA uses torture, he loses elections (of course it’s much more complicated, but I think it did play a role). It makes using torture a very, very costly strategy. We have the same with political violence. When the cops attacked an anti-war protest at the Charonne metro station on Feb 8, 1962, killing 9 demonstrators including a 16-year-old boy, that was the end of the Algerian war. Of course, it wasn’t just that. De Gaulle was already trying to stop the war; it was lost. But the uproar (nearly half a million people attended the victims’ burial) was so great that the political cost of still supporting the war became much higher, and the end of the war was hastened.

I won’t take the responsibility of weakening those taboos (against torture, against political violence, …) by breaking them myself. The consequences for society, and for further people using more torture later on, are too scary.

So, to conclude, I’ll choose dust specks. Not because my utility function scores higher on dust specks, but because I can’t trust myself enough to wield something as horrible as torture (I have ethical rules, and I’ll follow them even when my computations tell me to do otherwise, for they’re the only safeguard I know against becoming Stalin), and because I value the societal taboo against torture far too much to take the responsibility of lowering it.

Now… I have a feeling of discontent at reaching that conclusion, because it coincides with my initial gut-level reaction to the post. It somehow feels like I wrote the bottom line first and then the rationalization. But… I did my best; I did overcome the first “excuse” (non-linearity and valuing fairness) my mind gave me, and I don’t find flaws in the other two. And well, reversed stupidity is not intelligence: reaching the same conclusion I had intuitively doesn’t always mean it’s a wrong conclusion.

• I doubt any­body’s go­ing to read a com­ment this far down, but what the heck.

Per­haps go­ing from noth­ing to a mil­lion dust specks isn’t a mil­lion times as bad as go­ing from noth­ing to one dust speck. One thing is cer­tain though: go­ing from noth­ing to a mil­lion dust specks is ex­actly as bad as go­ing from noth­ing to one dust speck plus go­ing from one dust speck to two dust specks etc.

If go­ing from noth­ing to one dust speck isn’t a mil­lionth as bad as noth­ing to a mil­lion dust specks, it has to be made up some­where else, like go­ing from 999,999 to a mil­lion dust specks be­ing more than a mil­lionth as bad.

What if the 3^^^3 were also hor­ribly tor­tured for fifty years? Would go­ing from that to that plus a dust speck change ev­ery­thing? It’s now the worst dust speck you’re adding, right?

• Doesn’t “harm”, to a consequentialist, consist of every circumstance in which things could be better, but aren’t? If a speck in the eye counts, then why not, for example, being insufficiently entertained?

If you accept consequentialism, isn’t it morally right to torture someone to death so long as enough people find it funny?

• I’m pick­ing on this com­ment be­cause it prompted this thought, but re­ally, this is a per­va­sive prob­lem: con­se­quen­tial­ism is a gi­gan­tic fam­ily of the­o­ries, not just one. They are all still wrong, but for any sin­gle coun­terex­am­ple, such as “it’s okay to tor­ture peo­ple if lots of peo­ple would be thereby amused”, there is gen­er­ally at least one the­ory or sub­fam­ily of the­o­ries that have that coun­terex­am­ple cov­ered.

• Isn’t it para­dox­i­cal to ar­gue against con­se­quen­tial­ism based on its con­se­quences?

The rea­son you can’t tor­ture peo­ple is that those mem­bers of your pop­u­la­tion who aren’t as dumb as bricks will re­al­ize that the same could hap­pen to them. Such anx­iety among the more in­tel­li­gent mem­bers of your so­ciety should out­weigh the fun ex­pe­rienced by the more eas­ily amused.

• I typ­i­cally ar­gue against con­se­quen­tial­ism based on ap­peals to in­tu­ition and its im­pli­ca­tions, which are only “con­se­quences” in the sense used by con­se­quen­tial­ism if you do some fancy equiv­o­cat­ing.

The rea­son you can’t tor­ture peo­ple is that those mem­bers of your pop­u­la­tion who aren’t as dumb as bricks will re­al­ize that the same could hap­pen to them. Such anx­iety among the more in­tel­li­gent mem­bers of your so­ciety should out­weigh the fun ex­pe­rienced by the more eas­ily amused.

Pfft. It is triv­ially easy to come up with thought ex­per­i­ments where this isn’t the case. You can in­crease the ra­tio of bricks-to-brights un­til do­ing the ar­ith­metic leads to the re­sult that you should go ahead and tor­ture folks. You can choose folks to tor­ture on the ba­sis of well-pub­li­cized, un­com­mon crite­ria, so that the vast ma­jor­ity of peo­ple rightly ex­pect it won’t hap­pen to them or any­one they care about. You can out­right lie to the pop­u­la­tion, and say that the peo­ple you tor­ture are all vol­un­teers (pos­si­bly even masochists who are se­cretly en­joy­ing them­selves) con­tribut­ing to the en­ter­tain­ment of so­ciety for al­tru­is­tic rea­sons. Heck, af­ter you’ve tor­tured them for a while, you can prob­a­bly get them to de­liver speeches about how thrilled they are to be mak­ing this sac­ri­fice for the com­mon morale, on the promise that you’ll kill them quicker if they make it con­vinc­ing.

All that having been said, there are consequentialist theories that do not oblige or permit the torture of some people to amuse others. Among them are side-constraint rights-based consequentialisms, certain judicious applications of deferred-hedon/dolor consequentialisms, and negative utilitarianism (depending on how the entertainment of the larger population cashes out in the math).

• Fol­low­ing your heart and not your head—re­fus­ing to mul­ti­ply—has also wrought plenty of havoc on the world, his­tor­i­cally speak­ing. It’s a ques­tion­able as­ser­tion (to say the least) that con­don­ing ir­ra­tional­ity has less dam­ag­ing side effects than con­don­ing tor­ture.

• I think you’ve con­structed your util­ity wrong in this in­stance. Without los­ing track of scope, we have 3^^^3 motes of dust in 3^^^3 eyes. And yes, that out­weighs 50 years of tor­ture, if and only if peo­ple have zero tol­er­ance. But peo­ple don’t break down into sob­bing messes at the (liter­ally at least) slight­est provo­ca­tion. There is a small thresh­old of bad­ness that can hap­pen to some­one with­out them car­ing, and as long as all 3^^^3 of them only get ep­silon be­low that, the to­tal suffer­ing for all 3^^^3 of them summed is ex­actly 0. We have 3^^^3 peo­ple, and 3^^^3 motes of dust, but also 3^^^3 sep­a­rate emo­tional shock ab­sorbers that take that speck of dust with­out flinch­ing.

It is non-lin­ear. If you keep adding dust, even­tu­ally it starts break­ing peo­ple’s shock ab­sorbers. And once those 3^^^3 peo­ple start ex­pe­rienc­ing nonzero suffer­ing, it would quickly add up to more than fifty man-years of tor­ture. Then the equa­tion stops fa­vor­ing dust motes. And here I hope I have some other re­course, be­cause “If you ever find your­self think­ing that tor­ture is the right thing to do,” is one of my warn­ings. I hope I can come out clever enough to take a third op­tion where no­body gets tor­tured.

• that can hap­pen to some­one with­out them noticing

But Eliezer’s origi­nal de­scrip­tion said this:

sup­pose a dust speck floated into your eye and ir­ri­tated it just a lit­tle, for a frac­tion of a sec­ond, barely enough to make you no­tice be­fore you blink and wipe away the dust speck.

It’s an es­sen­tial part of the setup that the di­su­til­ity of a “dust speck” is not zero.

• I wish I could up­vote this 3^^^3 times.

• And we should be wary to se­lect some­thing or­tho­dox for fear of pro­vok­ing shock and out­rage. Do you have any rea­son to be­lieve that the peo­ple who say they pre­fer TORTURE to SPECKS are mo­ti­vated by the de­sire to prove their ra­tio­nal­ist cre­den­tials, or that they don’t ap­pre­ci­ate that their de­ci­sions have real con­se­quences?

• Evolu­tion seems to have favoured the ca­pac­ity for em­pa­thy (the specks choice) over the ca­pac­ity for util­ity calcu­la­tion, even though util­ity calcu­la­tion would have been a ‘no brainer’ for the brain ca­pac­ity we have.
The whole con­cept re­minds me of the Tur­ing test. Tur­ing, as a math­e­mat­i­cian, just seems to have com­pletely failed to un­der­stand that we don’t as­sign ra­tio­nal­ity, or sen­tience, to an­other ob­ject by de­duc­tion. We do it by anal­ogy.

• Cook­ing some­thing for two hours at 350 de­grees isn’t equiv­a­lent to cook­ing some­thing at 700 de­grees for one hour.

I’d rather ac­cept one ad­di­tional dust speck per life­time in 3^^^3 lives than have one life­time out of 3^^^3 lives in­volve fifty years of tor­ture.

Of course, that’s me say­ing that, with my sin­gle life. If I ac­tu­ally had that many lives to live, I might be­come so bored that I’d opt for the tor­ture merely for a change of pace.

• A brilli­ant idea, Jef! I vol­un­teer you to test it out. Start blow­ing dust around your house to­day.

• Tom, your claim is false. Con­sider the di­su­til­ity function

D(Tor­ture, Specks) = [10 * (Tor­ture/​(Tor­ture + 1))] + (Specks/​(Specks + 1))

Now, with this func­tion, di­su­til­ity in­creases mono­ton­i­cally with the num­ber of peo­ple with specks in their eyes, satis­fy­ing your “slight ag­gre­ga­tion” re­quire­ment. How­ever, it’s also easy to see that go­ing from 0 to 1 per­son tor­tured is worse than go­ing from 0 to any num­ber of peo­ple get­ting dust specks in their eyes, in­clud­ing 3^^^3.

The ba­sic ob­jec­tion to this kind of func­tional form is that it’s not ad­di­tive. How­ever, it’s wrong to as­sume an ad­di­tive form, be­cause that as­sump­tion man­dates un­bounded util­ities, which are a bad idea, be­cause they are not com­pu­ta­tion­ally re­al­is­tic and ad­mit Dutch books. With bounded util­ity func­tions, you have to con­front the ag­gre­ga­tion prob­lem head-on, and de­pend­ing on how you choose to do it, you can get differ­ent an­swers. De­ci­sion the­ory does not af­fir­ma­tively tell you how to judge this prob­lem. If you think it does, then you’re wrong.
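The proposed function is concrete enough to check numerically. A quick sketch (the `huge` stand-in is mine, since 3^^^3 itself won’t fit anywhere):

```python
def disutility(torture, specks):
    # Bounded form from the comment: the torture term saturates at 10,
    # the specks term saturates at 1, so total disutility never exceeds 11.
    return 10 * (torture / (torture + 1)) + specks / (specks + 1)

# Monotonically increasing in the number of specks...
assert disutility(0, 2) > disutility(0, 1)

# ...yet torturing one person (disutility 5) outweighs any number of
# specks, whose term can never reach 1:
huge = 10 ** 100   # stand-in for 3^^^3
assert disutility(1, 0) > disutility(0, huge)
```

This is the claimed behavior: speck disutility aggregates, but against a bound it can never cross.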

• It is my impression that human beings almost universally desire something like “justice” or “fairness.” If everybody had the dust speck problem, it would hardly be perceived as a problem. If one person is being tortured, both the tortured person and others perceive unfairness, and society has a problem with this.

Ac­tu­ally, we all DO get dust motes in our eyes from time to time, and this is not a pub­lic policy is­sue.
In fact rel­a­tively small num­bers of peo­ple ARE be­ing tor­tured to­day, and this is a big prob­lem both for the vic­tims and for peo­ple who care about jus­tice.

• Mitchell, I ac­knowl­edge the defen­si­bil­ity of the po­si­tion that there are tiers of in­com­men­su­rable util­ities. But to me it seems that the dust speck is a very, very small amount of bad­ness, yet bad­ness nonethe­less. And that by the time it’s mul­ti­plied to ~3^^^3 life­times of blink­ing, the bad­ness should be­come in­com­pre­hen­si­bly huge just like 3^^^3 is an in­com­pre­hen­si­bly huge num­ber.

One rea­son I have prob­lems with as­sign­ing a hy­per­real in­finites­i­mal bad­ness to the speck, is that it (a) doesn’t seem like a good de­scrip­tion of psy­chol­ogy (b) leads to to­tal loss of that prefer­ence in smarter minds.

(B) If the value I as­sign to the mo­men­tary ir­ri­ta­tion of a dust speck is less than 1/​3^^^3 the value of 50 years’ tor­ture, then I will never even bother to blink away the dust speck be­cause I could spend the thought or the mus­cu­lar move­ment on my eye on some­thing with a bet­ter than 1/​3^^^3 chance of sav­ing some­one from tor­ture.

(A) Peo­ple of­ten also think that money, a mun­dane value, is in­com­men­su­rate with hu­man life, a sa­cred value, even though they very definitely don’t at­tach in­finites­i­mal value to money.

I think that what we’re deal­ing here is more like the ir­ra­tional­ity of try­ing to im­pose and ra­tio­nal­ize com­fortable moral ab­solutes in defi­ance of ex­pected util­ity, than any­one ac­tu­ally pos­sess­ing a con­sis­tent util­ity func­tion us­ing hy­per­real in­finites­i­mal num­bers.

The no­tion of sa­cred val­ues seems to lead to ir­ra­tional­ity in a lot of cases, some of it gross ir­ra­tional­ity like scope ne­glect over hu­man lives and “Can’t Say No” spend­ing.

• I’m not sure why sur­real/​hy­per­real num­bers re­sult in, es­sen­tially, monofo­cus.

Con­sider this scale on the sur­re­als:

• Omega^2: Utility of uni­ver­sal im­mor­tal­ity; dis-util­ity of an ex­is­ten­tial risk. Omega util­ity for po­ten­tially omega peo­ple.

• Omega: Utility of a hu­man life.

• 1: One tra­di­tional utilon.

• Ep­silon: Dust speck in your eye.

Let’s say you’re a perfectly rational human (*cough cough*). You naturally start on the Omega^2 scale, with a certain finite amount of resources. Clearly, an omega of human lives is worth more than your own, so you do not, repeat do not, promptly donate them all to MIRI.

At least, not un­til you first calcu­late the ap­prox­i­mate prob­a­bil­ity that your in­de­pen­dent ex­is­tence will make it more likely that some­one some­where will fi­nally defeat death. Even if you have not the in­tel­li­gence to do it your­self, or the so­cial skills to keep some­one else sta­ble while they at­tack it, there’s still the fact that you can give more to MIRI, over the long run, if you live on just enough to keep your­self psy­cholog­i­cally and phys­iolog­i­cally sound and then donate the rest to MIRI.

This is, es­sen­tially, the “san­ity” term. Most of the calcu­la­tion is done at this step, but be­cause your life, across your lifes­pan, has some chance of solv­ing death, you are not morally obli­gated to have your­self pro­cessed into Soylent Green.

This step in­ter­rupts for one of three rea­sons. One, you have reached a point where spend­ing fur­ther re­sources, ei­ther on your­self or some ex­is­ten­tial-risk or­ga­ni­za­tion, does not pre­dictably af­fect an ex­is­ten­tial risk. Two, all ex­is­ten­tial risks are dealt with, and death it­self has died. (Yay!) Three, part of en­sur­ing your own psy­cholog­i­cal sound­ness re­quires it—re­ally, this just rep­re­sents the fact that some­times, a dol­lar (ap­prox. one utilon) or a speck (ep­silon utilons) can re­sult in your death or sig­nifi­cant mis­ery, but nev­er­the­less such con­cerns should still be re­solved in or­der of de­creas­ing util­ity.

At this point, we break to the Omega step, which works much the same way, bal­anc­ing char­ity dona­tions against your own life and QoL. Si­tu­a­tions where spend­ing money can save lives—say, a hos­pi­tal or a char­ity—should be eval­u­ated at this step.

Then we break to the uni­tary step, which is es­sen­tially en­tirely QoL for your­self or oth­ers.

Hy­po­thet­i­cally, we might then break to the ep­silon step—in prac­tice, since even in a post-scarcity so­ciety you will never finish op­ti­miz­ing your uni­taries, this step is only eval­u­ated when it or some­thing in it is pro­moted by causal de­pen­dence to a higher step.

So, re­turn­ing to the origi­nal prob­lem: Bar­ring all other con­sid­er­a­tions, 3^^^3*ep­silon is still an ep­silon, while 50 years of tor­ture is prob­a­bly some­thing like 34 Omega. With two tiers of differ­ence, the re­sult is ob­vi­ous, and has been re­solved with in­tu­ition.

I’m go­ing to con­clude with some­thing Hermione says in MoR, that I think ap­plies here.

“But the thing that peo­ple for­get some­times, is that even though ap­pear­ances can be mis­lead­ing, they’re usu­ally not.”
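Collapsing the surreal tiers to per-tier coefficients (my own simplification, not the commenter’s formalism), the scheme again reduces to lexicographic comparison, where no amount of a lower tier can touch a higher one:

```python
# Coefficients per tier, highest first: (omega^2, omega, utilons, epsilon).
# Tuple comparison settles higher tiers before lower ones are examined.
def existential(n):
    return (n, 0, 0, 0)

def lives(n):
    return (0, n, 0, 0)

def specks(n):
    return (0, 0, 0, n)

torture_cost = lives(34)       # the comment's "something like 34 Omega"
speck_cost = specks(3 ** 27)   # stand-in for 3^^^3 epsilons

# Two tiers of difference: torture dominates regardless of speck count.
assert torture_cost > speck_cost
# And an existential risk dominates any number of individual lives.
assert existential(1) > lives(10 ** 9)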

• Okay, here’s the data: I choose SPECKS, and here is my back­ground and rea­sons.

I am a cell biol­o­gist. That is per­haps not rele­vant.

My rea­son­ing is that I do not think that there is much mean­ing in adding up in­di­vi­d­ual in­stances of dust specks. Those of you who choose TORTURE seem to think that there is a net di­su­til­ity that you ob­tain by mul­ti­ply­ing ep­silon by 3^^^3. This is ob­vi­ously greater than the di­su­til­ity of tor­tur­ing one per­son.
I re­ject the premise that there is a mean­ingful sense in which these dust specks can “add up”.

You can think in terms of biolog­i­cal in­puts—sim­plify­ing, you can imag­ine a sys­tem with two reg­isters. A dust speck in the eye raises reg­ister A by ep­silon. Register A also re­sets to zero if a minute goes by with­out any dust specks. Tor­ture im­me­di­ately sets reg­ister B to 10. I am morally obliged to in­ter­vene if reg­ister B ever goes above 1. In this scheme reg­ister A is a morally ir­rele­vant reg­ister. It trades in differ­ent units than reg­ister B. No mat­ter how many in­stances of A*ep­silon there are, it does not war­rant in­ter­ven­tion.
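A minimal simulation of that two-register scheme, with the event names, epsilon, and threshold being my own stand-ins rather than anything from the comment:

```python
def run(events, epsilon=0.01, threshold=1.0):
    """Simulate the two-register model over a sequence of events.

    Each event is "speck", "torture", or "quiet_minute".
    Register A (specks) accumulates epsilon per speck and resets to
    zero after a speck-free minute; register B (torture) jumps to 10.
    Only register B is morally relevant: intervention is required
    iff B ever exceeds the threshold.
    """
    a = b = 0.0
    must_intervene = False
    for e in events:
        if e == "speck":
            a += epsilon
        elif e == "torture":
            b = 10.0
        elif e == "quiet_minute":
            a = 0.0          # register A decays away harmlessly
        if b > threshold:
            must_intervene = True
    return must_intervene

# A thousand isolated speck/reset cycles never trigger intervention...
assert not run(["speck", "quiet_minute"] * 1000)
# ...but a single instance of torture does.
assert run(["torture"])
```

The design choice the comment is making is that A and B are in different units, so no accumulation in A ever converts into B.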

You are mak­ing a huge, unar­gued as­sump­tion if you treat both tor­ture and dust-specks in equiv­a­lent terms of “di­su­til­ity”. I ac­cept your ques­tion and ar­gue for “SPECKS” by re­ject­ing your premise of like units (which does make the ques­tion triv­ial). But I sym­pa­thize with peo­ple who re­ject your ques­tion out­right.

• Con­stant, my refer­ence to your quote wasn’t aimed at you or your opinions, but rather at the sort of view which de­clares that the silly calcu­la­tion is some kind of ac­cepted or co­her­ent moral the­ory. Sorry if it came off the other way.

Nick, good ques­tion. Who says that we have con­sis­tent and com­plete prefer­ence or­der­ings? Cer­tainly we don’t have them across peo­ple (con­sider so­cial choice the­ory). Even to say that we have them within in­di­vi­d­ual peo­ple is con­testable. There’s a re­ally in­ter­est­ing liter­a­ture in philos­o­phy, for ex­am­ple, on the in­com­men­su­ra­bil­ity of goods. (The best in­tro­duc­tion of which I’m aware con­sists in the es­says in Ruth Chang, ed. 1997. In­com­men­su­ra­bil­ity, In­com­pa­ra­bil­ity, and Prac­ti­cal Rea­son Cam­bridge: Har­vard Univer­sity Press.)

That be­ing said, it might be pos­si­ble to have com­plete and con­sis­tent prefer­ence or­der­ings with qual­i­ta­tive differ­ences be­tween kinds of pain, such that any amount of tor­ture is worse than any amount of dust-speck-in-eye. And there are even util­i­tar­ian the­o­ries that in­cor­po­rate that sort of differ­ence. (See chap­ter 2 of John Stu­art Mill’s Utili­tar­i­anism, where he ar­gues that in­tel­lec­tual plea­sures are qual­i­ta­tively su­pe­rior to more base kinds. Many in­deed in­ter­pret that chap­ter to sug­gest that any amount of an in­tel­lec­tual plea­sure out­weighs any amount of drink­ing, sex, choco­late, etc.) Which just goes to show that even util­i­tar­i­ans might not find the tor­ture choice “ob­vi­ous,” if they deny b) like Mill.

• Eliezer: “It’s wrong when repeated because it’s also wrong in the individual case. You just have to come to terms with scope sensitivity.”

But determining whether a decision is right or wrong in the individual case requires that you be able to place a value on each outcome. We determine this value in part by using our knowledge of how frequently the outcomes occur and how much time/effort/money it takes to prevent or assuage them. Thus knowing the frequency with which we can expect an event to occur is integral to assigning it a value in the first place. The reason it would be wrong in the individual case to tax everyone in the first world a penny to save one African child is that there are so many starving children that doing the same for each one would become very expensive. It would not be obviously wrong, however, if there were only one child in the world that needed rescuing. The value of a life would increase because we could afford to let it, if people didn’t die so frequently.

Peo­ple in a village might be will­ing to help pay the costs when some­one’s house burns down. If 20 houses in the village burned down, the peo­ple might still con­tribute, but it is un­likely they will con­tribute 20 times as much. If house-burn­ing be­came a ram­pant prob­lem, peo­ple might stop con­tribut­ing en­tirely, be­cause it would seem fu­tile for them to do so. Is this nec­es­sar­ily scope in­sen­si­tivity? Or is it rea­son­able to de­ter­mine val­ues based on fre­quen­cies we can re­al­is­ti­cally ex­pect?

• The di­ag­no­sis of scope in­sen­si­tivity pre­sup­poses that peo­ple are try­ing to perform a util­i­tar­ian calcu­la­tion and failing. But there is an or­di­nary sense in which a suffi­ciently small harm is no wrong. A harm must reach a cer­tain thresh­old be­fore the vic­tim is will­ing to bear the cost of seek­ing re­dress. Harms that fall be­low the thresh­old are shrugged off. And an un­en­forced law is no law. This holds even as the vic­tims mul­ti­ply. A class ac­tion law­suit is pos­si­ble, sum­ming the minus­cule harms, but our moral in­tu­itions are prob­a­bly not based on those.

• I am not con­vinced that this ques­tion can be con­verted into a per­sonal choice where you face the de­ci­sion of whether to take the speck or a 1/​3^^^3 chance of be­ing tor­tured. I would avoid the speck and take my chances with tor­ture, and I think that is in­deed an ob­vi­ous choice.

I think a more ap­po­site ap­pli­ca­tion of that trans­la­tion might be:
If I knew I was go­ing to live for 3^^^3+50*365 days, and I was faced with that choice ev­ery day, I would always choose the speck, be­cause I would never want to en­dure the in­evitable 50 years of tor­ture.

The differ­ence is that fram­ing the ques­tion as a one-off in­di­vi­d­ual choice ob­scures the fact that in the ex­am­ple proffered, the tor­ture is a cer­tainty.

• My algorithm goes like this:
There are two variables, X and Y.
Adding a single additional dust speck to a person’s eye over their entire lifetime increases X by 1 for every person this happens to.
A person being tortured for a few minutes increases Y by 1.

I would ob­ject to most situ­a­tions where Y is greater than 1. But I have no prefer­ences at all with re­gard to X.

See? Dust specks and tor­ture are not the same. I do not lump them to­gether as “di­su­til­ity”. To do so seems to me a pre­pos­ter­ous over­sim­plifi­ca­tion. In any case, it has to be ar­gued that they are the same. If you as­sume they’re the same, then you’re just as­sum­ing the tor­ture an­swer when you state the ques­tion—it’s not a prob­lem of eth­i­cal philos­o­phy but a prob­lem of ad­di­tion.

• Why is this a serious question? Given the physical unreality of the situation (the putative existence of 3^^^3 humans and the ability to actually create the option in the physical universe), why is this question taken seriously while something like “is it better to kill Santa Claus or the Easter Bunny?” is considered silly?

• Averaging utility works only when the law of large numbers starts to play a role. It’s a good general policy, as stuff subject to it happens all the time, often enough to give sensible results over a human or civilizational lifespan. So, if Eliezer’s experiment is a singular event and similar events don’t happen frequently enough, the answer is 3^^^3 specks. Otherwise, torture (since in that case similar frequent-enough choices would lead to a tempest of specks in anyone’s eye, which is about 3^^^3 times worse than 50 years of torture, for each and every one of them).

• I’m un­con­vinced that the num­ber is too large for us to think clearly. Though it takes some ma­chin­ery, hu­mans rea­son about in­finite quan­tities all the time and ar­rive at mean­ingful con­clu­sions.

My intuitions strongly favor the dust speck scenario. Even if we forget 3^^^3 and just say that an infinite number of people will experience the speck, I’d still favor it over the torture.

• I too see the dust specks as ob­vi­ous, but for the sim­pler rea­son that I re­ject util­i­tar­ian sorts of com­par­i­sons like that. Tor­ture is wicked, pe­riod. If one must go fur­ther, it seems like the suffer­ing from tor­ture is qual­i­ta­tively worse than the suffer­ing from any num­ber of dust specks.

• I too see the dust specks as ob­vi­ous, but for the sim­pler rea­son that I re­ject util­i­tar­ian sorts of com­par­i­sons like that. Tor­ture is wicked, pe­riod.

I think you have mi­s­un­der­stood the point of the thought ex­per­i­ment. Eliezer could have imag­ined that the in­tense and pro­longed suffer­ing ex­pe­rienced by the vic­tim was not in­ten­tion­ally caused, but was in­stead the re­sult of nat­u­ral causes. The “tor­ture is wicked” re­ply can­not be used to re­sist the de­ci­sion to bring about this sce­nario. (There may, of course, be other rea­sons for ob­ject­ing to that de­ci­sion.)

• I think I have to go with the dust specks. To­mor­row, all 3^^^3 of those peo­ple will have for­got­ten en­tirely about the speck of dust. It is an event nearly in­dis­t­in­guish­able from ther­mal noise. Peo­ple, all of them ev­ery­where, get dust specks in their eyes just go­ing about their daily lives with no ill effect.

The tor­ture ac­tu­ally hurts some­one. And in a way that’s rather non-re­cov­er­able. Re­cov­er­abil­ity plays a large part in my moral calcu­la­tions.

But there’s a limit to how many times I can make that trade. 3^^^3 peo­ple is a LOT of peo­ple, and it doesn’t take a sig­nifi­cant frac­tion of THAT at all be­fore I have to stop sav­ing tor­ture vic­tims, lest ev­ery­one ev­ery­where’s lives con­sist of noth­ing but a sand­blaster to the face.

• What you’re doing there is positing a “qualitative threshold” of sorts, where the anti-hedons from the dust specks cause absolutely zero disutility whatsoever. This can be an acceptable real-world evaluation within a loaded subjective context.

However, the problem states that the dust specks have non-zero disutility. This means that they do have some sort of predicted net negative impact somewhere. If that impact is merely to slow down the brain’s visual recognition of one word by even 0.03 seconds, in a manner that is directly causal and where the absence of the dust speck would have avoided this delay, then over 3^^^3 people that is still more man-hours of work lost than the sum of all lifetimes of all humans on Earth to this day. If that is not a tragic loss much more dire than one person being tortured, I don’t see what could be. And I’m obviously being generous there with that “0.03 seconds” estimate.

Theoretically, all this accumulated lost time could mean the difference between the extinction and the survival of the human race in a pan-galactic super-cataclysmic event, simply by throwing us off the particular Planck-level-exactly-timed course of events that would have allowed us to find a way to survive, just barely, by a few (total, relatively absolute) seconds too close for comfort.

That last is assuming the deciding agent has the superintelligent power to actually compute this. If it is calculating from unknown future causal utilities, and the expected utility of a dust speck is still negative and non-zero, then this is a simple abstraction of the above example and the rational choice is still simply the torture.

• If you ask me the slightly different question, where I choose between 50 years of torture applied to one man, or 3^^^3 specks of dust falling one each into 3^^^3 people’s eyes and also all humanity being destroyed, I will give a different answer. In particular, I will abstain, because my moral calculation would then favor the torture over the destruction of the human race, but I have a built-in failure mode where I refuse to torture someone even if I somehow think it is the right thing to do.

But that is not the ques­tion I was asked. We could also have the man tor­tured for fifty years and then the hu­man race gets wiped out BECAUSE the pan-galac­tic cat­a­clysm fa­vors civ­i­liza­tions who don’t make the choice to tor­ture peo­ple rather than face triv­ial in­con­ve­niences.

Con­sider this al­ter­nate pro­posal:

Hello Sir and/​or Madam:

I am try­ing to col­lect 3^^^3 sig­na­tures in or­der to pre­vent a man from be­ing tor­tured for 50 years. Would you be will­ing to ac­cept a sin­gle speck of dust into your eye to­wards this goal? Per­haps more? You may sign as many times as you are com­fortable with. I ea­gerly await your re­sponse.

Sincerely,

rkyeun

PS: Do you know any masochists who might en­joy 50 years of tor­ture?

BCC: 3^^^3-1 other peo­ple.

• We did spec­ify no long-term con­se­quences—oth­er­wise the ar­gu­ment in­stantly passes, just be­cause at least 3^^7625597484986 peo­ple would cer­tainly die in car ac­ci­dents due to blink­ing. (3^^^3 is 3 to the power of that.)

• If you still use “^” to re­fer to Knuth’s up-ar­row no­ta­tion, then 3^^^3 != 3^(3^^26).

3^^^3 = 3^^(3^^3) = 3^^(3^27) != 3^(3^^27)

• Fixed.
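The notation in this exchange can be checked mechanically. Here is a minimal sketch of Knuth’s up-arrow recursion in Python (the function name `up` is my own; anything beyond tiny arguments is astronomically large, so only small cases are evaluated):

```python
def up(a, n, b):
    """Knuth's up-arrow a ^^...^ b with n arrows: n = 1 is plain
    exponentiation, and a ^(n) b = a ^(n-1) (a ^(n) (b-1)), with
    a ^(n) 0 = 1 as the base case."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^^3 = 3^27 = 7625597484987
print(up(2, 3, 3))  # 2^^^3 = 2^^4 = 65536 (3^^^3 itself is hopeless to evaluate)
```

The same recursion shows why 3^^^3 unwinds to 3^^(3^27) rather than 3^(3^^27): it is the arrow count that drops by one on the outside, not the inner tower.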

• I ad­mit the ar­gu­ment of long-term “side effects” like ex­tinc­tion of the hu­man race was gra­tu­itous on my part. I’m just in­tu­itively con­vinced that such pos­si­bil­ities would count to­wards the ex­pected di­su­til­ity of the dust motes in a su­per­in­tel­li­gent perfect ra­tio­nal­ist’s calcu­la­tions. They might even be the only rea­son there is any ex­pected di­su­til­ity at all, for all I know.

Other­wise, my puny tall-mon­key brain wiring has a hard time imag­in­ing how a micro-frac­tional anti-he­don would ac­tu­ally count for any­thing other than ab­solute zero ex­pected util­ity in the calcu­la­tions of any agent with im­perfect knowl­edge.

• Sure. Ad­mit­tedly, when there are 3^^^3 hu­mans around, tor­tur­ing me for fifty years is also such a neg­ligible amount of suffer­ing rel­a­tive to the cur­rent lived hu­man ex­pe­rience that it, too, has an ex­pected cost that rounds to zero in the calcu­la­tions of any agent with im­perfect knowl­edge, un­less they have some par­tic­u­lar rea­son to care about me, which in that world is van­ish­ingly un­likely.

• Heh.

When put like that, my original post and arguments sure seem not to have been thought through as much as I thought they were.

Now, rather than think­ing the solu­tion ob­vi­ous, I’m lean­ing more to­wards the idea that this even­tu­ally re­duces to the prob­lem of build­ing a good util­ity func­tion, one that also as­signs the right util­ity value to the ex­pected util­ity calcu­lated by other be­ings based on un­known (or known?) other util­ity func­tions that may or may not ir­ra­tionally as­sign dis­pro­por­tionate di­su­til­ity to re­spec­tive he­don-val­ues.

Other­wise, it’s rather ob­vi­ous that a perfect su­per­in­tel­li­gence might find a way to make the tor­tured vic­tim en­joy the tor­ture and be­come en­hanced by it, while also re­main­ing a pro­duc­tive mem­ber of so­ciety dur­ing all fifty years of tor­ture (or some other com­pletely ideal solu­tion we can’t even re­motely imag­ine) - though this might be in di­rect con­tra­dic­tion with the im­plicit premise of tor­ture be­ing in­her­ently bad, de­pend­ing on in­ter­pre­ta­tion/​defi­ni­tion/​etc.

EDIT: Which, upon read­ing up a bit more of the old com­ments on the is­sue, seems fairly close to the gen­eral con­sen­sus back in late 2007.

• For­give me if this has been cov­ered be­fore. The in­ter­net here is flak­ing out and it makes it hard to search for an­swers.

What is the cor­rect an­swer to the fol­low­ing sce­nario: Is it prefer­able to have one per­son be tor­tured if it gives 3^^^3 peo­ple a minis­cule amount of plea­sure?

The source of this ques­tion was me pon­der­ing the claim, “Pain is tem­po­rary; a good story lasts for­ever.”

• What is the cor­rect an­swer to the fol­low­ing sce­nario: Is it prefer­able to have one per­son be tor­tured if it gives 3^^^3 peo­ple a minis­cule amount of plea­sure?

Yes.

• Is it prefer­able to have one per­son be tor­tured if it gives 3^^^3 peo­ple a minis­cule amount of plea­sure?

Great ques­tion, and if it has been cov­ered be­fore on this site, I haven’t seen it. Philoso­phers have dis­cussed whether or not “sadis­tic” plea­sure from oth­ers’ suffer­ing should be in­cluded in util­i­tar­ian calcu­la­tions, and in fact this is one of the clas­sic ar­gu­ments against (some types of) util­i­tar­i­anism, along with the util­ity mon­ster and the or­gan lot­tery.

One pos­si­ble an­swer is that util­i­tar­i­ans should max­i­mize other ter­mi­nal val­ues be­sides just plea­sure, and that sadis­tic plea­sures like this go against the to­tal of our ter­mi­nal val­ues, so util­i­tar­i­ans shouldn’t al­low these to can­cel out tor­ture.

• The ob­vi­ous an­swer is that tor­ture is prefer­able.

If you had to pick, for yourself, between a 1/3^^^3 chance of 50 years of torture and the dust speck, you would pick the chance of torture.

We ac­tu­ally do this ev­ery day: we eat foods that can poi­son us rather than be hun­gry, we cross the road rather than stay at home, etc.

Imagine there is a safety improvement to your car that will cost 0.0001 cents but will save you from an event that will happen once in 1000 universe lifetimes. Would you pay for it?

• I don’t think it’s very con­tro­ver­sial that TORTURE is the right choice if you’re max­i­miz­ing over­all net util­ity (or in your ex­am­ple, max­i­miz­ing ex­pected util­ity). But some of us would still choose SPECKS.

• Jeffrey, on one of the other threads, I vol­un­teered to be the one tor­tured to save the oth­ers from the specks.

As for “Real de­ci­sions have real effects on real peo­ple,” that’s ab­solutely cor­rect, and that’s the rea­son to pre­fer the tor­ture. The util­ity func­tion im­plied by prefer­ring the specks would also pre­fer low­er­ing all the speed limits in the world in or­der to save lives, and ul­ti­mately would ban the use of cars. It would pro­mote rais­ing taxes by a small amount in or­der to re­duce the amount of vi­o­lent crime (in­clud­ing crimes in­volv­ing tor­ture of real peo­ple), and ul­ti­mately would pro­mote rais­ing taxes on ev­ery­one un­til ev­ery­one could barely sur­vive on what re­mains.

Yes, real de­ci­sions have real effects on real peo­ple. That’s why it’s nec­es­sary to con­sider the to­tal effect, not merely the effect on each per­son con­sid­ered as an iso­lated in­di­vi­d­ual, as those who fa­vor the specks are do­ing.

• So, if ad­di­tive util­ity func­tions are naive, does that mean I can swap around your prefer­ences at ran­dom like jerk­ing around a pup­pet on a string, just by hav­ing a sealed box in the next galaxy over where I keep a googol in­di­vi­d­u­als who are already be­ing tor­tured for fifty years, or already get­ting dust specks in their eyes, or already be­ing poked with a stick, etc., which your ac­tions can­not pos­si­bly af­fect one way or the other?

It seems I can ar­bi­trar­ily vary your “non-ad­di­tive” util­ities, and hence your pri­ori­ties, sim­ply by mess­ing with the num­bers of ex­ist­ing peo­ple hav­ing var­i­ous ex­pe­riences in a sealed box in a galaxy a googol light years away.

This seems remarkably reminiscent of E. T. Jaynes’s experience with the “sophisticated” philosophers who sniffed that of course naive Bayesian probability theory had to be abandoned in the face of paradox #239; which paradox Jaynes would proceed to slice into confetti using “naive” Bayesian theory, but this time with rigorous math instead of the various mistakes the “sophisticated” philosophers had made.

There are rea­sons for prefer­ring cer­tain kinds of sim­plic­ity.

• No Mike, your in­tu­ition for re­ally large num­bers is non-baf­fling, prob­a­bly typ­i­cal, but clearly wrong, as judged by an­other non-Utili­tar­ian con­se­quen­tial­ist (this item is clear even to ego­ists).

Personally I’d take the torture over the dust specks even if the number were just an ordinary incomprehensible number, like say the number of biological humans who could live in artificial environments that could be built in one galaxy (about 10^46, given a 100-year life span and a 300 W energy budget for each of them; 300 W of terminal entropy dumped into a 3 K background from 300 K is a large budget). It’s totally clear to me that a second of torture isn’t a billion billion billion times worse than getting a dust speck in my eye, and that there are only about 1.5 billion seconds in a 50-year period. That leaves about a 10^10 : 1 preference for the torture.
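A quick sanity check of the arithmetic in this comment (the population figure and the speck-to-torture exchange rate are the commenter’s assumptions, not established facts):

```python
people = 10 ** 46                        # assumed one-galaxy population
torture_seconds = 50 * 365.25 * 86400    # ~1.58e9, "about 1.5 billion seconds"
specks_per_torture_second = 10 ** 27     # "a billion billion billion"

# Total specks judged equivalent to the entire 50-year torture on that rate:
torture_in_specks = torture_seconds * specks_per_torture_second
print(people / torture_in_specks)        # ~6e9, i.e. roughly the claimed 10^10 : 1 margin
```

So on these assumptions the galaxy’s worth of specks does outweigh the torture by about ten billion to one, as stated.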

The only consideration that dulls my certainty here is that I’m not convinced my utility function can even encompass these sorts of ordinary incomprehensible numbers, but it seems to me that there is at least a one-in-a-billion chance that it can.

• I’d take it.

• I have ar­gued in pre­vi­ous com­ments that the util­ity of a per­son should be dis­counted by his or her mea­sure, which may be based on al­gorith­mic com­plex­ity. If this “tor­ture vs specks” dilemma is to have the same force un­der this as­sump­tion, we’d have to re­word it a bit:

Would you pre­fer that the mea­sure of peo­ple hor­ribly tor­tured for fifty years in­creases by x/​3^^^3, or that the mea­sure of peo­ple who get dust specks in their eyes in­creases by x?

I ar­gue that no one, not even a su­per­in­tel­li­gence, can ac­tu­ally face such a choice. Be­cause x is at most 1, x/​3^^^3 is at most 1/​3^^^3. But how can you in­crease the mea­sure of some­thing by more than 0 but no more than 1/​3^^^3? You might, per­haps, gen­er­ate a ran­dom num­ber be­tween 0 and 3^^^3 and do some­thing only if that ran­dom num­ber is 0. But al­gorith­mic in­for­ma­tion the­ory says that for any pro­gram (even a su­per­in­tel­li­gence), there are pseu­do­ran­dom se­quences that it can­not dis­t­in­guish from truly ran­dom se­quences, and the prior prob­a­bil­ity that your ran­dom num­ber gen­er­a­tor is gen­er­at­ing such a pseu­do­ran­dom se­quence is much higher than 1/​3^^^3. There­fore the prob­a­bil­ity of that “ran­dom” num­ber be­ing 0 (or be­ing any other num­ber that you can think of) is ac­tu­ally much larger than 1/​3^^^3.

Therefore, if someone tells you “the measure of … increases by x/3^^^3”, in your mind you’ve got to be thinking “… increases by y” for some y much larger than 1/3^^^3. I think my theory explains both those who answer SPECKS and those who say no answer is possible.

• “A brilli­ant idea, Jef! I vol­un­teer you to test it out. Start blow­ing dust around your house to­day.”

Although only one per­son, I’ve already be­gun, and have en­tered in my in­ven­tor’s note­book some ap­par­ently novel think­ing on not only dust, but mites, dog hair, smart eye­drops, and nanobot swarms!

• the ar­gu­ment by googol­plex gra­da­tions seems to me like a much stronger ver­sion of the ar­gu­ments I would have put forth.

You just warmed my heart for the day :-)

But why not use a googol in­stead of a googolplex

Shock and awe tactics. I wanted a featureless big number of featureless big numbers, to avoid wiggle-outs, and to scream “your intuition ain’t from these parts”. In my head, FBNs always carry more weight than regular ones. Now you mention it, their gravity could get lightened by incomprehensibility, but we were already counting to 3^^^3.

Googol is better. Fewer readers will have to google it.

• Eliezer, both you and Robin are as­sum­ing the ad­di­tivity of util­ity. This is not jus­tifi­able, be­cause it is false for any com­pu­ta­tion­ally fea­si­ble ra­tio­nal agent.

If you have a bounded amount of com­pu­ta­tion to make a de­ci­sion, we can see that the num­ber of dis­tinc­tions a util­ity func­tion can make is in turn bounded. Con­cretely, if you have N bits of mem­ory, a util­ity func­tion us­ing that much mem­ory can dis­t­in­guish at most 2^N states. Ob­vi­ously, this is not com­pat­i­ble with ad­di­tivity of di­su­til­ity, be­cause by pick­ing enough peo­ple you can iden­tify more dis­tinct states than the 2^N dis­tinc­tions your com­pu­ta­tional pro­cess can make.

Now, the reason for adopting additivity comes from the intuition that 1) hurting two people is at least as bad as hurting one, and 2) that people are morally equal, so that it doesn’t matter which people are hurt. Note that these intuitions mathematically only require that harm should be monotone in the number of people with dust specks in their eyes. Furthermore, this requirement is compatible with the finite computation requirements—it implies that there is a finite number of specks beyond which disutility does not increase.

If we want to gen­er­al­ize away from the spe­cific num­ber N of bits we have available, we can take an or­der-the­o­retic view­point, and sim­ply re­quire that all in­creas­ing chains of util­ities have limits. (As an aside, this idea lies at the heart of the de­no­ta­tional se­man­tics of pro­gram­ming lan­guages.) This forms a nat­u­ral re­stric­tion on the do­main of util­ity func­tions, cor­re­spond­ing to the idea that util­ity func­tions are bounded.
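As a toy illustration of that order-theoretic point (the 16-bit register width is an arbitrary choice of mine, nothing from the comment): a disutility counter stored in N bits can make at most 2^N distinctions, and saturating addition keeps it monotone while respecting that bound.

```python
N = 16                    # bits available to store disutility
MAX = 2 ** N - 1          # largest representable value: only 2^N distinct states

def add_speck(disutility: int) -> int:
    """Monotone but saturating: another speck never reads as *better*,
    yet past 2^N - 1 specks it cannot register as *worse* either."""
    return min(disutility + 1, MAX)

d = 0
for _ in range(100_000):  # far more specks than the register can distinguish
    d = add_speck(d)
print(d)                  # 65535: the increasing chain has reached its limit
```

Strict additivity would require 100,000 distinct disutility values here; the bounded register instead converges, which is exactly the “increasing chains have limits” restriction.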

• For the moment I am disturbed by the fact that Eliezer and I seem to be in a minority here, but comforted a bit by the fact that we seem to know decision theory better than most. But I’m open to new data on the balance of opinion and the balance of relevant expertise.

It seems like selection bias might make this data much less useful. (It applied in my case, at least.) The people who chose TORTURE were likely among those with the most familiarity with Eliezer’s writings, and so were able to predict that he would agree with them, and so felt less inclined to respond. Also, voicing their opinion would mean publicly taking an unpopular position, which people instinctively shy away from.

• Since Robin is in­ter­ested in data… I chose SPECKS, and was shocked by the peo­ple who chose TORTURE on grounds of ag­gre­gated util­ity. I had not con­sid­ered the pos­si­bil­ity that a speck in the eye might cause a car crash (etc) for some of those 3^^^3 peo­ple, and it is the only rea­son I see for re­vis­ing my origi­nal choice. I have no ac­cred­ited ex­per­tise in any­thing rele­vant, but I know what de­ci­sion the­ory is.

I see a wide­spread as­sump­tion that ev­ery­thing has a finite util­ity, and so no mat­ter how much worse X is than Y, there must be a situ­a­tion in which it is bet­ter to have one per­son ex­pe­rienc­ing X, rather than a large num­ber of peo­ple ex­pe­rienc­ing Y. And it looks to me as if this as­sump­tion de­rives from noth­ing more than a par­tic­u­lar for­mal­ism. In fact, it is ex­tremely easy to have a util­ity func­tion in which X un­con­di­tion­ally trumps Y, while still be­ing quan­ti­ta­tively com­men­su­rable with some other op­tion X’. You could do it with delta func­tions, for ex­am­ple. You would use or­di­nary scalars to rep­re­sent the least im­por­tant things to have prefer­ences about, scalar mul­ti­ples of a delta func­tion to rep­re­sent the util­ities of things which are un­con­di­tion­ally more im­por­tant than those, scalar mul­ti­ples of a delta func­tion squared to rep­re­sent things that are even more im­por­tant, and so on.

The qual­i­ta­tive dis­tinc­tion I would ap­peal to here could be dubbed pain ver­sus in­con­ve­nience. A speck of dust in your eye is not pain. Tor­ture, es­pe­cially fifty years of it, is.

• Well, as long as we’ve gone to all the trouble to collect 85 comments on this topic, this seems like a great chance for a disagreement case study. It would be interesting to collect stats on who takes what side, and to relate that to their various kinds of relevant expertise. For the moment I am disturbed by the fact that Eliezer and I seem to be in a minority here, but comforted a bit by the fact that we seem to know decision theory better than most. But I’m open to new data on the balance of opinion and the balance of relevant expertise.

• By “pay a penny to avoid the dust specks” I meant “avoid all dust specks”, not just one dust speck. Ob­vi­ously for one speck I’d rather have the penny.

• So if some­one would pay a penny, they should pick tor­ture if it were 3^^^^3 peo­ple get­ting dust specks, which makes it sus­pect that they un­der­stood 3^^^3 in the first place.

• Would you con­demn one per­son to be hor­ribly tor­tured for fifty years with­out hope or rest, to save ev­ery qualia-ex­pe­rienc­ing be­ing who will ever ex­ist one blink?

Is the ques­tion sig­nifi­cantly changed by this rephras­ing? It makes SPECKS the de­fault choice, and it changes 3^^^3 to “all.” Are we bet­ter able to pro­cess “all” than 3^^^3, or can we re­ally pro­cess “all” at all? Does it change your an­swer if we switch the de­fault?

Would you force ev­ery qualia-ex­pe­rienc­ing be­ing who will ever ex­ist to blink one ad­di­tional time to save one per­son from be­ing hor­ribly tor­tured for fifty years with­out hope or rest?

• What hap­pens if there aren’t 3^^^3 in­stanced peo­ple to get dust specks? Do those specks carry over such that per­son #1 gets a 2nd speck and so on? If so, you would elect to have the per­son tor­tured for 50 years for surely the al­ter­na­tive is to fill our uni­verse with dust and an­nihilate all cul­tures and life.

• Would you pre­fer that one per­son be hor­ribly tor­tured for fifty years with­out hope or rest, or that 3^^^3 peo­ple get dust specks in their eyes?

The number of milliseconds in 50 years is about 1.6×10^12, so its square is about 2.5×10^24.

Would you rather have one person tortured for a millisecond (then no ill effects), or 3^^^3/10^24 people get a dust speck per second for 50 centuries?

OK, so the util­ity/​effect doesn’t scale when you change the times. But even if each 1% added dust/​tor­ture time made things ten times worse, when you re­duce the dust-speck­led pop­u­la­tion to re­flect that it’s still countless uni­verses worth of peo­ple.
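Checking the time arithmetic in this rescaling (assuming 365.25-day years):

```python
ms_in_50_years = 50 * 365.25 * 86400 * 1000  # ~1.58e12 milliseconds
print(ms_in_50_years ** 2)                   # ~2.5e24

# Per-person speck count in the rescaled scenario: one per second for 50 centuries
seconds_in_50_centuries = 100 * 50 * 365.25 * 86400
print(seconds_in_50_centuries)               # ~1.6e11 specks per person
```

Either way, dividing 3^^^3 by a factor on the order of 10^24 leaves a population that is still unimaginably vast, which is the comment’s point.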

• I believe that, ideally speaking, the best choice is the torture, but pragmatically, I think the dust speck answer can make more sense. Of course it is more intuitive morally, but I would go as far as saying that the utility can be higher for the dust specks situation (and thus our intuition is right). How? The problem is in this sentence: “If neither event is going to happen to you personally.” The truth is that in the real world, we can’t rely on this statement. Even if it is promised to us or made into a law, this type of statement often won’t hold up very long. Precedents have to be taken into account when we make a decision based on utility. If we let someone be tortured now, we are building a precedent, a tradition of letting people be tortured. This has a very low utility for people living in the affected society. This is well summarized in the saying “What goes around comes around.”

If you take the strict idealistic situation described, the torture is the best choice. But if you instead deem the situation completely unrealistic and pick a similar one, by simply not giving 100% reliability to the sentence “If neither event is going to happen to you personally,” the best choice can become the dust specks, depending on how much you believe the risk that a tradition of torture will be established. (And IMO traditions of torture and violence are the kind of thing that spreads easily, as they stimulate resentment and hatred in the groups that are more affected.) The torture situation has much risk of getting worse, but not the dust speck situation.

The sce­nario might have been differ­ent if tor­ture was re­placed by a kind of suffer­ing that is not in­duced by hu­mans. Say… an in­cred­ibly painful and long (but not con­ta­gious) ill­ness.

Is it bet­ter to have the dust specks ev­ery­where all the time or to have the ex­is­tence of this ill­ness once in his­tory?

• Hmm, tricky one.

Do I get to pick the per­son who has to be tor­tured?

• Per­son­ally, I choose C: tor­ture 3^^^3 peo­ple for 3^^^3 years. Why? Be­cause I can.

Ahem. My moral­ity is based on max­i­miz­ing av­er­age welfare, while also avoid­ing ex­treme in­di­vi­d­ual suffer­ing, rather than cu­mu­la­tive welfare.

So tor­tur­ing one man for fifty years is not prefer­able to an­noy­ing any num­ber of peo­ple.

This is differ­ent when the many are also suffer­ing ex­tremely, though—then it may be worth­while to tor­ture one even more to save the rest.

• Would you pre­fer that one per­son be hor­ribly tor­tured for fifty years with­out hope or rest, or that 3^^^3 peo­ple get dust specks in their eyes?

I would prefer that 3^^^3 people get dust specks in their eyes, because that means that we either figured out how to escape the death of our universe, or expanded past our observable universe. [/cheating]

• s/​cheat­ing/​EDT/​ :)

• I definitely think it is obvious what Eliezer is going for: 3^^^3 people getting dust specks in their eyes being the favorable outcome. I understand his reasoning, but I’m not sure I agree with the simple Benthamite way of calculating utility. Popular among modern philosophers is preference utilitarianism, where the preferences of the people involved are what constitute utility. Now consider that each of those 3^^^3 people has a preference that people not be tortured. Assuming that the negative utility each individual computes for someone being tortured is larger in value than the negative utility of a speck of dust in their eyes, then even discounting the person being tortured (which of course you might as well, given the disparity in magnitude, which is more or less Eliezer’s point) you would have higher utility with the flecks of dust.

There are in fact numerous other ways to calculate the utility such that 3^^^3 people with dust flecks in their eyes is preferable to one person undergoing fifty years of torture, while still preserving the essential consequentialist nature of the argument. John Stuart Mill might argue there is a qualitative difference between torture and dust flecks in your eyes that keeps you from adding them in this way, while an existentialist might argue that pain and pleasure aren’t what we should be computing with the utility function but something closer to “human flourishing” or “eudaimonia,” and that in this calculation any number of dust flecks has zero utility while torture has a large negative utility. It all depends on how you define your utility function.

• In­ci­den­tally, I think that if you pick “dust specks,” you’re as­sert­ing that you would walk away from Ome­las; if you pick tor­ture, you’re as­sert­ing that you wouldn’t.

• The kind of per­son who chooses an in­di­vi­d­ual suffer­ing tor­ture in or­der to spare a large enough num­ber of other peo­ple lesser dis­com­fort en­dorses Ome­las. The kind of in­di­vi­d­ual who doesn’t not only walks away from Ome­las, but wants it not to ex­ist at all.

• This is ex­actly what both­ered me about the story, ac­tu­ally. You can choose to help the child and pos­si­bly doom Ome­las, or you can choose not to, for what­ever rea­son. But walk­ing away doesn’t solve the prob­lem!

• True. On re­flec­tion, it’s patently ob­vi­ous that the Less Wrong way to deal with Ome­las is not to ac­cept that the child’s suffer­ing is nec­es­sary to the city’s welfare, and ded­i­cate one­self to find­ing the third al­ter­na­tive. “Some of them un­der­stand why,” so it’s ob­vi­ously pos­si­ble to know what the con­nec­tion is be­tween the child and the city; know­ing that, one can seek some other way of pro­vid­ing what­ever fac­tor the tor­mented child pro­vides. That does mean al­low­ing the suffer­ing to go on un­til you find the solu­tion, though—if you free the child and ruin Ome­las, it’s likely too late at that point to achieve the goal of sav­ing both.

• Well, it de­pends on the na­ture of the prob­lem I’ve iden­ti­fied. If I en­dorse Ome­las, but don’t wish to par­take of it my­self, walk­ing away solves that prob­lem. (I en­dorse lots of re­la­tion­ships I don’t want to par­ti­ci­pate in.)

• That’s not a moral ob­jec­tion, that’s a per­sonal prefer­ence.

• Yes, that’s true. It’s hard to have a moral ob­jec­tion to some­thing I en­dorse.

• It cer­tainly doesn’t. How­ever, it shows more moral per­cep­tive­ness than most peo­ple have.

• The other day, I got some dirt in my eye, and I thought “That self­ish bas­tard, wouldn’t go and get tor­tured and now we all have to put up with this s#@\$“.

• This “moral dilemma” only has force if you ac­cept strict Ben­tham-style util­i­tar­i­anism, which treats all benefits and harms as vec­tors on a one-di­men­sional line, and cares about noth­ing ex­cept the net to­tal of benefits and harms. That was the state of the art of moral philos­o­phy in the year 1800, but it’s 2012 now.

There are pub­lished moral philoso­phies which han­dle the speck/​tor­ture sce­nario with­out un­due prob­lems. For ex­am­ple if you ac­cepted Rawls-style, risk-averse choice from a po­si­tion where you are un­aware whether you will be one of the speck-vic­tims or the tor­ture vic­tim, you would im­me­di­ately choose the specks. Choos­ing the specks max­imises the welfare of the least well off (they are sub­ject to a speck, not tor­ture) and, if you don’t know which role you will play, elimi­nates the risk you might be the tor­ture vic­tim.

(Ben­tham-style util­ity calcu­la­tions are com­pletely risk-neu­tral and care only about ex­pected re­turn on in­vest­ment. How­ever noth­ing about the uni­verse I’m aware of re­quires you to be this way, as op­posed to be­ing risk-averse).

Or for that mat­ter if you held a mod­ified ver­sion of util­i­tar­i­anism that sub­scribed to some no­tion of “jus­tice” or “what peo­ple de­serve”, and cared about how util­ity was dis­tributed be­tween per­sons in­stead of be­ing solely con­cerned with the strict math­e­mat­i­cal sum of all util­ity and di­su­til­ity, you could just say that you don’t care how many dust specks you pile up, the de­gree of un­fair­ness in a dis­tri­bu­tion where 3^^^3 peo­ple get out of a dust speck and one per­son gets tor­tured makes the tor­ture sce­nario a less prefer­able dis­tri­bu­tion.

I know Eliezer’s on record as ad­vis­ing peo­ple not to read philos­o­phy, but I think this is a case where that ad­vice is mis­guided.

• Rawls’s Wager: the least well-off per­son lives in a differ­ent part of the mul­ti­verse than we do, so we should spend all our re­sources re­search­ing trans-mul­ti­verse travel in a hope­less at­tempt to res­cue that per­son. No­body else mat­ters any­way.

• If this is a prob­lem for Rawls, then Ben­tham has ex­actly the same prob­lem given that you can hy­poth­e­sise the ex­is­tence of a gizmo that cre­ates 3^^^3 units of pos­i­tive util­ity which is hid­den in a differ­ent part of the mul­ti­verse. Or for that mat­ter a gizmo which will in­flict 3^^^3 dust specks on the eyes of the mul­ti­verse if we don’t find it and stop it. Tell me that you think that’s an un­likely hy­poth­e­sis and I’ll just raise the rele­vant util­ity or di­su­til­ity to the power of 3^^^3 again as of­ten as it takes to over­come the de­gree of im­prob­a­bil­ity you place on the hy­poth­e­sis.

How­ever I think it takes a mischievous read­ing of Rawls to make this a prob­lem. Given that the risk of the trans-mul­ti­verse travel pro­ject be­ing hope­less (as you stipu­late) is sub­stan­tial and these hy­po­thet­i­cal choosers are meant to be risk-averse, not al­tru­is­tic, I think you could con­sis­tently ar­gue that the gen­uinely risk-averse choice is not to pur­sue the pro­ject since they don’t know this worse-off per­son ex­ists nor that they could do any­thing about it if that per­son did ex­ist.

That said, di­achronous (cross-time) moral obli­ga­tions are a very deep philo­soph­i­cal prob­lem. Given that the num­ber of po­ten­tial fu­ture peo­ple is un­bound­edly large, and those peo­ple are at least po­ten­tially very badly off, if you try to use moral philoso­phies de­vel­oped to han­dle cur­rent-time prob­lems and ap­ply them to far-fu­ture di­achronous prob­lems it’s very hard to avoid the con­clu­sion that we should ded­i­cate 100% of the world’s sur­plus re­sources and all our free time to do­ing all sorts of strange and po­ten­tially con­tra­dic­tory things to benefit far-fu­ture peo­ple or pro­tect them from pos­si­ble harms.

This isn’t a prob­lem that Ben­tham’s he­do­nis­tic util­i­tar­i­anism, nor Eliezer’s gloss on it, han­dles any more satis­fac­to­rily than any other the­ory as far as I can tell.

• My util­ity func­tion has non-zero terms for prefer­ences of other peo­ple. If I asked each one of the 3^^^3 peo­ple whether they would pre­fer a dust speck if it would save some­one a hor­rible fifty-year tor­ture, they (my simu­la­tion of them) would say YES in 20*3^^^3-feet let­ters.

• If I asked each of a million people if they would give up a dollar’s worth of value if it would give an arbitrarily selected person ten thousand dollars’ worth, and they each said yes, would it follow that destroying a million dollars’ worth of value in exchange for ten thousand dollars’ worth was a good idea?

If, ad­di­tion­ally, my util­ity func­tion had non-zero terms for the prefer­ences of other peo­ple, would it fol­low then?

• It wouldn’t follow that it is a good idea, or an efficient one. But it would follow that it is the preferred idea, as calculated by my utility function that has non-zero terms for the preferences of other people.

For­tu­nately, my simu­la­tion of other peo­ple doesn’t sud­denly wish to help an ar­bi­trary per­son by donat­ing a dol­lar with 99% trans­ac­tion cost.

• Hm. As with Maelin’s com­ment above, I seem to agree with ev­ery part of this com­ment, but I don’t un­der­stand where it’s go­ing. Per­haps I missed your origi­nal point al­to­gether.

• My point was that the “SPECKS!!” an­swer to the origi­nal prob­lem, which is in­tu­itively ob­vi­ous to (I think) most peo­ple here, is not nec­es­sar­ily wrong. It can di­rectly fol­low from ex­pected util­ity max­i­miza­tion, if the util­ity func­tion val­ues the choice of peo­ple, even if the choice is “eco­nom­i­cally” sub­op­ti­mal.

• A substantial part of talking about utility functions is to assert that we are trying to maximize something about utility (total, average, or whatnot). It seems very strange to say that we can maximize utility by being inefficient in our conversion of other resources into utility. If your goal is to avoid certain “efficient” conversions for other reasons, then it doesn’t make a lot of sense to say that you are really trying to implement a utility function.

In other words, Walzer’s Spheres of Jus­tice con­cept, which states that some trade-offs are morally im­per­mis­si­ble, is not re­ally im­ple­mentable in a util­ity func­tion. To the ex­tent that he (or I) might be mod­eled by a util­ity func­tion, there are in­evitably go­ing to be er­rors in what the func­tion pre­dicts I would want or very strange dis­con­ti­nu­ities in the func­tion.

• But I am try­ing to max­i­mize the to­tal util­ity, just a differ­ent one.

Ok, let me put it this way. I will drop the terms for other people’s preferences from my utility function. It is now entirely self-centered. But it still values the good feeling I get if I’m allowed to participate in saving someone from fifty years of torture. The value of this feeling is much more than the minuscule negative utility of a dust speck. Now, assume some reasonable percent of the 3^^^3 people are like me in this respect. Maximizing the total utility for everybody results in: SPECKS!!

Now an ob­jec­tion can be stated that by the con­di­tions of the prob­lem, I can­not change the util­ities of the 3^^^3 peo­ple. They are given and are equal to a minis­cule nega­tive value cor­re­spond­ing to the small speck of dust. Evil forces give me the sadis­tic choice and don’t al­low me to share the good news with ev­ery­one. Ok. But I can still imag­ine what the peo­ple would have preferred if given a choice. So I add a term for their prefer­ence to my util­ity func­tion. I’m be­hav­ing like a rep­re­sen­ta­tive of peo­ple in a gov­ern­ment. Or like a Friendly AI try­ing to im­ple­ment their CEV.

In other words, Walzer’s Spheres of Jus­tice con­cept, which states that some trade-offs are morally im­per­mis­si­ble, is not re­ally im­ple­mentable in a util­ity func­tion.

My ar­gu­ments have noth­ing to do with Walzer’s Spheres of Jus­tice con­cept, AFAICT.

• Now, as­sume some rea­son­able per­cent of the 3^^^3 peo­ple are like me in this re­spect. Max­i­miz­ing the to­tal util­ity for ev­ery­body re­sults in: SPECKS!!

The point of pick­ing a num­ber the size of 3^^^3 is that it is so large that this state­ment is false. Even if 99% are like you, I can keep adding ^ and falsify the state­ment. If util­ity is ad­di­tive at all, tor­ture is the bet­ter choice.

My refer­ence to Walzer was sim­ply to note that many in­ter­est­ing moral the­o­ries ex­ist that do not ac­cept that util­ity is ad­di­tive. I don’t ac­cept that util­ity is ad­di­tive.

• Now, as­sume some rea­son­able per­cent of the 3^^^3 peo­ple are like me in this re­spect. Max­i­miz­ing the to­tal util­ity for ev­ery­body re­sults in: SPECKS!!

The point of pick­ing a num­ber the size of 3^^^3 is that it is so large that this state­ment is false.

Why would it ever be false, no mat­ter how large the num­ber?

Let S = the negated disutility of a speck, a small positive number. Let F = the utility of the good feeling of protecting someone from torture. Let P = the fraction of people who are like me (for whom F is positive), 0 < P ≤ 1. Then the total utility for N people, no matter what N, is N(P·F − S), which is > 0 as long as P·F > S.
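gRR’s formula can be checked with a toy calculation. This is only a sketch of the arithmetic; the function name and all numbers are arbitrary placeholders, not values anyone in the thread proposed:

```python
def total_utility(n_people, p_like_me, f_good_feeling, s_speck_disutility):
    """Total utility when a fraction P of N people gain F and everyone pays S."""
    return n_people * (p_like_me * f_good_feeling - s_speck_disutility)

# With P*F > S the total stays positive no matter how large N gets...
for n in (10**6, 10**100):
    assert total_utility(n, p_like_me=0.5, f_good_feeling=1.0,
                         s_speck_disutility=0.001) > 0

# ...and with P*F < S it stays negative, again independently of N.
assert total_utility(10**100, 0.5, 1.0, 0.6) < 0
```

The sign of N(P·F − S) never depends on N, which is exactly why the size of 3^^^3 does no work in this particular argument.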

I don’t ac­cept that util­ity is ad­di­tive.

Well, we can agree that util­ity is com­pli­cated. I think it’s pos­si­ble to keep it ad­di­tive by shift­ing com­plex­ities to the de­tails of its calcu­la­tion.

• F = util­ity of good feel­ing of pro­tect­ing some­one from tor­ture.

This knowledge among the participants is an addition to the thought experiment. The original question:

Which is worse: (a) 3^^^3 dust specks, or (b) one per­son tor­tured.

Your formulation:

Which is worse: (a) 3^^^3 dust specks, or (b) one person tortured AND 3^^^3 people empathizing with the suffering of that person.

No­tice how your for­mu­la­tion has 3^^^3 in both op­tions, while the origi­nal ques­tion does not.

• Yes, I stated and an­swered this ex­act ob­jec­tion two com­ments ago.

• I have come to be­lieve that—like a metaphor­i­cal Ground­hog Day—ev­ery con­ver­sa­tion on this topic is the same lines from the same play, with differ­ent ac­tors.

This is the part of the play where I re­peat more force­fully that you are fight­ing the hypo, but don’t seem to be re­al­iz­ing that you are fight­ing the hypo.

In the end, the les­son of the prob­lem is not about the bad­ness of tor­ture or what things count as pos­i­tive util­ity, but about learn­ing what com­mit­ments you make with var­i­ous as­ser­tions about the way moral de­ci­sions should be made.

• This is the part of the play where I re­peat more force­fully that you are fight­ing the hypo, but don’t seem to be re­al­iz­ing that you are fight­ing the hypo.

I don’t re­al­ize it ei­ther; I’m not sure that it’s true. For­give me if I’m miss­ing some­thing ob­vi­ous, but:

• gRR wants to in­clude the prefer­ences of the peo­ple get­ting dust-specked in his util­ity func­tion.

• But as you point out, he can’t; the hy­po­thet­i­cal doesn’t al­low it.

• So in­stead, he in­cludes his ex­trap­o­la­tion of what their prefer­ences would be if they were in­formed, and at­tempts to act on their be­half.

You can ar­gue that that’s a silly way to con­struct a util­ity func­tion (you seem to be head­ing that way in your third para­graph), but that’s a differ­ent ob­jec­tion.

• If you want to an­swer a ques­tion that isn’t asked by the hy­po­thet­i­cal, you are fight­ing the hypo. That’s ba­si­cally the paradig­matic ex­am­ple of “fight­ing the hypo.”

I think gRR has the right answer to the question he is asking. But it is a different question than the one Eliezer was asking, and it teaches different lessons. To the extent that gRR thinks he has rebutted the lessons from Eliezer’s question, he’s incorrect.

• I’m not sure why you think I’m asking a different question. Do you mean to say that in Eliezer’s original problem all of the utilities are fixed, including mine? But then the question appears entirely without content:

“Here are two num­bers, this one is big­ger than that one, your task is to always choose the biggest num­ber. Now which num­ber do you choose?”

Besides, if this is indeed what Eliezer meant, then his choice of “torture” for one of the numbers is inconsistent. Torture always has utility implications for other people, not just the person being tortured. I hypothesize that this is what makes it different (non-additive, non-commensurable, etc.) for some moral philosophers.

• As fubarobfusco pointed out, your ar­gu­ment in­cludes the im­pli­ca­tion that dis­cov­er­ing or pub­li­ciz­ing un­pleas­ant truths can be morally wrong (be­cause the par­ti­ci­pants were ig­no­rant in the origi­nal for­mu­la­tion). It’s not ob­vi­ous to me that any moral the­ory is com­mit­ted to that po­si­tion.

And with­out that moral con­clu­sion, I think Eliezer is cor­rect that a to­tal util­i­tar­ian is com­mit­ted to be­liev­ing that choos­ing TORTURE over SPECKS max­i­mizes to­tal util­ity. The re­pug­nant con­clu­sion re­ally is that re­pug­nant. All of that was not an ob­vi­ous re­sult to me.

• Any util­ity func­tion that does not give an ex­plicit over­whelm­ingly pos­i­tive value to truth, and does give an ex­plicit pos­i­tive value to “plea­sure” would ob­vi­ously in­clude the im­pli­ca­tion that dis­cov­er­ing or pub­li­ciz­ing un­pleas­ant truths can be morally wrong. I don’t see why it is rele­vant.

If all the util­ities are speci­fied by the prob­lem text com­pletely, then TORTURE max­i­mizes the to­tal util­ity by defi­ni­tion. There’s noth­ing to be com­mit­ted about. But in this case, “tor­ture” is just a la­bel. It can­not re­fer to a real tor­ture, be­cause a real tor­ture would pro­duce differ­ent util­ity changes for peo­ple.

• It sounds to me as if you’re as­sert­ing that the ig­no­rance of the 3^^^3 peo­ple to the fact that their speck­less­ness de­pends on tor­ture, makes a pos­i­tive moral differ­ence in the mat­ter.

• It sounds to me as if you’re as­sert­ing that the ig­no­rance of the 3^^^3 peo­ple to the fact that their speck­less­ness de­pends on tor­ture, makes a pos­i­tive moral differ­ence in the mat­ter.

That doesn’t seem unreasonable. That knowledge is probably worse than the speck.

• Sure, it does have the odd im­pli­ca­tion that dis­cov­er­ing or pub­li­ciz­ing un­pleas­ant truths can be morally wrong, though.

• That’s a re­ally good point. Does the “re­pug­nant con­clu­sion” prob­lem for to­tal util­i­tar­i­ans im­ply that they think in­form­ing oth­ers of bad news can be morally wrong in or­di­nary cir­cum­stances? Or just the product of a poor defi­ni­tion of util­ity?

I take it as fairly un­con­tro­ver­sial that a benev­olent lie when no changes in de­ci­sion by the listener are pos­si­ble is morally ac­cept­able. That is, falsely say­ing “Your son sur­vived the plane crash” to the father who is liter­ally mo­ments from dy­ing seems morally ac­cept­able be­cause the father isn’t go­ing to de­cide any­thing differ­ently based on that state­ment. But that’s an un­usual cir­cum­stance, so I don’t think it should trou­ble us.

Those of us who think tor­ture is worse (i.e. are not to­tal util­i­tar­i­ans) prob­a­bly are not com­mit­ted to any po­si­tion on the re­veal­ing-un­pleas­ant-truths-co­nun­drum. Right?

• That is, falsely say­ing “Your son sur­vived the plane crash” to the father who is liter­ally mo­ments from dy­ing seems morally ac­cept­able be­cause the father isn’t go­ing to de­cide any­thing differ­ently based on that state­ment. But that’s an un­usual cir­cum­stance, so I don’t think it should trou­ble us.

Agreed. Ly­ing to oth­ers to ma­nipu­late them de­prives them of the abil­ity to make their own choices — which is part of com­plex hu­man val­ues — but in this case the father doesn’t have any rele­vant choice to de­prive him of.

Those of us who think tor­ture is worse (i.e. are not to­tal util­i­tar­i­ans) prob­a­bly are not com­mit­ted to any po­si­tion on the re­veal­ing-un­pleas­ant-truths-co­nun­drum. Right?

Not that I can tell.

I sup­pose an­other way of look­ing at this is a col­lec­tive-ac­tion or ex­trap­o­lated-vo­li­tion prob­lem. Each in­di­vi­d­ual in the SPECKS case might pre­fer a mo­men­tary dust speck over the knowl­edge that their mo­men­tary com­fort im­plied some­one else’s 50 years of tor­ture. How­ever, a con­se­quen­tial­ist agent choos­ing TORTURE over SPECKS is do­ing so in the be­lief that SPECKS is ac­tu­ally worse. Can that agent be im­ple­ment­ing the ex­trap­o­lated vo­li­tion of the in­di­vi­d­u­als?

• Well, OK, sure, but… can’t any­thing fol­low from ex­pected util­ity max­i­miza­tion, the way you’re ap­proach­ing it? For all (X, Y), if some­one chooses X over Y, that can di­rectly fol­low from ex­pected util­ity max­i­miza­tion, if the util­ity func­tion val­ues X more than Y.

If that means the choice of X over Y is not nec­es­sar­ily wrong, OK, but it seems there­fore to fol­low that no choice is nec­es­sar­ily wrong.

I sus­pect I’m still miss­ing your point.

• Given: a para­dox­i­cal (to ev­ery­body ex­cept some moral philoso­phers) an­swer “TORTURE” ap­pears to fol­low from ex­pected util­ity max­i­miza­tion.

Pos­si­bil­ity 1: the the­ory is right, ev­ery­body is wrong.

But in the domain of moral philosophy, our preferences should be treated with more respect than elsewhere. We cherish some of our biases. They are what makes us human, and we wouldn’t want to lose them, even if they sometimes give an “inefficient” answer from the point of view of the simplest greedy utility function.

These biases are probably reflectively consistent—even if we knew more, we would still wish to have them. At least, I can hypothesize that they are so, until proven otherwise. Simply showing me the inefficiency doesn’t make me wish not to have the bias. I value efficiency, but I value my humanity more.

Pos­si­bil­ity 2: the the­ory (ex­pected util­ity max­i­miza­tion) is wrong.

But the the­ory is rather nice and el­e­gant, I wouldn’t wish to throw it away. So, maybe there’s an­other way to fix the para­dox? Maybe, some­thing wrong with the prob­lem defi­ni­tion? And lo and be­hold—yes, there is.

Pos­si­bil­ity 3: the prob­lem is wrong

As the problem is stated, the preferences of the 3^^^3 people are not taken into account. It is assumed that the people don’t know and will never know about the situation—because their total utility change from the whole affair is either nothing or a single small negative value.

If peo­ple were aware of the situ­a­tion, their util­ity changes would be differ­ent—a large nega­tive value from know­ing about the tor­tured per­son’s plight and be­ing forcibly for­bid­den to help, or a pos­i­tive value from know­ing they helped. Well, there would also be a nega­tive value from moral philoso­phers who would know and worry about in­effi­ciency, but I think it would be a rel­a­tively small value, af­ter all.

Unfortunately, in the context of the problem, the people are unaware. The choice for the whole of humanity is given to me alone. What should I do? Should I play dictator and make a choice that would be repudiated by everyone, if only they knew? This seems wrong, somehow. Oh! I can simulate them, ask what they would prefer, and give their preference a positive term within my own utility function. I would be the representative of the people in a government, or an AI trying to implement their CEV.

Re­sult: SPECKS!! Hur­ray! :)

• OK. I think I un­der­stand you now. Thanks for clar­ify­ing.

• I feel like this is mis­in­ter­pret­ing gRR’s com­ment. gRR is not claiming that nonu­til­i­tar­ian choices are prefer­able be­cause the util­ity func­tion has non-zero terms for prefer­ences of other peo­ple. It is a nec­es­sary con­di­tion, but not a suffi­cient one.

My model of other peo­ple says that a sig­nifi­cantly smaller per­centage of peo­ple would ac­cept los­ing a dol­lar in or­der to grant one per­son ten grand, than would ac­cept a dust speck in or­der to save one per­son 50 years of tor­ture.

• My model of other peo­ple says that a sig­nifi­cantly smaller per­centage of peo­ple would ac­cept los­ing a dol­lar in or­der to grant one per­son ten grand, than would ac­cept a dust speck in or­der to save one per­son 50 years of tor­ture.

As does mine.

gRR is not claiming that nonu­til­i­tar­ian choices are prefer­able be­cause the util­ity func­tion has non-zero terms for prefer­ences of other peo­ple. It is a nec­es­sary con­di­tion, but not a suffi­cient one.

That’s con­sis­tent with my un­der­stand­ing of their claim as well.

I feel like this is mis­in­ter­pret­ing gRR’s com­ment.

Can you ex­pand fur­ther on why you feel like this?

• Sure, al­though up­dat­ing upon read­ing your re­sponse, I now sus­pect that I have mis­in­ter­preted your com­ment. But I’ll ex­plain how I saw things when I first com­mented.

Ba­si­cally it looked like you were per­ceiv­ing gRR’s ar­gu­ment as a spe­cific in­stance of the fol­low­ing gen­eral ar­gu­ment:

(1a) lots of peo­ple might agree to take a small de­crease in util­ity in or­der to provide lots of util­ity /​ avoid lots of di­su­til­ity for an in­di­vi­d­ual even if the to­tal de­crease in util­ity over all the peo­ple is sub­stan­tially larger than the in­di­vi­d­ual util­ity granted /​ di­su­til­ity averted

(2a) when­ever lots of peo­ple would agree to that, it is a good idea to do it

(3) there­fore it is a good idea to take small amounts of util­ity from many peo­ple to give lots of util­ity /​ pre­vent lots of di­su­til­ity to one per­son pro­vided all/​an over­whelming ma­jor­ity of the peo­ple agree to it

You were then trying to reveal the fault in gRR’s general argument by presenting a different example ($1m → $10k) and asking if the same argument would still hold there (which you presume it wouldn’t). Then you suggested throwing in another premise, (1b) I have nonzero terms for others’ preferences, presumably replacing (2a) by (2b), which adds the requirement of (1b), and asking if that would make the argument hold.

But gRR was not as­sert­ing that gen­eral ar­gu­ment—in par­tic­u­lar, not premise (2a)/​(2b). So it seemed like you seemed to be try­ing to tear down an ar­gu­ment that gRR was not con­struct­ing.

• Con­versely, if you asked some­body if they’d be will­ing to be tor­tured for 50 years in or­der to save 3^^^3 peo­ple from get­ting each a dust speck in the eye, they’d likely say NO FREAKIN’ WAY!!!.

BTW, wel­come to Less Wrong—you can in­tro­duce your­self in the wel­come thread.

• Th­ese eth­i­cal ques­tions be­come rele­vant if we’re im­ple­ment­ing a Friendly AI, and they are only of aca­demic in­ter­est if I in­ter­pret them liter­ally as a ques­tion about me.

If it’s a question about me, I’d probably go with the dust specks. A small fraction of those people will have time to get to me, and none of them are likely to bother me over just a dust speck. If I were to advocate the torture, the victim or someone who knows him might find me and try to get revenge. I just gave you a data point about the psychology of one unmodified human, which is relatively useless, so I don’t think that’s the question you really wanted answered.

Per­haps the ques­tion is re­ally what a non-buggy om­nipo­tent Friendly AI would do. If it has been con­structed to care equally about that ab­surd num­ber of peo­ple, IMO it should choose tor­ture. If it’s not om­nipo­tent, then it has to con­sider re­venge of the vic­tim, so the cor­rect an­swer de­pends on the de­tails of how om­nipo­tent it isn’t.

• So, I’m very late into this game, and not through all the se­quences (where the an­swer might already be given), but still, I am very in­ter­ested in your po­si­tions (prob­a­bly no­body an­swers, but who knows):

1. Is there a nat­u­ral num­ber N for which you’d kill one per­son vs. giv­ing N peo­ple one-sin­gle dust-speck? (I as­sume this de­pends on whether one ex­pects an ever-last­ing uni­verse.)

2. Do you “in­te­grate” util­ity over time (or “ex­pe­rience-mo­ments”, as per time­less bla), or is it bet­ter to just max­i­mize the “fi­nal” point, how­ever one got there?

3. Does break­ing up the util­ity func­tion into sev­eral cat­e­gories re­ally al­low dutch-book­ing, as is in­di­cated in one of the com­ments? (I hope you un­der­stand what I mean with the cat­e­gories; you’ve a to­tal strict-or­der for them, with no two iden­ti­cal, el­e­ments within cat­e­gories “add up”, but not even an in­finite num­ber of “bad” things in one cat­e­gory can add up to a sin­gle one in the next higher one)

4. If “no” for 3, then: For a (cur­rent) hu­man we only have neu­rons, and a real break-point can prob­a­bly not be de­ter­mined; but a re-en­g­ineered per­son could im­ple­ment such a thing. Is it then prefer­able?

I ex­pect “yes” for 1, and I have to ex­pect “yes” for 3 (I per­son­ally do not see this, but I’m bad at math, and have to trust the com­ments any­way). If “no” for 3, I still ex­pect “no” for 4, per sim­plic­ity-ar­gu­ment, re­told many times.

I’m very curious about the answer to question 2. Once Eliezer quoted “the end does not justify the means”, but this sentence is so very much re-interpretable that it’s worthless (even if he said otherwise). But as per updating: why should the order in which information is revealed change the final result? Whatever.

When the an­swers of these ques­tions are some­where in the se­quences, just ig­nore this, I will sooner or later get to them.

• Is there a nat­u­ral num­ber N for which you’d kill one per­son vs. giv­ing N peo­ple one-sin­gle dust-speck? (I as­sume this de­pends on whether one ex­pects an ever-last­ing uni­verse.)

I don’t think this question (or the one discussed in the OP) admits meaningful answers. It seems a pity to just ‘pour cold water over them’, but I don’t know what else to say—whatever ‘moral truths’ there are in the world simply don’t reach as far as such absurd scenarios.

Do you “in­te­grate” util­ity over time (or “ex­pe­rience-mo­ments”, as per time­less bla), or is it bet­ter to just max­i­mize the “fi­nal” point, how­ever one got there?

Depends what game you’re play­ing, surely. If you’re play­ing ‘In­vest For Re­tire­ment’ and the util­ity func­tion mea­sures the size of your re­tire­ment fund, then nat­u­rally the ‘fi­nal’ point is what mat­ters.

On the other hand, if you’re play­ing ‘En­joy Your Re­tire­ment’ and the util­ity func­tion mea­sures how much money you have to spend on a monthly ba­sis, then what’s im­por­tant is the “in­te­grated” util­ity.

Two points of in­ter­est here:

(1) Fi­nal util­ity in ‘In­vest for re­tire­ment’ equals in­te­grated util­ity in ‘En­joy your re­tire­ment’ (mod­ulo some faf­fing around with dis­count rates).

(2) The game of ‘En­joy your re­tire­ment’ is no­table in­so­far as it’s a game with a guaran­teed fi­nal util­ity of zero (or -in­finity if you pre­fer).

• Tor­ture is not the ob­vi­ous an­swer, be­cause tor­ture-based suffer­ing and dust-speck-based suffer­ing are not scalar quan­tities with the same units.

To be able to make a comparison between two quantities, the units must be the same. That’s why we can say that 3 people suffering torture for 49.99 years is worse than 1 person suffering torture for 50 years. Intensity × Duration × Number of People gives us units of PainIntensity-Person-Years, or something like that.

Yet tor­ture-based suffer­ing and dust-speck-based suffer­ing are not mea­sured in the same units. Con­se­quently, we can­not solve this ques­tion as a sim­ple math prob­lem. For ex­am­ple, the cor­rect units of tor­ture-based suffer­ing might in­volve San­ity-De­stroy­ing-Pain. There is no rea­son to be­lieve that we can quan­ti­ta­tively com­pare Easily-Re­cov­er­able-Pain to San­ity-De­stroy­ing-Pain; at least, the com­par­i­son is not just a math prob­lem.

To be able to do the math, we would have to convert both types of suffering to the same units of disutility. Some folks here seem to think that no matter what the conversion functions are, 3^^^3 is just so big that the converted disutility of 3^^^3 dust specks is greater than the converted disutility of 50 years of torture for one person. But determination of the correct disutility conversion functions is itself a philosophical problem that cannot be waved away, and it’s impossible to evaluate that claim until those conversion functions have at least been hinted at.

One way to get differ­ent types of suffer­ing to have the same units would be to rep­re­sent them as vec­tors, and find a way to get the mag­ni­tude of those vec­tors.

The torture position seems to do the math by using pain intensity as a scalar. Yet there is no reason to believe that suffering is a scalar quantity, or that the disutility accorded to suffering is a scalar quantity. Even pain intensity is a case where “quantity has a quality all of its own”: as you increase it, the suffering goes through qualitative changes. For example, if just a 10% increase in pain duration/intensity causes Post-Traumatic Stress Disorder, that pain is more than 10% worse, because it’s a qualitatively different type of suffering. The units change.

Suffer­ing may well be bet­ter rep­re­sented as a vec­tor. Other di­men­sions in the vec­tor might in­clude vari­ables such as chance of Post-Trau­matic Stress Di­sor­der (0 in the case of dust specks which are un­com­fortable but not trau­matic, and ap­proach­ing 100% in the case of tor­ture), non-re­cov­ery chance (0% in the case of dust specks, ap­proach­ing 100% in the case of tor­ture), re­cov­ery time (<1 sec­ond in the case of dust specks, ap­proach­ing in­finity in the case of 50 years of tor­ture), in­san­ity, hu­man rights vi­o­la­tion, ca­reer-de­struc­tion, men­tal-health de­struc­tion, life de­struc­tion...

Choice of pain in­ten­sity only over other vari­ables rele­vant to suffer­ing is beg­ging the ques­tion. We could cherry-pick an­other di­men­sion out of the vec­tor to get a differ­ent re­sult, such as life de­struc­tion. LifeDestruc­tionChance(50YearsOfTor­ture) could be greater than LifeDestruc­tionChance(DustSpeck) * 3^^^3 (I might be com­mit­ting scope in­sen­si­tivity say­ing this, but the point is that the an­swer isn’t self-ev­i­dent). Of course, life de­struc­tion isn’t the only rele­vant vari­able to the calcu­la­tion of suffer­ing, but nei­ther is pain in­ten­sity.

Now, if there is a way to take the mag­ni­tude of a suffer­ing vec­tor (an­other philo­soph­i­cal prob­lem), it’s not at all self-ev­i­dent that Mag­ni­tude( Speck­Vec­tor ) * 3^^^3 > Mag­ni­tude( 50YearsOfTor­tureVec­tor), be­cause the Speck­Vec­tor has vir­tu­ally all its di­men­sions ap­proach­ing 0 while the Tor­tureVec­tor has many di­men­sions ap­proach­ing in­finity or their max value (which I think re­flects why peo­ple think tor­ture is so bad). That would de­pend on what the di­men­sions of those vec­tors are and how the mag­ni­tude func­tion works.
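The vector idea can be sketched in a few lines. Everything here is invented for illustration — the dimensions, their values, and both aggregation rules are hypothetical, not anything the comment specifies:

```python
import math

# Hypothetical suffering "vectors": (pain intensity, PTSD chance,
# non-recovery chance), each scaled 0..1. Values are placeholders.
speck = (1e-9, 0.0, 0.0)
torture = (1.0, 0.99, 0.95)
N = 10**100  # stand-in for an astronomically large number of specks

def additive_magnitude(v, count=1):
    # Treats disutility as a scalar that adds linearly across people.
    return count * math.sqrt(sum(x * x for x in v))

def capped_magnitude(v, count=1):
    # Aggregates each dimension by its per-person worst case, so
    # repetition never pushes any dimension past its individual value.
    return math.sqrt(sum(x * x for x in v))

# Under the additive rule, enough specks always outweigh the torture...
assert additive_magnitude(speck, N) > additive_magnitude(torture)
# ...under the capped rule, they never do.
assert capped_magnitude(speck, N) < capped_magnitude(torture)
```

The same two vectors give opposite verdicts depending on the magnitude function, which is the comment’s point: the comparison is not settled until that function is specified.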

• But de­ter­mi­na­tion of the cor­rect di­su­til­ity con­ver­sion func­tions is it­self a philo­soph­i­cal prob­lem that can­not be waved away, and it’s im­pos­si­ble to eval­u­ate that claim un­til those con­ver­sion func­tions have at least been hinted at.

You seem to have got­ten hung up on 3^^^3, which is re­ally just a place­holder for “some finite num­ber so large it bog­gles the mind”. If you ac­cept that all types of pain can be mea­sured on a com­mon di­su­til­ity scale, then all you need is a non-zero con­ver­sion fac­tor, and the re­pug­nant con­clu­sion fol­lows (for some mind-bog­glingly large num­ber of specks). I think that if a line of ar­gu­ment that res­cues your re­but­tal ex­ists, it in­volves lex­i­co­graphic prefer­ences.
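The “any non-zero conversion factor” point can be made concrete with placeholder numbers (both values below are arbitrary; only the structure of the argument matters):

```python
import math

# If speck-pain converts to the torture scale at ANY non-zero rate c,
# some finite number of specks exceeds the torture's disutility.
torture_disutility = 1.0e12   # 50 years of torture, arbitrary units
c = 1.0e-30                   # one speck converted to the same units

specks_needed = math.ceil(torture_disutility / c)
assert specks_needed * c >= torture_disutility
# specks_needed is finite (about 10**42 here), and 3^^^3 dwarfs
# any such finite threshold, however small c is chosen.
```

Only a strictly zero (or lexicographically subordinate) conversion factor blocks this, which is why the reply points at lexicographic preferences as the one escape route.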

• The ques­tion is, of course, silly. It is perfectly ra­tio­nal to de­cline to an­swer. I choose to try to an­swer.

It is also perfectly rational to say “it depends”. If you really think “a dust speck in 3^^^3 eyes” gives a uniquely defined probability distribution over different subsets of possibilityverse, you are being ridiculous. But let’s pretend it did—let’s pretend we had 3^^^^3 parallel Eliezers, standing on flat golden surfaces in 1G and one atmosphere, for just long enough to ask each other enough questions to define the problem properly. (I’m sorry, Eliezer, if by stating that possibility, I’ve increased the “true”ness of that part of the probabilityverse by ((3^^^3+1)/3^^^3) :) ).

You can say “I’ve thought about it, but I don’t trust my thought processes”. That is not my position.

My po­si­tion is that this ques­tion does not, in fact, have an an­swer. I think that that fact is very im­por­tant.

It’s not that the num­bers are mean­ingless. 3^^^3 is a very ex­act num­ber, and you can prove any num­ber of things about it. A differ­ent ques­tion us­ing ridicu­lous num­bers—say, would you rather tor­ture 4^^^4 peo­ple for 5 min­utes or 3^^^3 of them for 50 years—has a sin­gle cor­rect an­swer which is very clear (of course, the 3^^^3 ones; 4^^^4 >>> (3^^^3)^2). (Un­less there were very bizarre ex­tra con­di­tions on the prob­lem.)
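The commenter’s claim that the numbers themselves are exact is easy to support: Knuth’s up-arrow recursion can be written down directly, though it is only evaluable for tiny arguments (a sketch; the function name is mine):

```python
def up_arrow(a, n_arrows, b):
    """Knuth's up-arrow: a^b for one arrow; each extra arrow iterates
    the previous operation b times starting from 1."""
    if n_arrows == 1:
        return a ** b
    result = 1
    for _ in range(b):
        result = up_arrow(a, n_arrows - 1, result)
    return result

assert up_arrow(3, 1, 3) == 27             # 3^3
assert up_arrow(2, 2, 3) == 16             # 2^^3 = 2^(2^2)
assert up_arrow(3, 2, 3) == 7625597484987  # 3^^3 = 3^27
# 3^^^3 = 3^^7625597484987 is defined just as precisely, but is far
# beyond anything this function could ever return.
```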

It’s just that there is no uni­ver­sal moral util­ity func­tion which in­puts a prob­a­bil­ity dis­tri­bu­tion over a finite sub­set of the pos­si­bil­i­ty­verse and out­puts a num­ber. It’s more like rel­a­tivis­tic causal­ity—sub­sti­tute “bet­ter” for “af­ter”. A is af­ter B and B is a spacelike dis­tance from C, but C can also be spacelike from A. The dust specks and the tor­ture are in­com­pa­rable, a spacelike dis­tance.

I think that, philosophically, that makes a big difference. If you philosophically can’t always go around morally comparing near-infinite sets, then it’s silly to try to approximate how you would behave if you could. Which means you consider the moral value of the consequences which you could possibly anticipate. So yeah, if you are working on AI, you are morally obligated to think about FAI, because that’s intentional action, and you would have to be a monster to say you didn’t care. But you don’t get to use FAI and the singularity to trump the here-and-now, because in many ways they’re just not comparable.

Which means, to me, for in­stance, that peo­ple can un­der­stand the sin­gu­lar­ity idea and be­lieve it has a non-0 prob­a­bil­ity, and have abil­ities or re­sources that would be mean­ingful to the FAI effort, and still morally choose to sim­ply live as “good peo­ple” in a more tra­di­tional sense (have a good life in which they make the peo­ple with whom they in­ter­act over­all hap­pier). It’s not just a lack of abil­ity to trace the con­se­quences; it’s also the pos­si­bil­ity that the con­se­quences of this or that out­come will be liter­ally in­com­pa­rable by any finite halt­ing al­gorithm, whereas even our des­per­ately-limited brains have de­cent ap­prox­i­ma­tions of al­gorithms for morally com­par­ing the effect of, say, post­ing on OB ver­sus wash­ing the dishes.

Go­ing to wash the dishes now.

• Bog­dan’s pre­sented al­most ex­actly the ar­gu­ment that I too came up with while read­ing this thread. I would choose the specks in that ar­gu­ment and also in the origi­nal sce­nario (as long as I am not com­mit­ting to the same choice be­ing re­peated an ar­bi­trary num­ber of times, and I am not caus­ing more peo­ple to crash their cars than I cause not to crash their cars; the lat­ter seems like an un­likely as­sump­tion, but thought ex­per­i­ments are al­lowed to make un­likely as­sump­tions, and I’m in­ter­ested in the moral ques­tion posed when we ac­cept the as­sump­tion). Based on the com­ments above, I ex­pect that Eliezer is perfectly con­sis­tent and would choose tor­ture, though (as in the sce­nario with 3^^^3 re­peated lives).

Eliezer and Mar­cello do seem to be cor­rect in that, in or­der to be con­sis­tent, I would have to choose a cut-off point such that n dust specks in 3^^^3 eyes would be less bad than one tor­ture, but n+1 dust specks would be worse. I agree that it seems coun­ter­in­tu­itive that adding just one speck could make the situ­a­tion “in­finitely” worse, es­pe­cially since the speck­ists won’t be able to agree ex­actly where the cut-off point is.

But it’s only the infinity that’s unique to speckism. Suppose that you had to choose between inflicting one minute of torture on one person, or putting n dust specks into that person’s eye over the next fifty years. If you’re a consistent expected utility altruist, there must be some n such that you would choose n specks, but not n+1 specks. What makes the n+1st speck different? Nothing; it just happens to be the cut-off point you must choose if you don’t want to choose 10^57 specks over torture, nor torture over zero specks. If you make ten altruists consider the question independently, will they arrive at exactly the same value of n? Probably not.
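The cutoff described above is easy to exhibit in a toy model. Here is a minimal sketch under linear aggregation, with purely illustrative disutility numbers (nothing in the thread fixes the actual values):

```python
SPECK = 1.0            # assumed disutility of one dust speck (arbitrary units)
TORTURE_MINUTE = 1e9   # assumed disutility of one minute of torture

def prefers_specks(n: int) -> bool:
    """True if n specks are judged less bad than one minute of torture."""
    return n * SPECK < TORTURE_MINUTE

# Under linear aggregation there must be a largest acceptable n:
n = int(TORTURE_MINUTE / SPECK) - 1
assert prefers_specks(n) and not prefers_specks(n + 1)
```

Any consistent assignment of finite disutilities produces such an n; only its exact location varies from altruist to altruist.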

The above ar­gu­ment does not de­stroy my faith in de­ci­sion the­ory, so it doesn’t de­stroy my pro­vi­sional ac­cep­tance of speck­ism, ei­ther.

• Jeffrey wrote: To me, this spe­cific ex­er­cise re­duces to a sim­pler ques­tion: Would it be bet­ter (more eth­i­cal) to tor­ture in­di­vi­d­ual A for 50 years, or in­flict a dust speck on in­di­vi­d­ual B? Gosh. The only jus­tifi­ca­tion I can see for that equiv­alence would be some gen­eral be­lief that bad­ness is sim­ply in­de­pen­dent of num­bers. Sup­pose the ques­tion were: Which is bet­ter, for one per­son to be tor­tured for 50 years or for ev­ery­one on earth to be tor­tured for 49 years? Would you re­ally choose the lat­ter? Would you not, in fact, jump at the chance to be the sin­gle per­son for 50 years if that were the only way to get that out­come rather than the other one?

In any case: since you now ap­pear to be con­ced­ing that it’s pos­si­ble for some­one to pre­fer TORTURE to SPECKS for rea­sons other than a childish de­sire to shock, are you re­tract­ing your origi­nal ac­cu­sa­tion and anal­y­sis of mo­tives? … Oh, wait, I see you’ve ex­plic­itly said you aren’t. So, you know that one lead­ing pro­po­nent of the TORTURE op­tion ac­tu­ally does care about hu­man­ity; you agree (if I’ve un­der­stood you right) that util­i­tar­ian anal­y­sis can lead to the con­clu­sion that TORTURE is the less-bad op­tion; I as­sume you agree that rea­son­able peo­ple can be util­i­tar­i­ans; you’ve seen that one per­son ex­plic­itly said s/​he’d be will­ing to be the one tor­tured; but in spite of all this, you don’t re­tract your char­ac­ter­i­za­tion of that view as shock­ing; you don’t re­tract your im­pli­ca­tion that peo­ple who ex­pressed a prefer­ence for TORTURE did so be­cause they want to show how un­com­pro­mis­ingly ra­tio­nal­ist they are; you don’t re­tract your im­pli­ca­tion that those peo­ple don’t ap­pre­ci­ate that real de­ci­sions have real effects on real peo­ple. I find that … well, “fairly shock­ing”, ac­tu­ally.

(It shouldn’t mat­ter, but: I was not one of those ad­vo­cat­ing TORTURE, nor one of those op­pos­ing it. If you care, you can find my opinions above.)

• The nation of Nod has a population of 3^^^3. By amazing coincidence, every person in the nation of Nod has \$3^^^3 in the bank. (With a money supply like that, those dollars are not worth much.) By yet another coincidence, the government needs to raise revenues of \$3^^^3. (It is a very efficient government and doesn’t need much money.) Should the money be raised by taking \$1 from each person, or by simply taking the entire amount from one person?

• I find it pos­i­tively bizarre to see so much in­ter­est in the ar­ith­metic here, as if know­ing how many dust flecks go into a year of tor­ture, just as one knows that six­teen ounces go into one pint, would in­form the an­swer.

What hap­pens to the de­bate if we ab­solutely know the equa­tion:

3^^^3 dust­flecks = 50 years of tor­ture

or

3^^^3 dust­flecks = 600 years of torture

or

3^^^3 dustflecks = 2 years of torture?

• I’ll take it, as long as it’s no more likely to be one of the ear­liest lives. I don’t trust any uni­verse that can make 3^^^3 of me not to be a simu­la­tion that would get pul­led early.

Hrm… Re­cov­er­ing’s in­duc­tion ar­gu­ment is start­ing to sway me to­ward TORTURE.

In­ter­est­ing. The idea of con­vinc­ing oth­ers to de­cide TORTURE is both­er­ing me much more than my own de­ci­sion.

I hope these ideas never get ar­gued out of con­text!

• I think we may be at cross pur­poses; my apolo­gies if we are and it’s my fault. Let me try to be clearer.

Any par­tic­u­lar util­ity func­tion (if it’s real-val­ued and to­tal) “begs the ques­tion” in the sense that it ei­ther prefers SPECKS to TORTURE, or prefers TORTURE to SPECKS, or puts them ex­actly equal. I don’t see how this can pos­si­bly be con­sid­ered a defect, but if it is one then all util­ity func­tions have it, not just ones that pre­fer SPECKS to TORTURE.

Say­ing “Clearly SPECKS is bet­ter than TORTURE, be­cause here’s my util­ity func­tion and it says SPECKS is bet­ter” would be beg­ging the ques­tion (ab­sent ar­gu­ments in sup­port of that util­ity func­tion). I don’t see any­one do­ing that. Neel’s say­ing “You can’t rule out the pos­si­bil­ity that SPECKS is bet­ter than TORTURE by say­ing that no real util­ity func­tion prefers SPECKS, be­cause here’s one pos­si­ble util­ity func­tion that says SPECKS is bet­ter”. So far as I can tell you’re re­ject­ing that ar­gu­ment on the grounds that any util­ity func­tion that prefers SPECKS is ipso facto ob­vi­ously un­ac­cept­able; that is beg­ging the ques­tion.
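Neel’s point is easy to make concrete: here is one toy utility function that prefers TORTURE and one that prefers SPECKS, with all disutility numbers hypothetical. The second saturates, so no number of specks, however vast, can sum past the torture:

```python
import math

TORTURE = 1e9    # assumed disutility of fifty years of torture
SPECK = 1e-6     # assumed disutility of one dust speck

def linear(n_specks):
    """Straight summation: aggregate disutility grows without bound."""
    return n_specks * SPECK

def bounded(n_specks):
    """Aggregate speck disutility saturates below 1.0, so it can never
    exceed TORTURE no matter how many people are specked."""
    return 1.0 - math.exp(-n_specks * SPECK)

huge = 10 ** 100   # stand-in for 3^^^3, far too large to represent directly

assert linear(huge) > TORTURE    # the linear altruist picks TORTURE
assert bounded(huge) < TORTURE   # the bounded altruist picks SPECKS
```

Neither function is argued for here; the sketch only shows that real-valued, total utility functions preferring SPECKS do exist, which is all Neel’s argument needs.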

• Hrm… Re­cov­er­ing’s in­duc­tion ar­gu­ment is start­ing to sway me to­ward TORTURE.

More to the point, that and some other comments are starting to sway me away from the thought that the disutility of single dust speck events per person becomes sublinear as the number of people experiencing it increases (but total population is held constant).

I think if I made some errors, they were partly caused by “I really don’t want to say TORTURE”, and partly caused by my mistaking the exact nature of the nonlinearity. I maintain “one person experiencing two dust specks” is not equal to, and actually worse, I think, than two people experiencing one dust speck, but now I’m starting to suspect that two people each experiencing one dust speck is exactly twice as bad as one person experiencing one dust speck. (Assuming, as we shift the number of people experiencing DSE, that we hold the total population constant.)

Thus, I’m go­ing to ten­ta­tively shift my an­swer to TORTURE.

• With so many so deep in re­duc­tion­ist think­ing, I’m com­pel­led to stir the pot by ask­ing how one jus­tifies the as­sump­tion that the SPECK is a net nega­tive at all, ag­gre­gate or not, ex­tended con­se­quences or not? Wouldn’t such a mild ir­ri­tant, over such a vast and di­verse pop­u­la­tion, act as an ex­cel­lent stim­u­lus for pos­i­tive adap­ta­tions (non-ge­netic, of course) and likely pos­i­tive ex­tended con­se­quences?

• Oh geez. Origi­nally I had con­sid­ered this ques­tion un­in­ter­est­ing so I ig­nored it, but con­sid­er­ing the in­creas­ing de­vo­tion to it in later posts, I guess I should give my an­swer.

My justification, but not my answer, depends upon how the change is made.

-If the offer is made to all of humanity before being implemented (“Do you want to be the ‘lots of people get specks’ race or the ‘one guy gets severe torture’ race?”) I believe people could all agree to the specks by “buying out” whoever eventually gets the torture. For an immeasurably small amount, less than the pain of a speck, they can together amass funds sufficient to return the torture to the individual’s indifference curve. OTOH, the person getting the torture couldn’t possibly buy out that many people. (In other words, the specks are Kaldor-Hicks efficient.)

-If the offer, at my decision, would just be thrown onto humanity without possibility of advance negotiation, I would still take the specks, because even if only the people who feel bad for the tortured make a small contribution, it will still be comparable to what they had to offer in the above paragraph; such is the nature of large numbers of people.

I don’t think this is the re­sult of my re­vul­sion to­ward the tor­ture, al­though I have that. I think my de­ci­sion stems from how such large (and su­per­lin­early in­creas­ing) util­ity differ­ences im­ply the pos­si­bil­ity of “evening it out” through some trans­fer.

• Re­cov­er­ing ir­ra­tional­ist: in your in­duc­tion ar­gu­ment, my first stab would be to deny the last premise (tran­si­tivity of moral judg­ments). I’m not sure why moral judg­ments have to be tran­si­tive.

Next, I’d deny the sec­ond-to-last premise (for one thing, I don’t know what it means to be hor­ribly tor­tured for the short­est pe­riod pos­si­ble—part of the tor­ture­ness of tor­ture is that it lasts a while).

• I agree with that. My point is that agree­ing that “A googol­plex peo­ple be­ing dust speck­led ev­ery sec­ond of their life with­out fur­ther ill effect is worse than one per­son be­ing hor­ribly tor­tured for the short­est pe­riod ex­pe­rien­ca­ble” doesn’t oblige me to agree that “A few billion* googol­plexes of peo­ple be­ing dust specked once with­out fur­ther ill effect is worse than one per­son be­ing hor­ribly tor­tured for the short­est pe­riod ex­pe­rien­ca­ble”.

Nei­ther would I, you don’t need to. :-)

The only rea­son I can pull this off is be­cause 3^^^3 is such a lu­dicrous num­ber of peo­ple, al­low­ing me to ac­tu­ally di­vide my army by a googol­plex a silly num­ber of times. You couldn’t cut the se­ries up fine enough with a mere six billion peo­ple.

If you agree with my first two statements listed, you can use them (and your vast googolplex-cutter-proof army) to infer a series of small steps from each of Eliezer’s options, meeting in the middle at my third statement in the list. You then have a series of steps where a is worse than b, b than c, c than d, all the way from SPECKS to my third statement to TORTURE.

If for some reason you object to one of the first 3 statements, my vast horde of 3^^^3 minions will just cut the series up even finer.

If that’s not clear it’s prob­a­bly my fault—I’ve never had to ex­plain any­thing like this be­fore.

if ev­ery one of those 3^^^3 peo­ple is will­ing to in­di­vi­d­u­ally suffer a dust speck in or­der to pre­vent some­one from suffer­ing tor­ture, is TORTURE still the right an­swer?

I sure would, but I wouldn’t ask 3^^^3 oth­ers to.

• All else equal, do you dis­agree with: “A googol­plex peo­ple dust specked x times dur­ing their life­time with­out fur­ther ill effect is worse than one per­son dust specked for x*2 times dur­ing their life­time with­out fur­ther ill effect” for the range con­cerned?

I agree with that. My point is that agree­ing that “A googol­plex peo­ple be­ing dust speck­led ev­ery sec­ond of their life with­out fur­ther ill effect is worse than one per­son be­ing hor­ribly tor­tured for the short­est pe­riod ex­pe­rien­ca­ble” doesn’t oblige me to agree that “A few billion* googol­plexes of peo­ple be­ing dust specked once with­out fur­ther ill effect is worse than one per­son be­ing hor­ribly tor­tured for the short­est pe­riod ex­pe­rien­ca­ble”. (Un­less “a fur­ther ill effect” is meant to ex­clude not only car ac­ci­dents but su­per­lin­ear per­sonal emo­tional effects, but that would be stupid.)

* 1 billion sec­onds = 31.7 years

I think that what we’re dealing with here is more like the irrationality of trying to impose and rationalize comfortable moral absolutes in defiance of expected utility.

Since real prob­lems never pos­sess the de­gree of cer­tainty that this dilemma does, hold­ing cer­tain heuris­tics as ab­solutes may be the util­ity-max­i­miz­ing thing to do. In a re­al­is­tic ver­sion of this prob­lem, you would have to con­sider the re­sults of em­pow­er­ing what­ever agent is do­ing this to tor­ture peo­ple with sup­pos­edly good but non­ver­ifi­able re­sults. If it’s a hu­man or group of hu­mans, not such a good idea; if it’s a Friendly AI, maybe you can trust it but can’t it figure out a bet­ter way to achieve the re­sult? (There is a Pas­cal’s Mug­ging prob­lem here.)

One more thing for TORTURErs to think about: if ev­ery one of those 3^^^3 peo­ple is will­ing to in­di­vi­d­u­ally suffer a dust speck in or­der to pre­vent some­one from suffer­ing tor­ture, is TORTURE still the right an­swer? I lean to­wards SPECK on con­sid­er­ing this, al­though I’m less sure about the case of tor­tur­ing 3^^^3 peo­ple for a minute each vs. 1 per­son for 50 years.

• A googol­plex peo­ple be­ing dust speck­led ev­ery sec­ond of their life with­out fur­ther ill effect

I don’t think this is di­rectly com­pa­rable, be­cause the di­su­til­ity of ad­di­tional dust speck­ing to one per­son in a short pe­riod of time prob­a­bly grows faster than lin­early—if I have to blink ev­ery sec­ond for an hour, I’ll prob­a­bly get ex­tremely frus­trated on top of the slight dis­com­fort of the specks them­selves. I would say that one per­son get­ting specked ev­ery sec­ond of their life is sig­nifi­cantly worse than a cou­ple billion peo­ple get­ting specked once.
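The superlinear intuition in this comment can be sketched with an assumed convex model (the quadratic is purely illustrative, not a claim about the true shape):

```python
def disutility(specks_per_person: int) -> int:
    # Hypothetical convex model: a person's discomfort grows with the
    # square of the specks they receive, since frustration compounds.
    return specks_per_person ** 2

one_person_many = disutility(3600)        # one person specked 3600 times
many_people_once = 3600 * disutility(1)   # 3600 people specked once each

# Under any convex per-person model, concentrating the specks is worse:
assert one_person_many > many_people_once
```

This is why the per-second-for-life scenario and the one-speck-each scenario aren’t directly comparable even when the total speck count matches.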

• I won­der if my an­swers make me fail some kind of test of AI friendli­ness. What would the friendly AI do in this situ­a­tion? Prob­a­bly write po­etry.

• Eliezer: Why does any­thing have a util­ity at all? Let us sup­pose there are some things to which we at­tribute an in­trin­sic util­ity, nega­tive or pos­i­tive—those are our moral ab­solutes—and that there are oth­ers which only have a deriva­tive util­ity, de­riv­ing from the in­trin­sic util­ity of some of their con­se­quences. This is cer­tainly one way to get in­com­men­su­rables. If pain has in­trin­sic di­su­til­ity and in­con­ve­nience does not, then no finite quan­tity of in­con­ve­nience can by it­self trump the im­per­a­tive of min­i­miz­ing pain. But if the in­con­ve­nience might give rise to con­se­quences with in­trin­sic di­su­til­ity, that’s differ­ent.

• Where there are no spe­cial in­flec­tion points, a bad re­peated ac­tion should be a bad in­di­vi­d­ual ac­tion, a good re­peated ac­tion should be a good in­di­vi­d­ual ac­tion. Talk­ing about the re­peated case changes your in­tu­itions and gets around your scope in­sen­si­tivity, it doesn’t change the nor­ma­tive shape of the prob­lem (IMHO).

Hmm, I see your point. I can’t help feeling that there are cases where repetition does matter, though. For instance, assuming for a moment that radical life-extension and the Singularity and all that won’t happen, and assuming that we consider humanity’s continued existence to be a valuable thing—how about the choice of having/not having children? Not having children causes a very small harm to everybody else in the same generation (they’ll have fewer people supporting them when old). Doesn’t your reasoning imply that every couple should be forced into having children even if they weren’t of the type who’d want that (the “torture” option), to avoid causing a small harm to all the others? This even though society could continue to function without major trouble even if a fraction of the population did choose to remain childfree, for as long as sufficiently many others had enough children?

• “An op­tion that dom­i­nates in finite cases will always prov­ably be part of the max­i­mal op­tion in finite prob­lems; but in in­finite prob­lems, where there is no max­i­mal op­tion, the dom­i­nance of the op­tion for the in­finite case does not fol­low from its dom­i­nance in all finite cases.”

From Peter’s proof, it seems like you should be able to prove that an ar­bi­trar­ily large (but finite) util­ity func­tion will be dom­i­nated by events with ar­bi­trar­ily large (but finite) im­prob­a­bil­ities.

“Robin Han­son was cor­rect, I do think that TORTURE is the ob­vi­ous op­tion, and I think the main in­stinct be­hind SPECKS is scope in­sen­si­tivity.”

And so we come to the billion-dol­lar ques­tion: Will scope in­sen­si­tivity of this type be elimi­nated un­der CEV? So far as I can tell, a util­ity func­tion is ar­bi­trary; there is no truth which de­stroys it, and so the FAI will be un­able to change around our renor­mal­ized util­ity func­tions by cor­rect­ing for fac­tual in­ac­cu­racy.

“Which ex­act per­son in the chain should first re­fuse?”

The point at which the negative utility of people catching on fire exceeds the positive utility of skydiving. If the temperature is 20 C, nobody will notice an increase of 0.00000001 C. If the temperature is 70 C, the aggregate negative utility could start to outweigh the positive utility. This is not a new idea; see http://en.wikipedia.org/wiki/Tragedy_of_the_commons.

“We face the real-world analogue of this prob­lem ev­ery day, when we de­cide whether to tax ev­ery­one in the First World one penny in or­der to save one starv­ing Afri­can child by mount­ing a large mil­i­tary res­cue op­er­a­tion that swoops in, takes the one child, and leaves.”

According to http://www.wider.unu.edu/research/2006-2007/2006-2007-1/wider-wdhw-launch-5-12-2006/wider-wdhw-press-release-5-12-2006.pdf, 10% of the world’s adults, around 400 million people, own 85% of the world’s wealth. Taxing them each one penny would give a total of \$4 million, more than enough to mount this kind of a rescue operation. While incredibly wasteful, this would actually be preferable to some of the stuff we spend our money on; my local school district just voted to spend \$9 million (current US dollars) to build a swimming pool. I don’t even want to know how much we spend on \$200 pants; probably more than \$9 million in my town alone.
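The penny-tax arithmetic in that comment checks out; a quick sanity check in integer cents:

```python
adults = 400_000_000           # wealthiest 10% of adults, per the cited report
total_cents = adults * 1       # one penny taken from each
total_dollars = total_cents // 100

# 400 million pennies is $4 million, matching the comment's figure.
assert total_dollars == 4_000_000
```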

• Now, this is considerably better reasoning—however, there was no clue to this being a decision that would be selected over and over by countless people. Had it been worded “you among many have to make the following choice...”, I could agree with you. But the current wording implied that it was a once-a-universe sort of choice.

The choice doesn’t have to be re­peated to pre­sent you with the dilemma. Since all el­e­ments of the prob­lem are finite—not countless, finite—if you re­fuse all ac­tions in the chain, you should also re­fuse the start of the chain even when no fu­ture rep­e­ti­tions are pre­sented as op­tions. This kind of rea­son­ing doesn’t work for in­finite cases, but it works for finite ones.

One po­ten­tial counter to the “global heat­ing” ex­am­ple is that at some point, peo­ple be­gin to die who would not oth­er­wise have done so, and that should be the point of re­fusal. But for the case of dust specks—and we can imag­ine get­ting more than one dust speck in your eye per day—it doesn’t seem like there should be any sharp bor­der­line.

We face the real-world analogue of this prob­lem ev­ery day, when we de­cide whether to tax ev­ery­one in the First World one penny in or­der to save one starv­ing Afri­can child by mount­ing a large mil­i­tary res­cue op­er­a­tion that swoops in, takes the one child, and leaves.

There is no “spe­cial penny” where this logic goes from good to bad. It’s wrong when re­peated be­cause it’s also wrong in the in­di­vi­d­ual case. You just have to come to terms with scope sen­si­tivity.

• “Swoops in, takes one child, and leaves”… wow. I’d like to say I can’t imag­ine be­ing so in­sen­si­tive as to think this would be a good thing to do (even if not worth the money), but I ac­tu­ally can.

And why would you use that horrible example, when the argument would work just fine if you substituted “A permanent presence devoted to giving one person three square meals a day.”

• “… when­ever a tester finds a user in­put that crashes your pro­gram, it is always bad—it re­veals a flaw in the code—even if it’s not a user in­put that would plau­si­bly oc­cur; you’re still sup­posed to fix it. “Would you kill Santa Claus or the Easter Bunny?” is an im­por­tant ques­tion if and only if you have trou­ble de­cid­ing. I’d definitely kill the Easter Bunny, by the way, so I don’t think it’s an im­por­tant ques­tion.”

I write code for a liv­ing; I do not claim that it crashes the pro­gram. Rather the an­swer is ir­rele­vant as I don’t think that the ques­tion is im­por­tant or in­sight­ful re­gard­ing our moral judge­ments since it lacks phys­i­cal plau­si­bil­ity. BTW, since one can think of God as “Santa Claus for grown-ups”, the Easter Bunny lives.

• Since I chose the specks—no, I prob­a­bly wouldn’t pay a penny; avoid­ing the speck is not even worth the effort to de­cide to pay the penny or not. I would barely no­tice it; it’s too in­signifi­cant to be worth pay­ing even a tiny sum to avoid.

I sup­pose I too am “round­ing down to zero”; a more sig­nifi­cant harm would re­sult in a differ­ent an­swer.

• You’re avoiding the question. What if a penny was automatically paid for you each time in the future to avoid dust specks floating in your eye? The question is whether the dust speck is worth at least a penny of disutility. For me, I would say yes.

• You never said we couldn’t choose who speci­fi­cally gets tor­tured, so I’m as­sum­ing we can make that se­lec­tion. Given that, the once ag­o­niz­ingly difficult choice is made triv­ially sim­ple. I would choose 50 years of tor­ture for the per­son who made me make this de­ci­sion.

• > Would you con­demn one per­son to be hor­ribly tor­tured for fifty years with­out hope or rest, to save ev­ery qualia-ex­pe­rienc­ing be­ing who will ever ex­ist one blink?

That’s as­sum­ing you’re in­ter­pret­ing the ques­tion cor­rectly. That you aren’t deal­ing with an evil ge­nie.

• Eliezer, are you sug­gest­ing that de­clin­ing to make up one’s mind in the face of a ques­tion that (1) we have ex­cel­lent rea­son to mis­trust our judge­ment about and (2) we have no ac­tual need to have an an­swer to is some­how dis­rep­utable?

As for your link to the “mo­ti­vated stop­ping” ar­ti­cle, I don’t quite see why de­clin­ing to de­cide on this is any more “stop­ping” than choos­ing a definite one of the op­tions. Or are you sug­gest­ing that it’s an in­stance of mo­ti­vated con­tinu­a­tion? Per­haps it is, but (as you said in that ar­ti­cle) the prob­lem with ex­ces­sive “con­tinu­a­tion” is that it can waste re­sources and miss op­por­tu­ni­ties. I don’t see ei­ther of those be­ing an is­sue here, un­less you’re ac­tu­ally threat­en­ing to do one of those two things—in which case I de­clare you a Pas­cal’s mug­ger and take no no­tice.

• Eliezer wrote “Wow. Peo­ple sure are com­ing up with in­ter­est­ing ways of avoid­ing the ques­tion.”

I posted ear­lier on what I con­sider the more in­ter­est­ing ques­tion of how to frame the prob­lem in or­der to best ap­proach a solu­tion.

If I were to sim­ply provide my “an­swer” to the prob­lem, with the as­sump­tion that the dust in the eyes is like­wise limited to 50 years, then I would ar­gue that the dust is to be preferred to the tor­ture, not on a util­i­tar­ian ba­sis of rel­a­tive weights of the con­se­quences as speci­fied, but on the big­ger-pic­ture view that my preferred fu­ture is one in which tor­ture is ab­hor­rent in prin­ci­ple (not­ing that this en­tails sig­nifi­cant in­di­rect con­se­quences not speci­fied in the prob­lem state­ment.)

• Ben­quo, your first an­swer seems equiv­o­cal, and so did Se­bas­tian’s on a first read­ing, but now I see that it was not.

• Hmm, think­ing some more about this, I can see an­other an­gle (not the suffer­ing an­gle, but the “be­ing pru­dent about un­in­tended con­se­quences” an­gle):

If you had the choice be­tween very very slightly chang­ing the life of a huge num­ber of peo­ple or chang­ing a lot the life of only one per­son, the pru­dent choice might be to change the life of only one per­son (as hor­rible as that change might be).

Still, with the dust speck we can’t re­ally know if the net fi­nal out­come will be nega­tive or pos­i­tive. It might dis­tract peo­ple who are about to have ge­nius ideas, but it might also change chains of events that would lead to bad things. Aver­aged over so many peo­ple, it’s prob­a­bly go­ing to stay very close to neu­tral, pos­i­tive or nega­tive. The tor­ture of one per­son might also look very close to neu­tral if av­er­aged with the other 3^^^3 peo­ple, but we know that it’s go­ing to be nega­tive. Hmm..

• As I read this I knew my an­swer would be the dust specks. Since then I have been men­tally eval­u­at­ing var­i­ous meth­ods for de­cid­ing on the ethics of the situ­a­tion and have cho­sen the one that makes me feel bet­ter about the an­swer I in­stinc­tively chose.

I can tell you this though. I reckon I per­son­ally would choose max five min­utes of tor­ture to stop the dust specks event hap­pen­ing. So if the per­son threat­ened with 50yrs of tor­ture was me, I’d choose the dust specks.

• Dou­glas and Psy-Kosh: Den­nett ex­plic­itly says that in deny­ing that there are such things as qualia he is not deny­ing the ex­is­tence of con­scious ex­pe­rience. Of course, Dou­glas may think that Den­nett is ly­ing or doesn’t un­der­stand his own po­si­tion as well as Dou­glas does.

James Bach and J Thomas: I think Eliezer is ask­ing us to as­sume that there are no knock-on effects in ei­ther the tor­ture or the dust-speck sce­nario, and the usual as­sump­tion in these “which econ­omy would you rather have?” ques­tions is that the num­bers pro­vided rep­re­sent the situ­a­tion af­ter all par­ties con­cerned have ex­erted what­ever effort they can. (So, e.g., if al­most ev­ery­one is de­scribed as des­ti­tute, then it must be a so­ciety in which es­cap­ing des­ti­tu­tion by hard work is very difficult.) Of course I agree with both of you that there’s dan­ger in this sort of sim­plifi­ca­tion.

• The problem with spammers isn’t that they cause a singular dust speck event: it’s that they cause multiple dust speck events repeatedly to individuals in the population in question. It’s also a ‘tragedy of the commons’ question, since there is more than one spammer.

To re­spond to your ques­tion: What is ap­pro­pri­ate pun­ish­ment for spam­mers? I am sad to con­clude that un­til Aubrey DeGray man­ages to con­quer hu­man mor­tal­ity, or the sin­gu­lar­ity oc­curs, there is no suit­able pun­ish­ment for spam­mers.

After ei­ther of those, how­ever, I would pro­pose un­block­ing ev­ery­one’s toi­lets and/​or triple shifts as a Fry’s Elec­tron­ics floor lackey un­til the uni­ver­sal heat death, un­less you have even >less< in­ter­est­ing sug­ges­tions.

• The dust specks seem like the “ob­vi­ous” an­swer to me, but how large the tiny harm must be to cross the line where the un­think­ably huge num­ber of them out­weighs a sin­gle tremen­dous one isn’t some­thing I could eas­ily say, when clearly I don’t think sim­ply calcu­lat­ing the to­tal amount of harm caused is the right mea­sure.

• To me it is immediately obvious that torture is preferable. Judging by the comments, I’m in the minority.

• I think the reason people are hesitant to choose the dust speck option is that they view the number 3^^^3 as being insurmountable. It’s a combo chain that unleashes a seemingly infinite amount of points in the “Bad events I have personally caused” category on their scoreboard. And I get that. If the torture option is a thousand bad points, and the dust speck is 1/1000th of a point for each person, then the math clearly states that torture is the better option.

But the thing is that you un­leash that combo chain ev­ery day.

Every time you burn a piece of coal, or eat a seed, or an apple, you are potentially causing mild inconvenience to a hypothetically infinitely higher number of people than 3^^^3. What if the piece of coal could warm someone else up? What if that seed’s offspring would go on to spread and feed a massive amount of people? The same applies to all meat and all fruit, and most vegetables. By gaining a slight benefit now, you are potentially robbing over 3^^^3 people of their own slight benefit. Now, is it likely that said animal or seed will go on to benefit so many? Maybe not, but the chance exists. Are you willing to take that chance with a number like 3^^^3?

Well, you should be. Morality should not be based solely off of mathematical formulas and cost/benefit analysis. It can greatly help determine a moral course of action, but if that is your motivation for wanting to do the right thing then you have lost sight of what morality is about. The basis of morality is this: Do unto others as you would have done unto yourself. I, for one, would rather have a dust speck in my eye than be tortured for 50 years. And I wouldn’t get 3^^^3 specks of dust in my eye, because none of them did either; they only got one. Even if I was assured of getting that many specks of dust in my eye (in deep space, of course, because the resulting explosion of dust specks would surely engulf the Earth and possibly most of the Milky Way), I would still do it. Because I choose to save the person in front of me, and fix any negative results afterwards. I choose to stop the wrongdoing directly in front of me, because if everyone did so then everyone would be saved. Do what you can right now. Worry about dust specks later. Help that guy who is getting tortured now.

The Ones Who Walk Away From Ome­las Should Turn The Fuck Around And Sprint Back To The City, Be­cause Holy Shit That Poor Kid Is Be­ing Tor­tured So Those Fucks Can Have Air Con­di­tion­ing. -An im­proved ti­tle, in my opinion.

A mas­sive amount of benefi­cial out­comes be­ing caused by one per­son’s mis­for­tune is not always jus­tifi­able. If they are in­no­cent, it’s not jus­tifi­able. If they were go­ing to di­rectly and con­sciously and mal­i­ciously cause the mas­sive nega­tive out­come that will re­sult if you do not stop them, then it is ar­guably jus­tifi­able. How­ever, there is only one situ­a­tion, one con­text, in which one per­son’s suffer­ing for the benefit of countless oth­ers is wholly and to­tally jus­tified.

You see, someone figured out the answer to this dilemma about 2000 years ago. You’ve probably heard of Him.

One per­son’s suffer­ing benefit­ting countless oth­ers is a beau­tiful thing when they choose to suffer of their own free will.

You can choose the 50 years of tor­ture if you wish.....

But only if that per­son be­ing tor­tured is you will it be any­thing other than to­tal evil.

• New situ­a­tion: 3^^^3 peo­ple be­ing tor­tured for 50 years, or one per­son get­ting tor­tured for 50 years and get­ting a sin­gle speck of dust in their eye.

By do unto oth­ers, I should, of course, tor­ture the in­nu­mer­ably vast num­ber of peo­ple, since I’d rather be tor­tured for 50 years than be tor­tured for 50 years and get dust in my eye.

• a seem­ingly in­finite amount of points in the “Bad events I have per­son­ally caused” cat­e­gory on their scoreboard

Per­haps that is how some peo­ple who pre­fer TORTURE to DUST SPECKS are think­ing, but I see no rea­son to think it’s all of them, and I am pretty sure some of them have bet­ter rea­sons than the rather straw­manny one you are propos­ing. For in­stance, con­sider the fol­low­ing:

• Which would you pre­fer: one per­son tor­tured for 50 years or a trillion peo­ple tor­tured for 50 years minus one microsec­ond?

• I guess you pre­fer the first. So do I.

• Which would you pre­fer: a trillion peo­ple tor­tured for 50 years minus one microsec­ond, or a trillion trillion peo­ple tor­tured for 50 years minus two microsec­onds?

• I guess you pre­fer the first. So do I.

• … Now re­peat this un­til we get to …

• Which would you prefer: N/10^12 people (note: N is very very large, but also vastly smaller than 3^^^3) tortured for one day plus one microsecond, or N people tortured for one day?

• I am pretty sure you pre­fer the first op­tion in ev­ery case up to here. Now per­haps microsec­onds are too large, so let’s ad­just a lit­tle:

• Which would you pre­fer: N peo­ple tor­tured for one day, or 10^12*N peo­ple tor­tured for one day minus one nanosec­ond?

• … and con­tinue iter­at­ing—I bet you pre­fer the first op­tion in ev­ery case—un­til we get to …

• Which would you prefer, M/10^12 people tortured for ten seconds plus one nanosecond, or M people tortured for ten seconds? (M is much much larger than N—but still vastly smaller than 3^^^3.)

• I am pretty sure you still pre­fer the first case ev­ery time. Now, once the times get much shorter than this it may be difficult to say whether some­thing is re­ally tor­ture ex­actly, so let’s start ad­just­ing the sever­ity as well. Let’s first of all re­place the rather ill-defined “tor­ture” with some­thing less am­bigu­ous.

• Which would you pre­fer, M peo­ple tor­tured for ten sec­onds, or 10^12*M peo­ple tor­tured for 9 sec­onds and then kicked re­ally hard on the kneecap but definitely not hard enough to cause per­ma­nent dam­age?

• The in­ten­tion is that the tor­ture is bad enough that the lat­ter op­tion is less bad for each in­di­vi­d­ual. I hope you still pre­fer the first case here.

• Which would you pre­fer, 10^12*M peo­ple tor­tured for 9 sec­onds and then kicked re­ally hard on the kneecap, or 10^24*M peo­ple tor­tured for 8 sec­onds and then kicked re­ally hard on the kneecap, twice?

• … etc. …

• Which would you pre­fer, 10^108*M peo­ple tor­tured for one sec­ond and then kicked, or 10^120*M peo­ple just kicked 10 times?

• OK. Now we can start crank­ing things down a bit.

• Which would you pre­fer, 10^120*M peo­ple kicked re­ally hard on the kneecap 10 times or 10^132*M peo­ple kicked re­ally hard on the kneecap 9 times?

• That’s a much bigger drop in severity than I’ve used above; obviously we can make it more gradual if you like without making any difference to how this goes. I hope you still much prefer the first case to the second. Continue until we get to one kick each, and then let’s try this:

• Which would you pre­fer, 10^228*M peo­ple kicked re­ally hard on the kneecap or 10^240*M peo­ple slapped hard in the face ten times?

• You might want to ad­just the mis­treat­ment in the sec­ond case to make sure it’s less bad than the kick­ing.

… Any­way, by this point I am prob­a­bly be­labour­ing things severely enough that it’s ob­vi­ous where it ends. After not all that many more steps we ar­rive at a choice whose sec­ond op­tion is a very large num­ber of peo­ple (but still much much much smaller than 3^^^3 peo­ple!) get­ting a dust speck in their eye. And ev­ery sin­gle step in­volves a re­ally small de­crease in the sever­ity of what they suffer, and a trillion­fold in­crease in the num­ber of peo­ple suffer­ing. But the chain be­gins with TORTURE and ends with DUST SPECKS, or more pre­cisely with some­thing strictly less bad than DUST SPECKS be­cause the num­ber of peo­ple in­volved is so much smaller.
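
The chain can be sketched numerically. This is only a toy model under my own assumptions (not the commenter’s exact figures): total disutility is taken to be victims times per-person harm, each step multiplies the victims by a trillion and shaves 0.1% off the per-person harm, and the torture is normalized to 1. Under those assumptions, every step strictly increases the total even though each individual is better off.

```python
# Toy model of the chain argument under simple linear aggregation.
# Assumed: each step gives 10^12 times more victims, each suffering
# a "teeny-tiny" (0.1%) smaller harm; torture is normalized to 1.

def total_disutility(victims, per_person):
    return victims * per_person

victims, per_person = 1, 1.0   # one person, the full 50 years of torture
totals = []
for step in range(10):
    totals.append(total_disutility(victims, per_person))
    victims *= 10**12          # a trillion times more victims
    per_person *= 0.999        # a tiny decrease in severity

# Each step multiplies the total by 10^12 * 0.999 > 1,
# so the chain is monotonically worsening.
assert all(a < b for a, b in zip(totals, totals[1:]))
```

The point of the sketch is only that the per-step ratio stays above 1, so denying the conclusion means denying one of the individual steps.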

To con­sider TORTURE worse than DUST SPECKS is to con­sider that at least one of those steps is not mak­ing things worse: that at some point in the chain, hav­ing a trillion times more vic­tims fails to out­weigh a teeny-tiny de­crease in the amount of suffer­ing each one un­der­goes.

I am a lit­tle skep­ti­cal, on gen­eral prin­ci­ples, of any ar­gu­ment con­cern­ing situ­a­tions so far be­yond any that ei­ther I or my an­ces­tors have any ex­pe­rience of. So I will not go so far as to say that this makes TORTURE ob­vi­ously less bad than DUST SPECKS. But I will say that the ar­gu­ment I have sketched above ap­pears to me to de­serve tak­ing much more se­ri­ously than you are tak­ing the TORTURE side of the ar­gu­ment, with your talk of score­boards.

Do unto oth­ers as you would have done to yourself

This is a pretty good prin­ci­ple; there’s a rea­son it and its near-equiv­a­lents have cropped up in re­li­gious and eth­i­cal sys­tems over and over again since long be­fore the par­tic­u­lar in­stance I think you have in mind. But it doesn’t deal well with cases where the “oth­ers” vary hugely in num­ber. (It also has prob­lems with cases where you and the oth­ers have very differ­ent prefer­ences.)

I, for one, would rather have a dust speck in my eye than be tor­tured for 50 years. And I wouldn’t get 3^^^3 specks of dust in my eye, be­cause none of them did ei­ther, they only got one.

This reasoning would also suggest that if you have to choose between having $10 stolen from each of a million people and having $20 stolen from one person, you should choose the latter. That seems obviously wrong to me; if you agree, you should reconsider.

en­gulf the Earth and pos­si­bly most of the Milky Way

You are vastly un­der­es­ti­mat­ing how big 3^^^3 is.

I choose to save the per­son in front of me, and fix any nega­tive re­sults afterwards

That sounds very nice, but if you are un­able to fix the nega­tive re­sults this may some­times be a re­ally ter­rible policy. Also, in the usual ver­sion of the hy­po­thet­i­cal the dust specks and the tor­ture are not differ­ent in “re­mote­ness”, so I don’t see how this heuris­tic ac­tu­ally helps re­solve it.

some­body figured out the an­swer to this dilemma about 2000 years ago

It is not, in fact, the same dilemma. (E.g., be­cause in that sce­nario it isn’t “one per­son get­ting some­thing very bad, ver­sus vast num­bers get­ting some­thing that seems only triv­ially bad”, it’s “one per­son get­ting some­thing very bad, ver­sus quite large num­bers get­ting some­thing very bad”.)

If you would like a re­li­gious ar­gu­ment then I would sug­gest the Open Thread as a bet­ter venue for it.

Anyway, I think your discussion of harming A in order to help B misses the point. Inflicting harm on other people is indeed horrible, but note that (1) in the TvDS scenario harm is being inflicted on other people either way, and if you just blithely assert that it’s only in the TORTURE case that it’s bad enough to be a problem then you’re simply begging the original question; and (2) in the TvDS scenario no one is talking about inflicting harm on some people to prevent harm to others, or at least they needn’t and probably shouldn’t be. The question is simply “which of these is worse?”, and you can and should answer that without treating one as the default and asking “should I bring about the other one to avoid this one?”.

• Every time you burn a piece of coal, or eat a seed, or an apple, you are potentially causing mild inconvenience to a hypothetically infinitely higher number of people than 3^^^3.

But you’re also po­ten­tially caus­ing a mild benefit to a hy­po­thet­i­cally in­finitely higher num­ber of peo­ple than 3^^^3.

• I think the problem here is the way the utility function is chosen. Utilitarianism is essentially a formalization of reward signals in our heads. It is a heuristic way of quantifying what we expect a healthy human (one that can grow up and survive in a typical human environment and has an accurate model of reality) to want. All of this only converges roughly to a common utility because we have evolved to have the same needs, which are necessarily pro-life and pro-social (since otherwise our species wouldn’t be present today).

Utilitarianism crudely abstracts from the meanings in our heads that we recognize as common goals and assigns numbers to them. We have to be careful what we assign numbers to in order to get the results we want in all corner cases. I think hooking up the utility meter to neurons that detect minor inconveniences is not a smart way of achieving what we collectively want, because it might contradict our pro-life and pro-social needs. Only when the inconveniences accumulate individually, so that they condense into states of fear/anxiety or noticeably shorten human life, do they affect human goals and make sense to include in utility considerations (which, again, are only a crude approximation of what we have evolved to want).

• I think I’ve seen some other comments bring it up, but I’ll say it again. I think people who go for the torture are working off a model of linear discomfort addition, in which case the badness of the torture would have to be as bad as 3^^^3 dust particles in the eye to justify taking the dust. However, I’d argue that it’s not linear. Two specks of dust are worse than twice as bad as one speck. 3^^^3 people getting specks in their eyes is unimaginably less bad than one person getting 3^^^3 specks (a ridiculous image, considering that’s throwing universes into a dude’s eye). So the speck very well may be less than 1/(3^^^3) as bad as torture.

Even so, I doubt it. So a purely utilitarian calculation probably does suggest torturing the one guy.

• How­ever, I’d ar­gue that it’s not lin­ear.

It has to be more than just not lin­ear for that to solve it, it has to be so non­lin­ear that no finite num­ber of specks at all can add up to the tor­ture, since oth­er­wise we could just ask the same ques­tion us­ing the new num­ber in­stead of 3^^^3.

If it’s so non­lin­ear that no finite num­ber of specks can add up to tor­ture, then you find the max­i­mum amount that a finite num­ber of specks can add up to. Then there are two amounts (one slightly more than that and one slightly less) where one amount can­not be bal­anced by dust par­ti­cles and one amount can, which doesn’t re­ally make any sense.
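
The oddity being described can be made concrete with a toy aggregation function of my own choosing (not anything from the thread): suppose n specks add up to S(n) = C·(1 − r^n), which is bounded above by C no matter how large n gets. If torture is worth just a hair more than C, no finite number of specks ever reaches it, and yet harms just below C are outweighable while harms just above it never are.

```python
# Toy "bounded aggregation" model: n dust specks total S(n) = C*(1 - r**n),
# capped at C. All numbers here are assumptions for illustration only.

C, r = 100.0, 0.999          # hypothetical cap and per-speck decay rate
TORTURE = 101.0              # a harm set just above the cap

def specks_total(n):
    return C * (1 - r**n)

# No finite number of specks ever reaches TORTURE...
assert all(specks_total(n) < TORTURE for n in (10, 10**3, 10**6))

# ...yet a harm worth 99.9 can be outweighed by enough specks, while one
# worth 100.1 cannot -- even though the two differ by a sliver. That
# discontinuity at the bound is the "doesn't make any sense" part.
assert specks_total(10**6) > 99.9
```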

• “The Lord Pilot shouted, fist held high and triumphant: ‘To live, and occasionally be unhappy!’” (Three Worlds Collide). Dust specks are just dust specks—in a way it’s helpful to sometimes have these things.

But does the thing change if you distribute the dust specks not 1 per person, but 10 per second per person?

• In the Least Convenient Possible World of this hypothetical, each and every dust speck causes a small constant amount of harm, with no knock-on effects (no increasing one’s appreciation of the moments when one does not have dust in one’s eye, no preventing a ‘boring painless existence,’ nothing of the sort). Now it may be argued whether this would occur with actual dust, but that is not really the question at hand. Dust was just chosen as being a ‘seemingly trivial bad thing,’ and if you prefer some other trivial bad thing, just replace that in the problem and the question remains the same.

• If the dust specks could cause deaths I would refuse to choose either. If I somehow still did, I would pick the dust specks anyhow, because I know that I myself would rather die in an accident caused by a dust particle than be tortured for even ten years.

• Would you also re­fuse to drive be­cause there is some non-zero chance that you’ll hit some­one and cause them to suffer tor­tur­ous pain?

• No, I would not. I am not sure what you are getting at, but my point is that the torture was a certainty and the dust specks were extremely low probabilities, scattered over a big population. (Besides, I don’t think it is possible for me to cause torturous pain to someone only by driving.)

• Would it change anything if the subjects were extremely cute puppies?

• I have mixed feel­ings on this ques­tion. On the one hand, I agree that scope in­sen­si­tivity should be avoided, and util­ity should count lin­early over or­ganisms. But at the same time, I’m not re­ally sure the dust specks are even … bad. If I could press a but­ton to elimi­nate dust specks from the world, then (ig­nor­ing in­stru­men­tal con­sid­er­a­tions, which would ob­vi­ously dom­i­nate) I’m not sure whether I would bother.

Maybe I’m not imag­in­ing the dust specks as be­ing painful, whereas Eliezer had in mind more of a splin­ter that is slightly painful. Or we can imag­ine other an­noy­ing ex­pe­riences like spilling your coffee or sit­ting on a cold toi­let seat. Here again, I’m not sure if these ex­pe­riences are even bad. They build char­ac­ter, and maybe they have a place even in par­adise.

There are many ex­pe­riences that are ac­tu­ally bad, like se­vere de­pres­sion, se­vere anx­iety, break­ing your leg, pain dur­ing a hos­pi­tal op­er­a­tion, etc. Th­ese do not be­long in par­adise.

If you imag­ine your­self sign­ing up for 3^^^3 dust specks, that might fill you with de­spair, but in that case, your nega­tive ex­pe­rience is more than a dust speck—you’re also imag­in­ing the drudgery of sit­ting through 3^^^3 of them. Just the dust specks by them­selves may not be bad, if only one is ex­pe­rienced by any given in­di­vi­d­ual, and no dust speck trig­gers more in­tense nega­tive re­ac­tions.

• There’s noth­ing im­por­tant about the dust-specks here; they were cho­sen as a con­crete illus­tra­tion of the small­est unit of di­su­til­ity. If think­ing about dust specks in par­tic­u­lar doesn’t work for you (you’re not alone in this), I recom­mend pick­ing a differ­ent illus­tra­tion and sub­sti­tut­ing as you read.

• 3^^^3 peo­ple? …

I can see what point you were trying to make... I think.

But I happen to have a significant distrust of classic utilitarianism: if you sum up the happiness of a society with a finite chance of lasting forever, and subtract the sum of all the pain, you get infinity minus infinity, which is at best conditionally convergent. The simplest patch is to insert a very, very, VERY tiny discount factor, reducing the weight of future societal happiness in your computation… Any attempt to translate to so many people… places my intuition in charge of setting the summation up.

or else, y’know, “dust specks, be­cause that hap­pens to in­clude the abil­ity to get off this planet and pro­duce more hu­mans than atoms in the cur­rently visi­ble uni­verse”.

• For­give me for post­ing on such an old topic, but I’ve spent the bet­ter part of the last few days think­ing about this and had to get my thoughts to­gether some­where. But af­ter some con­sid­er­a­tion, I must say that I side with the “speck­ers” as it were.

Let us do away with the “specks of dust” and “torture” notions in an attempt to avoid arguing the relative value one might place on either event (i.e., “rounding to 0/infinity”), and instead focus on the real issue. Replace torture with “event A” as the single most horrific event that can happen to an individual. Replace dust motes with “event B” as the least inconvenience that can still be considered an inconvenience to an individual.

Similarly, let us do away with the notion of reasoning about a googol, or 3^^^3, or 3^^^^3, as our brains treat each of these numbers as just a featureless conglomeration, regardless of how well we want to pretend we understand the differences in magnitude. Instead, replace this with “n”, with “n” being an arbitrarily large number.

The question then becomes: Is it better to subject a single individual to event A, or n individuals to event B?

The util­i­tar­ian ar­gu­ment sup­poses that this ques­tion can be equiv­a­lently stated as such: Is the to­tal di­su­til­ity of sub­ject­ing n in­di­vi­d­u­als to event B greater than sub­ject­ing a sin­gle in­di­vi­d­ual to event A?

This seems rea­son­able enough, given a suffi­ciently “good” defi­ni­tion of util­ity. Let us as­sume that these state­ments are equiv­a­lent and pro­ceed from here.

Let “x” be the disutility value of event A, and “y” be the disutility value of event B. How can we compare x and y? Intuitively, it is obvious that enough additions of “y” would at the very least approach “x”. That is, if you were to subject a single individual to event B often enough and for long enough, this would approach being as “bad” as subjecting that same individual to event A. Let “k” be such a number of additions, however large it may be. Thus we have x ≈ ky, or y ≈ x/k.

But how ex­actly do we mea­sure x? Does it even have a fixed value? Is the defi­ni­tion of event A even con­sis­tent across all in­di­vi­d­u­als (or even the defi­ni­tion of event B for that mat­ter)? Per­haps, per­haps not. But in­ter­est­ingly enough, we’ve already found a rea­son­able “fixed” defi­ni­tion for event B. This is sim­ply the most triv­ial in­con­ve­nience that can be sub­jected to an in­di­vi­d­ual which, if re­peated enough times, would be ap­prox­i­mately equiv­a­lent to sub­ject­ing them to event A.

So let’s choose a scale where event A has disutility 1 for a given individual. Now event B has disutility 1/k for that same individual. The scale may change relative to an individual, but let’s make the assumption that this variance is massively dwarfed by the magnitude of “k”, which again seems reasonable. In other words, the difference between the worst that could happen and the most trivial bad thing that can happen is so great that any variance in an individual’s definition of the worst event is trivial in comparison. “Sacred” vs “mundane”, if you will. At least now we’re only working in one variable.

We also want to compare the utility of both situations over a population. That is, is it better for a single individual to have a disutility value of 1, or for n individuals to have disutility 1/k? And this is where a second problem arises. How exactly does one distribute a utility value across a population? It might be tempting to assume it just divides evenly into the population and is additive across individuals. For instance, one person stubbing their toe twice in a day is approximately equivalent to two people each stubbing their toe once. This may hold for the small-scale scenarios we are used to dealing with, but I’m not certain it holds at larger scales.

One ques­tion­able ex­am­ple is that of wealth dis­tri­bu­tion amongst a na­tion. This is a very com­plex and nu­anced sub­ject, but the un­der­ly­ing is­sues can be ex­pressed in rel­a­tively sim­ple terms. As­sume util­ity here is di­rectly pro­por­tional to wealth. If we want to max­i­mize the av­er­age wealth of the na­tion, we could have a plethora of dis­tri­bu­tions where ev­ery­one is in poverty ex­cept for a small per­centage, who have vast ex­panses of wealth. This is an en­tirely valid solu­tion—if we are try­ing only to max­i­mize av­er­age util­ity.

But cer­tainly, a good mea­sure of util­ity should also take into ac­count the sta­tus of each per­son with re­spect to the whole. Few would ar­gue that a sys­tem where over half the pop­u­la­tion ex­ists in poverty is bet­ter than a sys­tem with al­most no poverty. But again, per­haps it is de­sir­able to have some dis­par­ity in such a dis­tri­bu­tion, to en­tice peo­ple to work harder and to con­tribute more to so­ciety as a whole with the prospect of in­creas­ing their per­sonal wealth. Per­haps this lends to a more sus­tain­able sys­tem.

It is for this rea­son that I be­lieve a “good” func­tion should not only at­tempt to max­i­mize the av­er­age util­ity, but also min­i­mize the (nega­tive) de­vi­a­tion away from the group av­er­age for each in­di­vi­d­ual—of course tak­ing into ac­count other con­straints re­gard­ing sus­tain­abil­ity, sta­bil­ity, etc.

So let us now consider a “good” utility function that takes as parameters (1) the population size and (2) a list of the average utility scores for each individual in that population. Since it’s all the same in this example, we’ll just represent (2) as a single number. Let us call this function F. We can restate the question entirely in mathematical terms.

Is F(n, 1/k) > F(1, 1)?

Per­haps. Per­haps not. It de­pends mostly on what util­ity func­tion would be con­sid­ered “good” in this in­stance. What no one would dis­agree on, how­ever, is that:

F(n, k/k) = F(n, 1) >> F(1, 1) for n > 1.

Also, F(n, (k-1)/k) >> F(1, 1).

And you can con­tinue this pat­tern on­wards. Con­sider the gen­eral equa­tion:

F(n, m/k) >? F(1, 1), for 1 <= m <= k.

There is certainly a “breaking point” at which m is large enough for the collective burden to eclipse the individual’s.

In other words, there is cer­tainly a point where sub­ject­ing each in­di­vi­d­ual in an ar­bi­trar­ily large pop­u­la­tion to a mas­sively ex­ces­sive amount of triv­ial in­con­ve­niences is morally worse than sub­ject­ing a sin­gle in­di­vi­d­ual to a hor­rific event. But where is this “break­ing point?”
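
One way to see what a “breaking point” looks like is to plug in the simplest candidate for F: total disutility, population times per-person harm. This is my own toy instantiation, not the poster’s richer inequality-aware F, and the values of n and k are arbitrary illustrations:

```python
# Toy "breaking point" search for the simplest candidate F:
# F(population, per_person_disutility) = total disutility.
# n, k are purely illustrative; Fraction keeps the comparison exact.
from fractions import Fraction

def F(population, per_person):
    return population * per_person

n, k = 10**3, 10**6          # hypothetical population and speck-to-torture ratio

# Smallest m with F(n, m/k) > F(1, 1): for this F it is the first m
# exceeding k/n, i.e. the point where n people suffering m/k each
# outweighs one person suffering the full disutility of 1.
breaking_m = next(m for m in range(1, k + 1)
                  if F(n, Fraction(m, k)) > F(1, 1))
assert breaking_m == k // n + 1
```

A richer F with an inequality term would move this threshold, which is exactly the author’s point: the answer depends on which F counts as “good”.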

My conclusion is that it depends on the size of the population, the extent to which each individual can reasonably bear excessive trivial burdens, and what criteria are used for the function mapping utility to a population.

I personally find it very hard to swallow that it would ever be a better idea to allow one individual to suffer immeasurably than to subject an immeasurable population to trivial suffering. I would suspect the “breaking point” in the example given would be somewhere between having everyone in the population stub a toe, and having everyone in the population lose a toe.

A re­lat­able ex­am­ple would be dis­tribut­ing stress in a build­ing. It would gen­er­ally be a bet­ter de­sign to al­low for each in­di­vi­d­ual piece in the build­ing to be stressed triv­ially to com­pen­sate for one piece bear­ing a dis­pro­por­tionate load, than to al­low for any given piece to break away as nec­es­sary to pre­vent the rest from bear­ing a triv­ial load. Cer­tainly there is a point, how­ever, that it be­comes un­de­sir­able to un­nec­es­sar­ily com­pro­mise the over­all struc­ture (or per­haps just to in­tro­duce un­ac­cept­able risks) for the sake of a sin­gle piece. Pie­ces are ul­ti­mately re­place­able. The whole struc­ture, how­ever, is not.

Is this an in­stance of me be­ing ir­ra­tional due to some form of scale in­sen­si­tivity? Pos­si­bly. But to err is hu­man, and I would rather err on the side of com­pas­sion than on that of cold calcu­la­tion. I would also say that some cau­tion should be taken when work­ing with large scales and with con­tinu­ums. It may be just as ir­ra­tional to dis­re­gard our in­tu­itions in the face of the un­known as to cling blindly to them.

• So after a lot of thought, and about 5 months spent reading articles on this site, I think I can see the big picture a little more clearly now. Imagine having a really large collection of grains of sand that are all suspended in the air in the shape of a flat disk. Imagine, too, that it takes energy to move any single grain in the collection upwards or downwards, but once a grain is moved, it stays put unless moved again.

Just conceptually, let grains of sand represent people, and let grain movement upwards/downwards represent utility/disutility.

What Eliezer is ar­gu­ing is that, as­sum­ing it takes the same amount of en­ergy to move each in­di­vi­d­ual grain of sand, then clearly it takes far less en­ergy to move a sin­gle grain of sand very far down­ward than to move ev­ery grain of sand just slightly down­ward.

What I ini­tially ob­jected to, and what I was try­ing to in­tuit through in my first post, is that per­haps it is the case that the en­ergy re­quired to move a sin­gle grain of sand is not con­stant. Per­haps it in­creases with dis­tance from the disk. I still hold to this ob­jec­tion.

Even if so, it is cer­tainly a valid con­clu­sion to draw that mov­ing a sin­gle grain far enough down­wards re­quires less en­ergy than mov­ing ev­ery grain slightly down­wards. In­creas­ing the num­ber of grains of sand cer­tainly af­fects this. And no mat­ter what the growth fac­tor may be on the non­lin­ear amount of en­ergy re­quired to move a sin­gle grain very far from its start­ing point, it is still finite. And you can add enough grains of sand so that the mul­ti­plica­tive fac­tor of mov­ing ev­ery­thing slightly down­wards dwarfs the non­lin­ear growth fac­tor.

Thus, given enough peo­ple (and I do stress, enough peo­ple), it may be morally worse to sub­ject them all to hav­ing a sin­gle dust speck en­ter their eye for a brief mo­ment than to sub­ject a sin­gle in­di­vi­d­ual to tor­ture for 50 years.

It’s just that our intuition says that for any scale our minds are even close to capable of reasoning about, exponential/super-exponential functions (even with a tiny starting value) greatly dwarf multiplicative scaling functions.

But this in­tu­ition can­not be ac­cu­rate for scales larger than our minds are ca­pa­ble of rea­son­ing about.
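
The sand-grain point can be sketched with toy numbers of my own (the specific exponent and counts are assumptions, not the commenter’s): even if the energy to move one grain grows superlinearly with distance, moving enough grains a tiny distance still costs more.

```python
# Toy version of the sand-grain argument: per-grain energy grows
# superlinearly with distance (cubically, as an assumed stand-in),
# yet a large enough number of shallow moves still dominates.

def energy_one_grain(distance):
    return distance ** 3      # assumed superlinear cost of moving one grain

deep_move = energy_one_grain(10**6)        # one grain moved very far: 10^18
grains = 10**19                            # enough grains...
shallow_move = grains * energy_one_grain(1)  # ...each nudged slightly: 10^19

# The multiplicative factor of "everyone nudged slightly" eventually
# dwarfs any finite superlinear growth in the single deep move.
assert shallow_move > deep_move
```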

I un­der­stand now: “Shut up and mul­ti­ply.”

• To flip the ques­tion on its head:

Would it be morally ac­cept­able for an im­mea­surably large pop­u­la­tion of in­di­vi­d­u­als to al­low a sin­gle in­di­vi­d­ual to be mer­cilessly tor­tured if it would spare the en­tire pop­u­la­tion some triv­ial in­con­ve­nience?

• I think that example triggers our “no, it would be immoral” intuition, because an immoral population would make the choice against the trivial inconvenience with even greater ease. So, their saying “yes, do please allow some individual to be mercilessly tortured” functions as Bayesian evidence in support of their immorality.

But if you had a large population of people decide between a trivial inconvenience for a different large population of people vs a single individual selected from their own midst to be mercilessly tortured, I’m guessing that the moral intuition would be the exact opposite, and it would feel immoral for this population to condemn a different large population to such an inconvenience just to benefit one of their own.

• So you’re say­ing it is po­ten­tially im­moral if the group them­selves de­cide to make the de­ci­sion, but po­ten­tially moral if an out­sider of the group makes the ex­act same de­ci­sion?

• No, I’m not saying that. Don’t start with the ill-defined concepts of “moral” and “immoral”—start from the undisputed reality of the matter that people pass moral judgements on actions they hear about.

So I’m say­ing that when Alice hears of
X: group A choos­ing to sac­ri­fice one of their own rather than in­con­ve­nience group B
Alice is likely to pass a differ­ent moral judge­ment of that choice than if Alice hears of
Y: group A choos­ing to sac­ri­fice a mem­ber of group B rather than in­con­ve­nience them­selves.

Even though util­i­tar­i­anism would ar­gue that ac­tions X and Y are equally moral taken by them­selves, ac­tions X and Y provide differ­ent ev­i­dence about whether group A is re­ally act­ing on moral prin­ci­ples. So if the evolu­tion­ary pur­pose for our moral in­tu­itions is to e.g. iden­tify peo­ple as villains or not, ac­tion Y trig­gers our moral in­tu­itions nega­tively and ac­tion X trig­gers our moral in­tu­itions pos­i­tively. Be­cause at a deeper level the real pur­pose of judg­ing the deed is to judge the doer.

• I would suggest that torture has greater and greater disutility the larger the size of the society. So given a specific society of a specific size, the dust specks can never add up to more suffering than the torture; the greater the number of dust specks possible, the greater the disutility of the torture, and the torture will always add up to worse.

If you’re comparing societies of different sizes, it may be that the society with the dust specks has as much disutility as the society with the torture, but this is no longer a choice between dust specks and torture; it’s a choice between dust specks+A and torture+B, and it’s not so counterintuitive that I might prefer torture+B.

As for why I have such an odd utility function as “torture is worse in a larger society”: I’m trying to derive my utility function from my preferences, and this is what I come up with—I’m not choosing a utility function as a starting point.

• I’m try­ing to de­rive my util­ity func­tion from my prefer­ences and this is what I come up with—I’m not choos­ing a util­ity func­tion as a start­ing point.

Any util­ity func­tion runs into a re­pug­nant con­clu­sion of one type or an­other. I won­der if there is a the­o­rem to this effect, fol­low­ing from tran­si­tivity + con­ti­nu­ity. Yours is no ex­cep­tion.

For example, in your case of the disutility of torture growing larger with the size of the society, doesn’t the disutility of dust specks grow both with the number of people subjected to it and the society’s size? If not, how about the intermediate disutilities, that of a stubbed toe, a one-minute-long agony, and up and up slowly until you get to the full-blown 50 years of torture? Where is this magic boundary between the society-size-independent disutility of specks and the scaling-up disutility of torture?

• As I noted, I’m try­ing to com­pute my util­ity func­tion from my prefer­ences, not the other way around. So in re­sponse to that I’d re­fine the util­ity func­tion a bit: My new util­ity func­tion has two terms, the main term and an in­equal­ity term. While my origi­nal state­ment that tor­ture has a term based on the size of the so­ciety is still true, it is true be­cause in­creas­ing the size of the so­ciety and still tor­tur­ing 1 per­son means more in­equal­ity.

doesn’t the di­su­til­ity of dust specks grow both with the num­ber of peo­ple sub­jected to it and the so­ciety’s size?

The ex­tra term ap­plies to the dust specks as well, but I don’t think this is a prob­lem.

In the origi­nal prob­lem, ev­ery­one gets a dust speck, so there’s no in­equal­ity term. The tor­ture does have an in­equal­ity term and ends up always worse than the dust specks.

If you want to move to­wards in­ter­me­di­ate val­ues by in­creas­ing the main term and keep­ing the in­equal­ity term con­stant, thus in­creas­ing the dust specks to stubbed toes and the like, you’ll even­tu­ally come to some point where it ex­ceeds the tor­ture. But at that point they won’t be dust specks—in­stead you’ll de­cide that, for in­stance, many peo­ple suffer­ing 1 day of tor­ture will be worse than one per­son suffer­ing 50 years of tor­ture. I can live with that re­sult.

If you want to move to­wards in­ter­me­di­ate val­ues by in­creas­ing the in­equal­ity term and keep­ing the main term con­stant, you would “clump up” the dust specks, so one per­son re­ceives many dust specks worth of di­su­til­ity. If you keep do­ing this, you might even­tu­ally ex­ceed the tor­ture as well—but again, at the point where you ex­ceed the tor­ture, you won’t have dust specks any more, you’ll have larger clumps and you’ll say that many clumps (equiv­a­lent to 1 day of tor­ture each, for in­stance) can ex­ceed one per­son get­ting 50 years. Again, I can live with that re­sult.

If you want to move towards intermediate values by increasing the inequality term and not bothering to keep the population constant, adding more people (in a way that is otherwise neutral if you ignore the inequality term) would increase the disutility. I haven’t worked out if this requires being able to increase the disutility beyond that of torture, but as I noted above, that would be a case of dust specks+A compared to torture+B, and having either of those quantities be greater wouldn’t surprise me.

This is a type of vari­able value prin­ci­ple and avoids the Repug­nant Con­clu­sion it­self, but may al­low for a va­ri­ety of Sadis­tic Con­clu­sion, since adding some tor­tured peo­ple can be bet­ter than adding a larger num­ber of well-off peo­ple. How­ever, I would ar­gue that de­spite the name “Sadis­tic”, this should be okay: I am not claiming that adding tor­tured peo­ple is good, just that it is bad but less bad than the other choice. And the other choice is bad be­cause the de­crease in to­tal util­ity from adding more peo­ple and in­creas­ing in­equal­ity over­whelms the in­crease in to­tal util­ity from those new peo­ple liv­ing good lives.

• I used to think that the dust specks were the obvious answer. Then I realized that I was adding follow-on utility to torture (inability to do much else due to the pain) but not the dust specks (car crashes etc. due to the distraction). It was also about then that I changed from two-boxing to one-boxing, and started thinking that wireheading wasn’t so bad after all. Are opinions on these three usually correlated like this?

• Then I re­al­ized that I was adding fol­low-on util­ity to tor­ture (in­abil­ity to do much else due to the pain) but not the dust specks (car crashes etc due to the dis­trac­tion).

Per­haps a bet­ter anal­ogy would be dust specks that are only slightly dis­tract­ing, de­tract­ing from what­ever you were do­ing but not enough to cause you to make tan­gible mis­takes, ver­sus tor­tur­ing some­body who’s fly­ing a plane at the time.

In other words, this “fol­low-on util­ity” should be sep­a­rated from op­por­tu­nity costs, shouldn’t it?

• This question reminds me of the dilemma posed to medical students. It went something like this:

If the opportunity presented itself to secretly, with no chance of being caught, ‘accidentally’ kill a healthy patient who is seen as wasting their life (smoking, drinking, not exercising, lack of goals, etc.) in order to harvest their organs and save 5 other patients, should you go ahead with it?

From a util­i­tar­ian per­spec­tive, it makes perfect sense to com­mit the mur­der. The per­son who in­tro­duced me to the dilemma also pre­sented the ra­tio­nale for say­ing ‘no’… Thank­fully it wasn’t “It’s just wrong” or even “mur­der is wrong”… The an­swer sug­gested was “You wouldn’t want to live in a world where doc­tors might reg­u­larly op­er­ate in such a man­ner nor would you want to be a pa­tient in such a sys­tem… It would be ter­rify­ing”.

• I suspect the key elements in the hospital and dust speck scenarios are a) someone having power over an aspect of other people’s fates and b) the level of trust of those people. The net-sum calculation of overall ‘good’ might well suggest torture or organ harvesting as the solution, but how would you feel about nominating someone else to be the one who makes that decision? Would you want that person to favor the momentary 3^^^3 dust speck incident or the 50-year torture of an individual?

• I have been reading Less Wrong for about 6 months but this is my first post. I’m not an expert but an interested amateur. I read this post about 3 weeks ago and thought it was a joke. After working through replies and following links, I get that it is a serious question with serious consequences in the world today. I don’t think my comments duplicate others already in the thread, so here goes…

Let’s call this a “one blink” discomfort (it comes and goes in a blink) and let’s say that on average each person gets one every 10 minutes during their waking hours. In reality it is probably more, but you forget about such things almost as quickly as they happen. If it is “right” to send one person to 50 years of torture to save 3^^^3 from a one blink discomfort, it is right to send 6 people per hour to the same fate for each of the sixteen waking hours per day, for a total of 96 people per day and 35,040 people each year.

And if it is right to send 35,040 to 50 years of torture to save 3^^^3 people for a single year from all one blink discomforts, then it is right to send another 35,040 to the same fate in order to save 3^^^3 people from all two blink discomforts, assuming each person on average gets a two blink discomfort (it comes and goes in two blinks) thrice per hour. We have now saved the multiverse from all one and two blink discomforts for one year at the cost of sending 70,080 people to 50 years of torture.

By the time the first person comes out of torture 50 years later, over 3.5 million people will have followed them into the torture chambers to save everyone else from discomforts lasting two blinks or less. If you follow the logic through from one blink up to each person getting one mild cold or some such per year that inflicts 2 days of discomfort, then you are sending untold trillions to 50 years of torture every year. It seems to me a significant minority of the population would be in the torture chambers at any one time to save the majority from discomfort.
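For what it’s worth, the tallies above check out. This is just the comment’s own arithmetic (one sentence per 10-minute speck, 16 waking hours, and a second equal tranche for two-blink discomforts), reproduced as a sanity check rather than an endorsement of the model:

```python
# One 50-year sentence per one-blink discomfort prevented:
per_hour = 60 // 10            # one speck every 10 minutes -> 6 per hour
per_day = per_hour * 16        # 16 waking hours -> 96 people per day
per_year = per_day * 365       # 35,040 people per year
assert (per_hour, per_day, per_year) == (6, 96, 35040)

# A second, equal tranche for the two-blink discomforts:
both_per_year = 2 * per_year
assert both_per_year == 70080

# Sentences handed down before the first victim is released:
fifty_year_total = 50 * both_per_year
assert fifty_year_total == 3504000   # "over 3.5 million"
```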

My ques­tion to the tor­tur­ers is how far are you will­ing to take your logic be­fore you look at your­self in the mir­ror and see a mon­ster?

• 3^^^3 is very very very very large.

If we’re send­ing un­told trillions of peo­ple to tor­ture ev­ery year, out of 3^^^3 peo­ple to­tal, that means that over the whole his­tory of our uni­verse, fu­ture and past, we have a van­ish­ingly small chance of see­ing a sin­gle per­son in our uni­verse get taken away for tor­ture. Two peo­ple is even more neg­ligible. In the mean­time, all dis­com­forts up to the level of one mild cold get pre­vented for ev­ery­one.

Heck, I’d be will­ing to ab­sorb all the tor­ture-prob­a­bil­ity our uni­verse would re­ceive for my­self, just so I wouldn’t have to suffer through the mild cold I’m hav­ing right now. I take a greater risk by walk­ing down the stairs ev­ery day. Where do I sign up?

• OK, so you would accept less than one person per universe being tortured for 50 years for everyone to avoid occasional mild discomfort. But that doesn’t answer the question of how far you are willing to take this logic. We haven’t even begun to touch serious discomfort like half the population getting menstrual cramps every month, let alone prolonged pain and suffering. Would you send one person per planet for torture? One person per city? One person per family?

The end result of this game is that a significant minority of people are being tortured at any one time so the majority can live lives free of discomfort, pain and suffering. So is your acceptable ratio 1:1,000,000, or 1:10?

• I’m pretty sure that right now more than 1 in 1,000,000 peo­ple around the world (that is, around 7000 peo­ple to­tal) are ex­pe­rienc­ing suffer­ing at least as bad as the hy­po­thet­i­cal tor­ture. Tak­ing that into ac­count, a ra­tio of 1:1,000,000 would be a strict im­prove­ment. Faced with a choice like that, I might self­ishly re­fuse due to the chance that I would be one of the un­lucky few, whereas right now I am do­ing pretty well com­pared to most peo­ple. But I would like to be the sort of per­son that wouldn’t re­fuse.

(I’m also not con­vinced that a life com­pletely free of dis­com­fort, pain, and suffer­ing is pos­si­ble or de­sir­able; how­ever, this ob­jec­tion doesn’t reach the heart of the mat­ter, so I’m will­ing to ig­nore it for the sake of ar­gu­ment.)

The de­ci­sion would be more difficult once we get to a ra­tio which does not strictly dom­i­nate our cur­rent situ­a­tion. The ter­rible un­fair­ness of a world where you’re ei­ther free of all dis­com­fort or be­ing hor­ribly tor­tured both­ers me; for this rea­son, I think I wouldn’t make the trade for any ra­tio where the to­tal amount of suffer­ing is roughly com­pa­rable to the sta­tus quo. I would have to do some re­search to give you a pre­cise num­ber.

But now we are very far off from the origi­nal prob­lem of dust specks vs. tor­ture, in which the num­ber 3^^^3 is speci­fi­cally cho­sen to be suffi­ciently large that if you have an ac­cept­able ex­change rate at all, 1 : 3^^^3 will be ac­cept­able to you.

• Don’t be bamboozled by big numbers; it is exactly the same problem: how far would you go in maximizing pain for the minority in order to minimize it for the majority? As Eliezer argued so forcefully in the comments above, this problem exists on a continuum, and if you want to break the chain at any point you have to justify why that point and not another.

Your argument for 1:1,000,000 does not go far enough in minimising pain for the majority. One person cannot take the pain of 1,000,000 people without dying or at least becoming unconscious. I suspect the maximum “other people’s pain” a person could endure without losing consciousness is broadly between 5 and 50; let’s say 25.

So if you are will­ing to send one hu­man be­ing out of 3^^^3 peo­ple to be tor­tured for 50 years to re­move a van­ish­ingly small mo­men­tary dis­com­fort for the ma­jor­ity, then you must also be will­ing to con­tinu­ally tor­ture 1 in 25 peo­ple to erad­i­cate all pain in the ma­jor­ity of the other 24. They are two ends of the same con­tinuum, you can­not break the chain.

Both in­stances are bru­tally un­fair on the peo­ple tor­tured, but at least in the sec­ond in­stance the ma­jor­ity will lead bet­ter lives while in the first in­stance not a sin­gle per­son is aware they had one less blink of dis­com­fort in their en­tire life­time. So my ques­tion re­mains to the tor­tur­ers, are you a mon­ster for send­ing 1 in 25 peo­ple to be tor­tured?

• When did we start talk­ing about some­one “tak­ing the pain of other peo­ple”? This is news to me; it wasn’t part of the ar­gu­ment be­fore.

This, I understand, is the reason you’re suggesting that I would torture 1 in 25 people. Well, I wouldn’t torture 1 in 25 people. I have already stated that if the total amount of pain is conserved (there may be difficulties with measuring “total pain”, but bear with me here) then I prefer it to be spread out evenly rather than piled onto one person.

In the dust speck formulation, the 3^^^3 being dustspecked are, in aggregate, suffering much more than the one person being tortured. 3^^^3 is very large. For any continuum you could actually describe that ends in “torture 1 in X people so that the remainder live perfect lives”, X will still be approximately 3^^^3. Possibly divided by some insignificant number like a googolplex that can be written down in mere scientific notation.

At no point did any­one ac­cept your 1:25 pro­posal.

• What makes you think that mak­ing the num­bers big­ger changes any­thing? Any­one who switches an­swers be­tween the origi­nal ques­tion and yours is con­fused.

• So you would be willing to keep sending more and more people to torture, for trivially less discomfort for the majority with each person tortured. At what point would you say enough is enough?

• Once the pos­i­tive con­se­quences are out­weighed by the nega­tive con­se­quences, ob­vi­ously.

• I would sug­gest the an­swer is fairly ob­vi­ously that one per­son be hor­ribly tor­tured for 50 years, on the grounds that the idea “there ex­ists 3^^^3 peo­ple” is in­com­pre­hen­si­ble cos­mic hor­ror even be­fore you add in the mote of dust.

• I am not so sure the ex­is­tence of 3^^^3 peo­ple is a bad thing, but even grant­ing that, as­sume that the 3^^^3 peo­ple ex­ist re­gard­less, and the two choices you have are: a) one of them is tor­tured for 50 years, or b) each and ev­ery one of them gets a mote of dust in the eye.

In gen­eral, if you find an ob­jec­tion to the premises of a ques­tion that does not di­rectly im­pact the “point” of the ques­tion, you should find a var­i­ant of the premises that re­moves that ob­jec­tion, and an­swer the var­i­ant of the ques­tion with that as the premise. See The Least Con­ve­nient Pos­si­ble World.

• Wait, does the origi­nal ques­tion sim­plify to:

“[There exists 3^^^3 people] AND [of the set of all people there exists one that is tortured for 50 years OR of the set of all people, all get a mote of dust in the eye; which would you prefer]”?

Be­cause that would be quite differ­ent to:

“[of the set of all peo­ple there ex­ists one per­son who will be tor­tured for 50 years] OR [there ex­ists 3^^^3 peo­ple AND each of them gets a mote of dust in the eye]; which would you pre­fer?”

I an­swered the lat­ter.

• The point of the question was to ask us to judge between the disutility of many people dust specked and a single person tortured, not to place a value on whether 3^^^3 existences are themselves a bad or a good thing.

So, kind of the former interpretation, except that the “3^^^3 people” part is merely the setting that enables the question, not really the point of the question...

EDIT: Btw, since I’m an anti-specker, I tried to calculate an upper bound once for the number of specks… It ended up being about 1.2 * 10^20 dust specks

• Surely the in­com­pre­hen­si­bly large num­ber is part of the point of the ques­tion, oth­er­wise why not use the set of all ex­ist­ing peo­ple be­ing dust specked? ~7 billion dust­moted vs. 1 tor­tured?

3^^^3 peo­ple is more sen­tient mass than could phys­i­cally fit in our uni­verse.

Edit: Here’s how I imagined that playing out: 3^^^3 people are brought into existence, displacing all the matter of the universe. Each, while still momentarily conscious, gets a mote of this matter in their eye, causing minor discomfort. They then all immediately die, and in the following eternity their bodies and the remainder of the universe collapse to a single point.

• Surely the in­com­pre­hen­si­bly large num­ber is part of the point of the ques­tion, oth­er­wise why not use the set of all ex­ist­ing peo­ple be­ing dust specked? ~7 billion dust­moted vs. 1 tor­tured?

Be­cause 7 billion dust specks aren’t enough. Ob­vi­ously.

The point of the ques­tion is an ex­tremely large num­ber of tiny di­su­til­ities com­pared to a sin­gle vast di­su­til­ity. When you’re imag­in­ing 3^^^3 deaths in­stead and the de­struc­tion of the uni­verse, you’re kinda miss­ing the point.

• What about 7 billion stubbed toes?

• A few posts up, I’ve already linked to some calcu­la­tions about var­i­ous sce­nar­ios. You can look at them, if you are re­ally gen­uinely in­ter­ested—but why would you be? It’s the prin­ci­ple of the thing that’s in­ter­est­ing, not some in­ex­act num­bers one roughly calcu­lates.

• To me, this experiment shows that absolute utilitarianism does not make a good society. Conversely, a decision between, say, person A getting $100 and person B getting $1, or both of them getting $2, shows absolute egalitarianism isn’t satisfactory either (assuming simple transfers are banned). Perhaps the inevitable realization is that some balance between them is needed: a weighted sum (the sum indicating utilitarianism) with more weight applied to those who have less (the weighting indicating egalitarianism) could provide such a balance.

• To me, this ex­per­i­ment shows that ab­solute util­i­tar­i­anism does not make a good so­ciety.

I don’t see how you’ve ar­rived at that at all. Would you mind elab­o­rat­ing?

• To choose torture rather than dust specks is the utilitarian option, maximizing the total sum of subjective utility. This, however, causes extreme pain to 1 person merely to spare everyone else a negligible inconvenience. Anyone who picks dust specks is agreeing that utilitarianism is not always right (in fact Eliezer says in his follow-up to this that in doing so, one rejects a certain kind of utilitarianism). If you chose torture though, I can see why you’d feel otherwise.

• Where’s your ar­gu­ment to the effect that ab­solute util­i­tar­i­anism does not make a good so­ciety? Fur­ther, could you taboo “good so­ciety” while you’re at it?

• Right, I should have said “is not op­ti­mal” rather than “does not make a good so­ciety”. My ba­sic point be­ing that if we agree that dust specks are best (which I ad­mit we’re not in una­n­im­ity about), we re­ject util­i­tar­i­anism as an op­ti­mal al­lo­ca­tion rule. I do not dis­credit it as a whole (i.e. util­i­tar­i­anism still has some merit as a guideline), but if we re­ject it even once, “ab­solute util­i­tar­i­anism” (the be­lief that it is always op­ti­mal) can­not hold.

• So your ba­sic con­tention is: “If you agree that dust specks is the an­swer, you can’t say that tor­ture is the an­swer”?

This sounds fairly ob­vi­ous.

• Heh, no I’m not saying that if X holds then ~X fails to hold; I expect that to also be the case, but that’s not what I’m saying. I’m saying that we (those of us who chose dust specks) have chosen to reject utilitarianism and are proposing an alternative, since we can’t merely choose nonapples over apples.

• Heh, no I’m not say­ing that if X holds then ~X fails to hold.

I had a feel­ing you weren’t. :)

I’m saying that we (those of us who chose dust specks) have chosen to reject utilitarianism and are proposing an alternative, since we can’t merely choose nonapples over apples.

Yes, that’s ac­cu­rate. If you take util­i­tar­i­anism to its log­i­cal con­clu­sion, you reach things like Tor­ture in T v. DS prob­lems. This con­ver­sa­tion re­minds me a lot of the ex­cel­lent book “The Limits of Mo­ral­ity.”

I’d be cu­ri­ous as to why any­one would choose to re­ject util­i­tar­i­anism on the ba­sis of this thought ex­per­i­ment, though.

• Then it seems we’ve reached an agree­ment, as the agree­ment the­o­rem says we should. And yes, this is a thought ex­per­i­ment, it is un­likely that any­one will ever have to choose be­tween such ex­tremes (or that 3^^^3 peo­ple will ever ex­ist, at once or even in to­tal). How­ever, whether real or not, if one re­jects util­i­tar­i­anism here, they can’t sim­ply say “Well it works in all real sce­nar­ios though”. Eliezer could have just as eas­ily men­tioned a util­ity mon­ster, but he felt like con­vey­ing the same thought ex­per­i­ment in a more origi­nal way.

• How­ever, whether real or not, if one re­jects util­i­tar­i­anism here, they can’t sim­ply say “Well it works in all real sce­nar­ios though”. Eliezer could have just as eas­ily men­tioned a util­ity mon­ster, but he felt like con­vey­ing the same thought ex­per­i­ment in a more origi­nal way.

Right. I’m just un­clear as to why peo­ple (not you speci­fi­cally, I just meant it gen­er­ally in my pre­vi­ous com­ment) in­ter­pret these kinds of sto­ries as crit­i­cisms of util­i­tar­i­anism. They are sim­ply tak­ing the ax­ioms to their log­i­cal ex­tremes, not offer­ing ar­gu­ments against ac­cept­ing those ax­ioms in the first place.

• Ah, well if that’s the point you’re mak­ing then yes, you’re in­deed cor­rect. Eliezer has by no means ar­gued that util­i­tar­i­anism is en­tirely wrong, just shown that its log­i­cal ex­treme is wrong (which may or may not have been his in­ten­tion). If you’re ar­gu­ing that oth­ers are see­ing this in a differ­ent way than we agree­ably have, and have in­ter­preted this ar­ti­cle in a differ­ent way than is ra­tio­nal...well, you may also have a point there. It’s not par­tic­u­larly sur­pris­ing though, since there are dozens (per­haps hun­dreds) of ways to suc­cumb to 1 or more fal­la­cies and only 1 way to suc­cumb to none.

• First of all, I am for the tor­ture—so are 22.1% of the peo­ple re­cently sur­veyed vs 36.8% who are for the dust specks—the rest don’t want to re­spond or are un­sure.

Secondly, the issue of small dispersed disutilities vs large concentrated ones is one we constantly encounter in the real world, and time after time society accepts that, for the purpose of e.g. the convenience of driving, we can tolerate the unavoidable tradeoff of occasional traffic accidents. Nor do we sacrifice every tiny little luxury just to gather resources to save a single extra life. And if you had to break 7 billion legs to save a single man from being tortured, most people would not accept the tradeoff.

Once this logic is in place, all that re­mains is the scope in­sen­si­tivity where peo­ple can’t re­ally in­tuit the vast size of 3^^^3.

• The mathematical object to use for the moral calculations need not be homologous to the real numbers.

My way of seeing it is that the barely noticeable speck of dust will be strictly smaller than torture no matter how many instances of the speck of dust happen. That’s just how my ‘moral numbers’ operate. The speck of dust equals A>0, the torture equals B>0, and A*N<B holds for any finite N. I forbid infinities (the number of distinct beings is finite).

If you think that’s nec­es­sar­ily ir­ra­tional you have a lot of math­e­mat­ics to learn. You can start with or­di­nal num­bers.

Edit: note, I am ignoring consequences of the specks in the eyes, as I think they are not the point of the exercise and only obfuscate everything; plus one has to make assumptions like specks ending up in the eyes of people who are driving.
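One way to make these ‘moral numbers’ concrete is a lexicographic (two-tier) disutility, sketched below; the class and its two tiers are my own illustration of the idea, not anything proposed in the thread:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LexDisutility:
    """Two-tier disutility (torture-grade, speck-grade), compared
    lexicographically: no finite pile of specks reaches torture."""
    torture: int
    specks: int

    def __mul__(self, n):
        # N copies of the same harm.
        return LexDisutility(self.torture * n, self.specks * n)

    def __lt__(self, other):
        # Lexicographic order: the torture-grade tier dominates.
        return (self.torture, self.specks) < (other.torture, other.specks)

SPECK = LexDisutility(0, 1)    # A > 0
TORTURE = LexDisutility(1, 0)  # B > 0

# A*N < B for any finite N, exactly as the comment requires:
for n in (1, 10**6, 7625597484987):
    assert SPECK * n < TORTURE
```

This is the structure of ordinal-style comparison the comment alludes to: the order is transitive, yet no finite multiple of a speck crosses the torture threshold.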

• If I un­der­stand cor­rectly, then I agree with you. But this view­point has con­se­quences.

• The linked post still assumes that discomfort space is one-dimensional, which it need not be. The decision outcomes do need to behave like comparison does (if a>b and b>c it must follow that a>c), but that’s about it.

Bottom line is, we can’t very well reflect on how we think about this issue, so it’s hard to come up with some model that works the same as your head, and which you can reflect on, calculate with a computer, etc.

By the way, consider a being made of 10^30 parts with 10^30 states each. That’s quite a big being, way bigger than a human. The number of distinct states of such a being is (10^30)^(10^30) = 10^(30*10^30), which is unimaginably smaller than 3^^^3. You can pick beings that are to humans as humans are to amoebas, repeat many times, and still be waaay short of 3^^^3. The guys who chose torture, congrats on also having a demonstrable reasoning failure when reasoning about huge numbers.

edit: em­bar­rass­ing math glitch of my own. It is difficult to rea­son about huge num­bers and easy to miss some­thing, such as num­ber of ‘peo­ple’ ex­ceed­ing num­ber of pos­si­ble hu­man mind states by uni­mag­in­ably far.
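The size comparison in the parent comment can be sanity-checked by stacking logarithms (3^^4 already overflows any float, so the comparison has to happen in log and log-log space); the part/state counts are the comment’s own:

```python
import math

LOG10_3 = math.log10(3)

# States of a being with 10^30 parts of 10^30 states each:
# (10**30) ** (10**30) = 10 ** (30 * 10**30).
log10_states = 30.0 * 10.0**30           # about 3e31

# 3^^3 = 3**27 fits exactly in an int.
three_tower_3 = 3 ** 3 ** 3
assert three_tower_3 == 7625597484987

# log10(3^^4) = (3^^3) * log10(3), about 3.6e12, so 3^^4 is
# actually *smaller* than the state count above...
log10_tower_4 = three_tower_3 * LOG10_3
assert log10_tower_4 < log10_states

# ...but log10(log10(3^^5)) is about 3.6e12, versus
# log10(log10(10^(3e31))) = log10(3e31), about 31.5.
# Already at tower height 5 the tower wins, and 3^^^3 is a
# tower 7,625,597,484,987 layers tall.
log10_log10_tower_5 = log10_tower_4 + math.log10(LOG10_3)
assert log10_log10_tower_5 > math.log10(log10_states)
```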

• I tentatively like to measure human experience with logarithms and exponentials. Our hearing is logarithmic, loudness-wise, hence the unit dB. Human experiences are rarely linear, thus it is almost never true that f(x*a) = f(x)*a.

In the above hypothetical, we can imagine the dust specks and the torture. If we propose that NO dust speck ever does anything other than cause mild annoyance, never does one enter the eye of a driver who blinks at an inopportune time and crashes, then I would propose we can say: awfulness(pain) = k^pain.

A dust speck causes approximately Dust = epsilon Dols (a unit of pain; think the opposite of hedons) while intense, effective torture causes possibly several kiloDols per second. Now it is simply a matter of saying Torture = W kDol/s * 50 years, for some reasonable W. Lastly, compare k^Dust * 3^^^3 ⇔ k^Torture.
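Even under this exponential model the comparison can be sketched in log space. The magnitudes below (k = 10, torture at 1 kDol/s for 50 years) are hypothetical stand-ins, and the speck side is drastically lower-bounded by log10(3^^4) alone; the point is that taking a logarithm only strips one layer off the 3^^^3 tower:

```python
import math

# Hypothetical magnitudes, for illustration only: k = 10 and
# torture at 1 kDol/s sustained for 50 years.
k = 10.0
torture_dols = 1000.0 * 50 * 365 * 24 * 3600   # ~1.58e12 Dols

# Torture side in log10: Torture * log10(k) ~ 1.6e12.
log10_torture_side = torture_dols * math.log10(k)

# Speck side: log10(k**Dust * 3^^^3) >= log10(3^^4)
#           = (3^^3) * log10(3) ~ 3.6e12,
# and that bound discards all but four layers of a
# 7.6e12-layer tower.
log10_speck_side_lower_bound = (3 ** 27) * math.log10(3)

# Even the crude lower bound beats the torture side:
assert log10_speck_side_lower_bound > log10_torture_side
```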

• I choose the specks. My utility function u(what happens to person 1, what happens to person 2, …, what happens to person N) doesn’t equal f_1(what happens to person 1) + f_2(what happens to person 2) + … + f_N(what happens to person N) for any choice of f_1, …, f_N, not even allowing them to be different; in particular, u(each of n people gets one speck in their eye) approaches a finite limit as n approaches infinity, and this limit is less negative than u(one person gets tortured for 50 years).

• I think it might be in­ter­est­ing to re­flect on the pos­si­bil­ity that among the 3^^^3 dust speck vic­tims there might be a smaller-but-still-vast num­ber of peo­ple be­ing sub­jected to vary­ing lengths of “con­stantly-hav­ing-dust-thrown-in-their-eyes tor­ture”. Throw­ing one more dust speck at each of them is, up to per­mut­ing the vic­tims, like giv­ing a smaller-but-still-vast num­ber of peo­ple 50 years of dust speck tor­ture in­stead of leav­ing them alone.

(Don’t know if any­one else has already made this point—I haven’t read all the com­ments.)

• Idea 1: dust specks, be­cause on a lin­ear scale (which seems to be always as­sumed in dis­cus­sions of util­ity here) I think 50 years of tor­ture is more than 3^^^3 times worse than a dust speck in one’s eye.

Idea 2: dust specks, because most people arbitrarily place bad things into incomparable categories. The death of your loved one is deemed to be infinitely worse than being stuck in an airport for an hour. It is incomparable; any number of 1 hour waits is less bad than a single loved one dying.

• Idea 1: dust specks, be­cause on a lin­ear scale (which seems to be always as­sumed in dis­cus­sions of util­ity here) I think 50 years of tor­ture is more than 3^^^3 times worse than a dust speck in one’s eye.

How much would you have to de­crease the amount of tor­ture, or in­crease the num­ber of dust specks, be­fore the dust specks would be worse?

• I don’t know. I don’t sup­pose you claim to know at which point the num­ber of dust specks is small enough that they are prefer­able to 50 years of tor­ture?

(which is why I think that Idea 2 is a bet­ter way to rea­son about this)

• I would pre­fer the dust motes, and strongly. Pain trumps in­con­ve­nience.

And yet...we ac­cept au­to­mo­biles, which kill tens of thou­sands of peo­ple per year, to avoid in­con­ve­nience. (That is, au­to­mo­biles in the hands of reg­u­lar peo­ple, not just trained pro­fes­sion­als like am­bu­lance drivers.) But it’s hard to calcu­late the benefits of hav­ing a ve­hi­cle.

Re­duc­ing the na­tional speed limit to 30mph would prob­a­bly save thou­sands of lives. I would find it un­con­scionable to keep the speed limit high if ev­ery­one were im­mor­tal. At pre­sent, such a mea­sure would trade lives for parts of lives, and it’s a mat­ter of math to say which is bet­ter...though we could eas­ily re­ar­range our lives to ob­vi­ate most travel.

• Re­duc­ing the na­tional speed limit to 30mph would prob­a­bly save thou­sands of lives. I would find it un­con­scionable to keep the speed limit high if ev­ery­one were im­mor­tal.

I had to read that twice before I realised you meant “immortal like an elf” rather than “immortal like Jack Harkness and Connor MacLeod”.

• In­ter­est­ing ques­tion. I think a similar real-world situ­a­tion is when peo­ple cut in line.

Sup­pose there is a line of 100 peo­ple, and the line is mov­ing at a rate of 1 per­son per minute.

Is it ok for a new per­son to cut to the front of the line, be­cause it only costs each per­son 1 ex­tra minute, or should the new per­son stand at the back of the line and en­dure a full 100 minute wait?

Of course, not everyone in line endures the same wait duration; a person near the front will have a significantly shorter wait than a person near the back. To address that issue one could average the wait times of everyone in line and say that there is an average wait time of 49.5 minutes per person in line [the waits run from 0 to n-1 minutes, so Avg(n) = (n-1)/2].
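A quick sketch of that arithmetic, assuming the person at the front waits 0 minutes and the person at the back waits 99:

```python
n = 100
waits = range(n)                 # person k waits k minutes
avg_wait = sum(waits) / n
assert avg_wait == 49.5          # the quoted average

# One cut-in adds 1 minute to each of the n waits: 100 dispersed
# person-minutes, exactly equal to the single 100-minute wait the
# cutter avoids. Same total, different distribution -- the dust
# speck question in miniature.
cost_to_line = n * 1
cutter_saving = n
assert cost_to_line == cutter_saving == 100
```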

Is it ok for a sec­ond per­son to also cut to the front of the line? How many peo­ple should be al­lowed to cut to the front, and which peo­ple of those who could pos­si­bly cut to the front should be al­lowed to do so?

• Is it ok for a new per­son to cut to the front of the line, be­cause it only costs each per­son 1 ex­tra minute, or should the new per­son stand at the back of the line and en­dure a full 100 minute wait?

This is one of the rea­sons why util­i­tar­i­anism makes me cringe. “We can do first-or­der calcu­la­tions and come up with a good an­swer! What could go wrong?”

• I’d gladly get a speck of dust in my eye as many times as I can, and I’m sure those 3^^^3 peo­ple would join me, to keep one guy from be­ing tor­tured for 50 years.

• Maybe you will in­deed, but should you?

• This seems to work nearly as well for any harm less than be­ing tor­tured for 50 years — say, be­ing tor­tured for 25 years.

• I wouldn’t vol­un­teer for 25 years of tor­ture to save a ran­dom per­son from 50. A rel­a­tive, maybe.

• Sup­pose some frac­tion of the 3^^^3 dropped out. How many dust specks would you be will­ing to take? Two? Ten? A thou­sand? A mil­lion? A billion? That’s half a mil­lime­ter in di­ame­ter, now, and we’re only at 10^9. How about 10^12? 10^15? 10^18? We’re around half a me­ter in di­ame­ter now, ap­proach­ing or ex­ceed­ing the size of a foot­ball, and we’ve not even reached 3^^4 - and re­mem­ber that 3^^^3 is 3^^3^^3 = 3^^7,625,597,484,987.

What, you think that all of the 3^^^3 will go for it? All of them, chipping in to save one person who was getting 50 years of torture? In a universe with 3^^^3 people in it, how many people do you think are being tortured? Our planet has had around 10^11 human beings in its history. If we say that only one of those 10^11 people was ever tortured for 50 years in history—or even that there was only a one-in-a-thousand chance of it, one in 10^14—how many people would be tortured for 50 years among the more than 3^^^3 we are positing? And do you think that all 3^^^3 will choose the same one you did?

Would you consider that, perhaps, one dust speck is a bit much to pay to save one part in 3^^^3 of a victim?

• Would you consider that, perhaps, one dust speck is a bit much to pay to save one part in 3^^^3 of a victim?

When multiple agents coordinate, their decision delivers the whole outcome, not a part of it. Whatever you decide, everyone who reasons similarly will decide as well. Thus, you have absolute control over which outcome to bring about, even if you are only one of a gazillion like-minded voters.

Here, you decide whether to save one person at the cost of harming 3^^^3 people. This is not equivalent to saving 1/3^^^3 of a person at the cost of harming one person, because the saving of 1/3^^^3 of a person is not something that could actually happen; it is at best a utilitarian simplification, which you must make explicit and not confuse for a decision-theoretic construction.

• If it were a one-shot deal with no cheaper alternative, I could see agreeing. But that still leaves the other 3^^^3/10^14 victims, and this won’t scale to deal with those.

• In the real world the possibility of torture obviously hurts more people than just the person being tortured. By theorizing about the utility of torture you are actually subjecting possibly billions of people to periodic bouts of fear and pain.

• Ask yourself this to make the question easier. What would you prefer: getting 3^^^3 dust specks in your eye, or being hit with a spiked whip for 50 years?

You must live long enough to feel the 3^^^3 specks in your eye, and each one lasts a frac­tion of a sec­ond. You can feel noth­ing else but that speck in your eye.

So, it boils down to this question. Would you rather be whipped for 50 years or get specks in your eye for over a googolplex of years?
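“Over a googolplex of years” is, if anything, a vast understatement. A log-of-log comparison (assuming, purely for arithmetic’s sake, one speck per second) shows that even 3^^5 specks outlast a googolplex of years, and 3^^^3 towers far above 3^^5:

```python
import math

SECONDS_PER_YEAR = 31_557_600          # Julian year, for illustration

# log10 of a googolplex of years expressed in seconds: ~1e100 + 7.5.
log10_googolplex_years = 1e100 + math.log10(SECONDS_PER_YEAR)

# log10(3^^4) = (3^^3) * log10(3) ~ 3.6e12, so
# log10(log10(3^^5)) ~ 3.6e12 as well -- dwarfing
# log10(1e100 + 7.5) ~ 100.
log10_tower_4 = (3 ** 27) * math.log10(3)
log10_log10_tower_5 = log10_tower_4 + math.log10(math.log10(3))

# Compare in log-log space, since neither side fits in a float:
assert log10_log10_tower_5 > math.log10(log10_googolplex_years)
```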

If I could possibly put a number on the disutility of a speck of dust in the eye, and compare that to the negative utility of a year of depression, or being whipped once, or having arms broken, it seems impossible that the 50 years of torture could have a more negative value.

• It seems that many, in­clud­ing Yud­kowsky, an­swer this ques­tion by mak­ing the most ba­sic mis­take, i.e. by cheat­ing—as­sum­ing facts not in ev­i­dence.

We don't know anything about (1) the side effects of picking SPECKS (such as car crashes), and we definitely don't know that (2) the torture victim can "acclimate". (2) in particular seems like cheating in a big way, especially given the statement "without hope or rest".

There's nothing rational about posing a hypothetical and then adding in additional facts in your answer. However, that is a great way to avoid the question presented.

• I’ve re­ceived minus 2 points (that’s bad I guess?) with no replies, which is very illu­mi­nat­ing… I sup­pose I’m just re­peat­ing the above points on lex­i­co­graphic prefer­ences.

Any answer to the question involves making value choices about the relative harms associated with torture and specks. I can't see how there's an "obvious" answer at all, unless one is arrogant enough to assume their value choices are universal and beyond challenge.

Unless you add facts and assumptions not stated, the question compares torture x 50 years to 1 dust speck, one time, in an infinite number of people's eyes. Am I missing something? Because it seems it can't be answered without reference to value choices, which to anyone who doesn't share those values will naturally appear irrational.

• “I’ve re­ceived minus 2 points (that’s bad I guess?) with no replies, which is very illu­mi­nat­ing… ”

I think this is mainly be­cause your com­ment seemed un­in­formed by the rele­vant back­ground but was pre­sented with a con­de­scend­ing and nega­tive tone. Com­ments with both these char­ac­ter­is­tics tend to get down­voted, but if you cut back on one or the other you should get bet­ter re­sponses.

“It seems that many, in­clud­ing Yud­kowsky, an­swer this ques­tion by mak­ing the most ba­sic mis­take, i.e. by cheat­ing—as­sum­ing facts not in ev­i­dence.”

http://​less­wrong.com/​lw/​2k/​the_least_con­ve­nient_pos­si­ble_world/​

“Any an­swer to the ques­tion in­volves mak­ing value choices”

Yes it does.

“com­pares tor­ture x 50 years to 1 dust speck in an in­finite num­ber peo­ple’s eyes”

3^^^3 is a (very large) finite num­ber.

“It can’t be an­swered with­out refer­ence to value choices—which to any­one who doesn’t share those val­ues will nat­u­rally ap­pear ir­ra­tional.”

Mo­ral anti-re­al­ists don’t have to view differ­ences in val­ues as re­flect­ing ir­ra­tional­ity.

• Fair enough, apolo­gies for the tone.

But if answering the question involves making arbitrary value choices, I don't understand how there can possibly be an obvious answer.

• There isn't for agents in general, but most humans will in fact trade off probabilities of big bads (death, torture, etc.) against minor harms, and so preferring SPECKS indicates a seeming incoherence of values.

• Thanks for the pa­tient ex­pla­na­tion.

• Com­ments with both these char­ac­ter­is­tics tend to get down­voted, but if you cut back on one or the other you should get bet­ter re­sponses.

I’d just like to note that com­ments in­formed by the rele­vant back­ground but con­de­scend­ing and nega­tive are of­ten voted down as well. Though An­noy­ance seems to have rel­a­tively high karma any­way.

• I’d just like to note that com­ments in­formed by the rele­vant back­ground but con­de­scend­ing and nega­tive are of­ten voted down as well.

I agree. See DS3618 for a crys­tal-clear ex­am­ple.

• I don’t think that case is crys­tal-clear, could you ex­plain this a bit more?

Looking at DS3618's comments, he (I estimate gender based on writing style and the demographics of this forum and of the CMU PhD program he claims to have entered) had some good (although obvious) points regarding peer review and Flare. Those comments were upvoted.

The com­ments that were down­voted seem to have been very nega­tive and low in in­formed con­tent.

He claimed that calling intelligent design "creationism" was "wrong" because ID is logically separable from young-earth creationism and incorporates the idea of "irreducible complexity." However, arguments from design, including forms of the "irreducible complexity" argument, have been creationist standbys for centuries. Rudely chewing someone out for not defining creationism in a particular narrow fashion, the fashion advanced by the Discovery Institute as part of an organized campaign to evade court rulings, does deserve downvoting. Suggesting that the Discovery Institute, including Behe, isn't a Christian front group is also pretty indefensible given the public information on it (e.g. the "wedge strategy" and numerous similar statements by DI members to Christian audiences, which show it to be a two-faced organization).

This com­ment im­plic­itly de­manded that no one note limi­ta­tions of the brain with­out first build­ing AGI, and was lack­ing in con­tent.

DS3618 also claims to have a strato­spheric IQ, but makes nu­mer­ous spel­ling and gram­mat­i­cal er­rors. Per­haps he is not a na­tive English speaker, but this does shift prob­a­bil­ity mass to the hy­poth­e­sis that he is a troll or sock pup­pet.

He says that he entered the CMU PhD program without a bachelor's degree, based on industry experience. This is possible, as CMU's PhD program has no formal admissions requirements according to its documentation. However, given base rates and the context of the claim, it is suspiciously convenient, and shifts further probability mass towards the troll hypothesis. I suppose one could go through the CMU Computer Science PhD student directory to find someone without a B.S. and with his stated work background to confirm his identity (only reporting whether there is such a person, not making the anonymous DS3618's identity public without his consent).

• I strongly doubt that per­son counts as “in­formed by the rele­vant back­ground”.

• I con­sid­ered that, which is why I said that the re­sponses would be “bet­ter.”

• Tim: You’re right—if you are a rea­son­ably at­trac­tive and charis­matic per­son. Other­wise, the ques­tion (from both sides) is worse than the dust speck.

(Asking people also puts you in the picture. You would have to spend eternity asking people a silly question, and learning all possible linguistic vocalizations in order to do so. There are many fewer vocalizations than possible languages, and many fewer possible human languages than 3^^^3. You will spend more time going from one person of the SAME language to another, at 1 femtosecond per journey, than you would spend learning all possible human languages. That would be true even if the people were fully shuffled by language: just 1 femtosecond each, for all the times when coincidence gives you two of the same language in a row. 3^^^3 is that big.)

• I think you should ask people whether they would consent to having a dust speck fly into their eye to save someone from torture, until you have at least 3^^^3 who would. When you have enough people, just put dust specks into their eyes and save the others.

• I came across this post only today, because of the current comment in the "recent comments" column. Clearly, it was an exercise that drew an unusual amount of response. It further reinforces my impression of much of the OB blog, posted in August, and denied by email.

• OK, I see I got a bit long-winded. The in­ter­est­ing part of my ques­tion is if you’d take the same de­ci­sion if it’s about you in­stead of oth­ers. The an­swer is ob­vi­ous, of course ;-)

The other details/versions I mentioned are only intended to explore the "contour of the value space" of the other posters. (I'm sure Eliezer has a term for this, but I forget it.)

• I know you’re all get­ting a bit bored, but I’m cu­ri­ous what you think about a differ­ent sce­nario:

What if you have to choose between (a) for the next 3^^^3 days, you get one extra speck in your eye per day beyond the normal amount, and for 50 years you're placed in stasis, or (b) you get the normal amount of specks in your eyes, but during the next 3^^^3 days you'll pass through 50 years of atrocious torture?

Every­thing else is con­sid­ered equal in the other cases, in­clud­ing the fact that (i) your to­tal lifes­pan will be the same in both cases (more than 3^^^3 days), (ii) the specks are guaran­teed to not cause any phys­i­cal effects other than those men­tioned in the origi­nal post (i.e., you’re min­i­mally an­noyed and blink once more each day; there are no “tricks” about hid­den con­se­quences of specks), (iii) any other oc­cur­rence of specks in the eye (yours or oth­ers’) or tor­ture (you or oth­ers) will hap­pen ex­actly the same for ei­ther choice, (iv) the 50 years of ei­ther sta­sis or tor­ture would hap­pen at the same points and (v) af­ter the end of the 3^^^3 days the state of the world is ex­actly the same ex­cept for you (e.g., the ge­nie doesn’t come back with some­thing tricky).

Also assume that during the 3^^^3 days you are human-shaped and human-minded, except that your memory (and ability to use it) is stretched to work over that duration as a typical human's does during a typical life.

Does your an­swer change if ei­ther:
A) it’s guaran­teed that ev­ery­thing else is perfectly equal (e.g., the two pos­si­ble cases will mag­i­cally be for­bid­den to in­terfere with any of your de­ci­sions dur­ing the 3^^^3 days, but af­ter­wards you’ll re­mem­ber them; in the case of tor­ture, any re­main­ing trauma will re­main un­til healed “phys­i­cally”. More suc­cinctly, there are no side effects dur­ing the 3^^^3 days, and none other than the “nor­mal” ones af­ter­wards).
B) the 50 years of tor­ture hap­pen at the start, end, or dis­tributed through­out the pe­riod.
C) we re­place the life pe­riod with ei­ther (i) your en­tire lifes­pan or (ii) in­finity, and/​or the pe­riod of tor­ture with (i) any con­stant length larger than one year or (ii) any con­stant frac­tion of the lifes­pan dis­cussed.
D) you are mag­i­cally jus­tified to put ab­solute cer­tain trust in the offer (i.e., you’re sure the ge­nie isn’t trick­ing you).
E) re­place “speck in the eye” by “one hair on your body grows by half the nor­mal amount” for each day.

Of course, you don’t have to ad­dress ev­ery vari­a­tion men­tioned, just those that you think rele­vant.

• “Sup­pose the ques­tion were: Which is bet­ter, for one per­son to be tor­tured for 50 years or for ev­ery­one on earth to be tor­tured for 49 years? Would you re­ally choose the lat­ter? Would you not, in fact, jump at the chance to be the sin­gle per­son for 50 years if that were the only way to get that out­come rather than the other one?”

My criticism was for this specific initial example, which, yes, did seem "obvious" to me. Very few, if any, ethical opinions can be generalized over every situation and still seem reasonable. At least by my definition of "reasonable".

Notice that I didn't single anyone out as being "bad". Morality is subjective and I don't dispute that. "Every man is right by his own mind." I cautioned that we shouldn't allow a desire to stand out to factor into a decision such as this. I know well that theatrics aren't an uncommon element on mailing lists/blogs. This example shocked me because toy decisions can become real decisions. I have a hunch that I wouldn't be the only person shocked by this. If this specific example were put before all of humanity, I imagine that the people who were not shocked by it would be the minority. I don't think that I'm being unreasonable.

• Jeffrey, do you re­ally think se­rial kil­ling is no worse than mur­der­ing a sin­gle in­di­vi­d­ual, since “Sub­jec­tive ex­pe­rience is re­stricted to in­di­vi­d­u­als”?

In fact, if you kill some­one fast enough, he may not sub­jec­tively ex­pe­rience it at all. In that case, is it no worse than a dust speck?

• I know that this is only a hy­po­thet­i­cal ex­am­ple, but I must ad­mit that I’m fairly shocked at the num­ber of peo­ple in­di­cat­ing that they would se­lect the tor­ture op­tion (as long as it wasn’t them be­ing tor­tured). We should be wary of the temp­ta­tion to sup­port some­thing un­ortho­dox for the effect of: “Hey, look at what a hard­core ra­tio­nal­ist I can be.” Real de­ci­sions have real effects on real peo­ple.

• Since I would not be one of the people affected, I would not consider myself able to make that decision alone. In fact, my preferences are irrelevant in that situation even if I consider the situation to be obvious.

To have a situation with 3^^^3 people, we must have at least that many people capable of existing in some meaningful way. I assume we cannot query them about their preferences in any meaningful (omniscient) way. As I cannot choose who will be tortured or who gets dust specks, I have to make a collective decision.

I think my solution would be to take three different groups of randomly chosen people. The first group would be asked the question itself and given a chance to discuss and change their minds. The second group would be asked whether they would save 3^^^3 people from dust specks by accepting torture. The third group would be asked whether they would agree to be dust-specked to give the person to be tortured a 1/3^^^3 chance of being saved.

If one of the latter tests showed a significant preference for one of the situations, I would assume that situation is for some reason more acceptable when people are given the chance to choose. If people seemed either willing to change the scenario in both situations, or unwilling to change it in either, I would rely on the stated preference of the first group and go by that.

I do not think this solu­tion is good enough.

• I have a ques­tion/​an­swer in re­la­tion to this post that seems to be off-topic for the fo­rum. Click on my name if in­ter­ested.

• My ini­tial re­ac­tion (be­fore I started to think...) was to pick the dust specks, given that my bi­ases made the suffer­ing caused by the dust specks morally equiv­a­lent to zero, and 0^^^3 is still 0.

However, given that the problem stated an actual physical phenomenon (dust specks), and not a hypothetical minimal annoyance, you kind of have to take the other consequences of the sudden appearance of the dust specks into consideration, don't you?

If I were omnipotent, and I could make everyone on Earth get a dust speck in their eye right now, how many car accidents would occur? Heavy machinery accidents? Workplace accidents? Even if the chance is vanishingly small, let's say 6 accidents occur on Earth because everyone got a dust speck in their eye. That's one in a billion.

That's one accident for every 10^9 people. Now, what percentage of those are fatal? Transport Canada lists 23.7% of car accidents in 2003 as resulting in a fatality, which is roughly 1 in 4. Let's be nice, assume that everywhere else on Earth is safer, and take that down to 1 in 100 accidents being fatal.

Now, if everyone in existence gets a dust speck in their eye because of my decision, and assuming the hypothetical 3^^^3 people live something approximating the lifestyles on Earth, I've conceivably doomed 1 in 10^11 people to death.

That is, my cloud of dust specks has killed 3^^^3 / 10^11 people.
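The back-of-envelope estimate above can be sketched directly. The accident and fatality rates here are the commenter's own assumptions, not established figures; 3^^^3 itself is far too large to represent, so only the per-person death rate is computed:

```python
# Sketch of the commenter's estimate, taking their assumed rates as
# given: 1 accident per 10^9 specked people, 1 in 100 accidents fatal.
accidents_per_person = 1 / 10**9   # assumed accident rate per speck
fatal_fraction = 1 / 100           # assumed share of accidents that kill
deaths_per_person = accidents_per_person * fatal_fraction

# ~1e-11: one death per 10^11 specked people, hence 3^^^3 / 10^11 deaths.
print(deaths_per_person)
```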

• It is cheat­ing to an­swer this by us­ing worse in­di­vi­d­ual con­se­quences than the dust specks them­selves.

The very point of the ques­tion is the in­finites­i­mal­ity of each in­di­vi­d­ual di­su­til­ity.

• The more I think about the ques­tion, the more I’m con­vinced that it at­tempts to demon­strate the com­men­su­ra­bil­ity of di­su­til­ity by in­vok­ing the com­men­su­ra­bil­ity of di­su­til­ity.

• I don’t see how it’s at­tempt­ing to demon­strate the com­men­su­ra­bil­ity of di­su­til­ity at all; it seems to be us­ing the as­sumed com­men­su­ra­bil­ity of di­su­til­ity to challenge in­tu­itions about di­su­til­ity. Can you say more about what is con­vinc­ing you?

• If the OP’s challeng­ing a moral in­tu­ition that doesn’t at some point re­duce to com­men­su­ra­bil­ity, then I don’t know what it is. It asks us to imag­ine the worst thing that could hap­pen to a ran­dom per­son, and then the least per­cep­ti­bly bad thing that could hap­pen, and seems to be mak­ing the ar­gu­ment that an uni­mag­in­ably huge num­ber of the lat­ter would trump a sin­gle in­stance of the former. What’s that a re­duc­tio for, if not the as­sump­tion that tor­ture (or any­thing com­pa­rably bad) car­ries a spe­cial kind of di­su­til­ity?

On the other hand I’m not sure what the post was writ­ten in re­sponse to, if any­thing, so there might be some con­tex­tual in­for­ma­tion there that I’m miss­ing.

• I’m… puz­zled by this ex­change.

But, yes, agreed that a lot of ob­jec­tions to this post im­plic­itly as­sert that tor­ture is in­com­men­su­rable with dust-specks, and EY is challeng­ing that in­tu­ition.

• Assuming that there are 3^^^3 distinct individuals in existence, I think the answer is pretty obvious: pick the torture. However, since we cannot possibly hope to visualize so many individuals, it's a pointlessly large number. In fact, I would go as low as saying that one quadrillion human beings with dust specks in their eyes outweigh one individual's 50 years of torture. Consider: one quadrillion seconds of minute but noticeable pain versus a scant fifty years of tortured hell. One quadrillion seconds is about 31,709,792 years. Let's just go with 32 million years. Then factor in the magnitudes (torture is far worse than dust specks): 50 years versus 32 million. Good enough odds for you?

However, that being said, the question is yet another installment of lifeboat ethics, and has little bearing on the real world. If we are ever forced to make such a decision, that's one thing; in the meantime, let's work through the systemic issues that might lead to such a situation instead.
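As a sanity check on the figure above, one quadrillion seconds does come out to roughly 31.7 million years; the commenter's exact number is reproduced by assuming a 365-day year:

```python
# Convert one quadrillion seconds to years, using a 365-day year
# (the convention that reproduces the commenter's figure).
SECONDS_PER_YEAR = 365 * 24 * 3600      # 31,536,000
quadrillion_seconds = 10**15
years = quadrillion_seconds / SECONDS_PER_YEAR
print(round(years))  # 31709792, i.e. roughly 32 million years
```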

• Andrew Macdonald asked:
Any takers for the torture?
Assuming the torture-life is randomly chosen from the 3^^^3-sized pool, definitely torture. If I have a strong reason to expect the torture-life to be found close to the beginning of the sequence, similar considerations as for the next answer apply.

OK here goes… it's this life. Tonight, you start fifty years of being loved at by countless sadistic Barney the Dinosaurs. Or, in all 3^^^3 lives you (at your present age) have to sing along to one of his songs. BARNEYLOVE or SONGS?
The answer depends on whether I expect to make it through the 50-year ordeal without permanent psychological damage. If I know with near certainty that I will, the answer is BARNEYLOVE. Otherwise, it's SONGS; while I might still acquire irreversible psychological damage, it would probably take much longer, giving me a chance to live relatively sane for a long time before then.

• I take $1 from each person. It's not the same dilemma.

----

Ri: The idea of convincing others to decide TORTURE is bothering me much more than my own decision.

PK: I don't think there's any worry that I'm off to get my "rack winding certificate" :P

Yes, I know. :-) I was just cu­ri­ous about the bi­ases mak­ing me feel that way.

in­di­vi­d­ual liv­ing 3^^^3 times...keep mem­o­ries and so on of all pre­vi­ous lives

3^^^3 lives worth of mem­o­ries? Even at one bit per life, that makes you far from hu­man. Be­sides, you’re likely to get tor­tured in googol­plexes of those life­times any­way.

Ar­rrgh, stop mess­ing with my head. Ac­tu­ally, no, don’t stop, this is fun! :)

OK here goes… it's this life. Tonight, you start fifty years of being loved at by countless sadistic Barney the Dinosaurs. Or, in all 3^^^3 lives you (at your present age) have to sing along to one of his songs. BARNEYLOVE or SONGS?

• To con­tinue this busi­ness of look­ing at the prob­lem from differ­ent an­gles:

Another for­mu­la­tion, com­ple­men­tary to An­drew Mac­don­ald’s, would be: Should 3^^^3 peo­ple each vol­un­teer to ex­pe­rience a speck in the eye, in or­der to save one per­son from fifty years of tor­ture?

And with re­spect to util­ity func­tions: Another non­lin­ear way to ag­gre­gate in­di­vi­d­ual di­su­til­ities x, y, z… is just to take the max­i­mum, and to say that a situ­a­tion is only as bad as the worst thing hap­pen­ing to any in­di­vi­d­ual in that situ­a­tion. This could be defended if one’s as­sign­ment of util­ities was based on in­ten­sity of ex­pe­rience, for ex­am­ple. There is no-one ac­tu­ally hav­ing a bad ex­pe­rience with 3^^^3 times the bad­ness of a speck in the eye. As for the fact that two peo­ple suffer­ing iden­ti­cally turns out to be no worse than just one—ac­cept­ing a few coun­ter­in­tu­itive con­clu­sions is a small price to pay for sim­plic­ity, right?
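Max-aggregation as described here is easy to make concrete. This sketch uses made-up disutility numbers purely for illustration, and shows both the rule and the counterintuitive consequence the comment concedes:

```python
# Max-aggregation: a situation is only as bad as the worst thing
# happening to any single individual in it.
def badness(disutilities):
    return max(disutilities, default=0)

torture = 1_000_000   # made-up disutility of fifty years of torture
speck = 1e-9          # made-up disutility of one dust speck

# Under this rule, no number of specks ever adds up to the torture...
assert badness([speck] * 10**6) < badness([torture])
# ...and two people suffering identically is "no worse" than one.
assert badness([torture, torture]) == badness([torture])
```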

• Recovering: chuckles no, I meant that thinking about it, and rethinking what properties I'd actually consider reasonable in a utility function, led me to reject my earlier claim of a specific nonlinearity: the assumption that as you increase the number of people who receive a speck, the disutility grows sublinearly. I now believe it to be linear. So a huge bigbigbigbiggigantaenormous number of specks would, of course, eventually have to have more disutility than the torture. But since it took Knuth arrow notation to get to that point, I don't think there's any worry that I'm off to get my "rack winding certificate" :P

But yeah, out of con­text this de­bate would sound like com­plete non­sense… “crazy geeks find it difficult to de­cide be­tween dust specks and ex­treme tor­ture.”

I do have to admit, though, that Andrew's comment about an individual living 3^^^3 times and so on has me thinking again. If "keep memories and so on of all previous lives" = yes (so it's really one really long lifespan) and "permanent physical and psychological damage post torture" = no, then I may take that. I think. Arrrgh, stop messing with my head. Actually, no, don't stop, this is fun! :)

• Tom, if having an upper limit on disutility(Specks) that's lower than disutility(Torture1) is begging the question in favour of SPECKS, then why isn't *not* having such an upper limit begging the question in favour of TORTURE?

It should be obvious why. The constraint in the first one is neither argued for nor agreed on, and by itself it entails the conclusion being argued for. There's no such element in the second.

• Tom, if having an upper limit on disutility(Specks) that's lower than disutility(Torture1) is begging the question in favour of SPECKS, then why isn't *not* having such an upper limit begging the question in favour of TORTURE?

I find it rather sur­pris­ing that so many peo­ple agree that util­ity func­tions may be dras­ti­cally non­lin­ear but are ap­par­ently com­pletely cer­tain that they know quite a bit about how they be­have in cases as ex­otic as this one.

• @Neel.

Then I only need to make the condition slightly stronger: "Any slight tendency to aggregation that doesn't beg the question." I.e., one that doesn't place a mathematical upper limit on disutility(Specks) that is lower than disutility(Torture=1). I trust you can see how that would simply be begging the question. Your formulation:

D(Tor­ture, Specks) = [10 * (Tor­ture/​(Tor­ture + 1))] + (Specks/​(Specks + 1))

...doesn’t meet this test.

Con­trary to what you think, it doesn’t re­quire un­bounded util­ity. Limit­ing the lower bound of the range to (say) 2 * di­su­til­ity(tor­ture) will suffice. The rest of your mes­sage as­sumes it does.

For com­plete­ness, I note that in­tro­duc­ing num­bers com­pa­rable to 3^^^3 in an at­tempt to undo the 3^^^3 scal­ing would cause a for­mu­la­tion to fail the “slight” con­di­tion, mod­est though it is.
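The objection to Neel's bounded formulation can be checked numerically. This is a sketch of the quoted D exactly as written; it shows why it fails the stated test: the specks term is capped at 1, while a single torture already contributes 5, so no number of specks can ever outweigh one torture:

```python
# Neel's proposed disutility function, as quoted above.
def D(torture, specks):
    return 10 * (torture / (torture + 1)) + specks / (specks + 1)

single_torture = D(1, 0)        # 10 * (1/2) = 5.0

# With no torture, the specks term is bounded above by 1, no matter
# how astronomically many specks there are.
many_specks = D(0, 10**100)

assert many_specks < single_torture   # built-in upper limit: SPECKS can never win
```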

• Again, not everyone agrees with the argument that unbounded utility functions give rise to Dutch books. Unbounded utilities only admit Dutch books if you allow a discontinuity between infinite rewards and the limit of increasing finite rewards, but don't allow a discontinuity between infinite planning and the limit of increasing finite plans.

• Re­cov­er­ing ir­ra­tional­ist, I hadn’t thought of things in pre­cisely that way—just “3^^4 is re­ally damn big, never mind 3^^7625597484987”—but now that you point it out, the ar­gu­ment by googol­plex gra­da­tions seems to me like a much stronger ver­sion of the ar­gu­ments I would have put forth.

It only re­quires 3^^5 = 3^(3^7625597484987) to get more googol­plex fac­tors than you can shake a stick at. But why not use a googol in­stead of a googol­plex, so we can stick with 3^^4? If any­thing, the case is more per­sua­sive with a googol be­cause a googol is more com­pre­hen­si­ble than a googol­plex. It’s all about scope ne­glect, re­mem­ber—googol­plex just fades into a fea­ture­less big num­ber, but a googol is ten thou­sand trillion trillion trillion trillion trillion trillion trillion trillion.

• Re­cov­er­ing ir­ra­tional­ist: in your in­duc­tion ar­gu­ment, my first stab would be to deny the last premise (tran­si­tivity of moral judg­ments). I’m not sure why moral judg­ments have to be tran­si­tive.

I acknowledged it won't hold for every morality. There are some pretty barking ones out there. I say it holds for choosing the option that creates less suffering. For finite values, transitivity should work fine.

Next, I’d deny the sec­ond-to-last premise (for one thing, I don’t know what it means to be hor­ribly tor­tured for the short­est pe­riod pos­si­ble—part of the tor­ture­ness of tor­ture is that it lasts a while).

Fine, I still have plenty of googolplex-divisions left. Cut the series as fine as you like. Have billions of intervening levels of discomfort, from speck -> itch -> ouch -> "fifty years of reading the comments to this post." The point is that if you slowly morph from TORTURE to SPECKS in very small steps, every step gets worse, because the population multiplies enormously while the pain differs by an incredibly tiny amount.

• ok, with­out read­ing the above com­ments… (i did read a few of them, in­clud­ing robin han­son’s first com­ment—don’t know if he weighed in again).

dust specks over tor­ture.

the ap­para­tus of the eye han­dles dust specks all day long. i just blinked. it’s quite pos­si­ble there was a dust speck in there some­where. i just don’t see how that adds up to any­thing, even if a very large num­ber is in­voked. in fact with a very large num­ber like the one de­scribed it is likely that hu­man be­ings would evolve more effi­cient tear ducts, or faster blink­ing, or some­thing like that. we would adapt and be stronger.

tor­tur­ing one per­son for fifty years how­ever puts a stain on the whole hu­man race. it af­fects all of us, even if the tor­ture is car­ried out fifty miles un­der­ground in com­plete se­crecy.

• Just thought I'd comment that the more I think about the question, the more confusing it becomes. I'm inclined to think that if we consider the max-utility state to be every person having maximal fulfilment, and a "dust speck" as the minimal amount of "unfulfilment" from the top a person can experience, then two people each experiencing a single "dust speck" is not quite as bad as a single person two "dust specks" below optimal. I think the reason is that the second speck takes away proportionally more than the first speck did.

Oh, one other thing. I was assuming, for my replies both here and in the other thread, that we're only talking about the actual "moment of suffering" caused by a dust speck event, with no potential "side effects".

If we con­sider that those can have con­se­quences, I’m pretty sure that on av­er­age those would be nega­tive/​harm­ful, and when the law of large num­bers is in­voked via stu­pen­dously large num­bers, well, in that case I’m go­ing with TORTURE.

For the mo­ment at least. :)

• the di­su­til­ity of ad­di­tional dust speck­ing to one per­son in a short pe­riod of time prob­a­bly grows faster than linearly

That’s why I used a googol­plex peo­ple to bal­ance the growth. All else equal, do you dis­agree with: “A googol­plex peo­ple dust specked x times dur­ing their life­time with­out fur­ther ill effect is worse than one per­son dust specked for x*2 times dur­ing their life­time with­out fur­ther ill effect” for the range con­cerned?

one per­son get­ting specked ev­ery sec­ond of their life is sig­nifi­cantly worse than a cou­ple billion peo­ple get­ting specked once.

I agree. I never said it wasn’t.

Have to run—will elab­o­rate later.

• Paul: Yet a third might be that we can’t ag­gre­gate the pain of dust ha­rass­ment across peo­ple, so that there’s some amount of sin­gle-per­son dust ha­rass­ment that will be worse than some amount of tor­ture, but if we spread that out, it’s not.

My in­duc­tion ar­gu­ment cov­ers that. As long as, all else equal, you be­lieve:

• A googol­plex peo­ple tor­tured for time x is worse than one per­son tor­tured for time x+0.00001%.

• A googol­plex peo­ple dust specked x times dur­ing their life­time with­out fur­ther ill effect is worse than one per­son dust specked for x*2 times dur­ing their life­time with­out fur­ther ill effect.

• A googolplex people being dust specked every second of their lives without further ill effect is worse than one person being horribly tortured for the shortest experienceable period.

• If a is worse than b and b is worse than c then a is worse than c.

…you can show that, all else equal, to reduce suffering you pick TORTURE. As far as I can see, anyway; I've been wrong before. Again, I acknowledge that it depends on how much you care about reducing suffering compared to other concerns, such as an arbitrary cut-off point, an abhorrence of using maths to answer such questions, or sacred values, which certainly can have utility by keeping worse irrationalities in check.
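Under the linear aggregation these premises assume, the chain can be sketched numerically. The severity gap between torture and a speck (taken here as a factor of 2^40) and the halve-severity, multiply-population-by-a-googolplex step rule are illustrative assumptions, not figures from the comment:

```python
import math

# Walk the induction chain: each step halves the per-person severity
# and multiplies the affected population by a googolplex (10^100).
# Track badness = severity * population in log10, relative to one
# person tortured (severity 1, population 1).
SPECK_SEVERITY = 2.0 ** -40   # assumption: a speck ~10^12 times milder than torture
severity = 1.0
log10_badness = 0.0
steps = 0
while severity > SPECK_SEVERITY:
    severity /= 2                             # harm per person shrinks slightly...
    log10_badness += 100 - math.log10(2)      # ...population grows vastly more
    steps += 1

# After 40 steps we are down to speck-level harm; the population is
# "only" (10^100)^40 = 10^4000, still unimaginably less than 3^^^3,
# yet the aggregate badness has grown by a factor of ~10^3988.
print(steps, round(log10_badness))  # 40 3988
```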

• For Robin’s statis­tics:
Tor­ture on the first prob­lem, and tor­ture again on the fol­lowup dilemma.

Relevant expertise: I study probability theory, rationality, and cognitive biases as a hobby. I don't claim any real expertise in any of these areas.

• Eliezer—I think the is­sues we’re get­ting into now re­quire dis­cus­sion that’s too in­volved to han­dle in the com­ments. Thus, I’ve com­posed my own post on this ques­tion. Would you please be so kind as to ap­prove it?

Re­cov­er­ing ir­ra­tional­ist: I think the hope­fully-forth­com­ing-post-of-my-own will con­sti­tute one kind of an­swer to your com­ment. One other might be that one can, in fact, pre­fer huge dust ha­rass­ment to a lit­tle tor­ture. Yet a third might be that we can’t ag­gre­gate the pain of dust ha­rass­ment across peo­ple, so that there’s some amount of sin­gle-per­son dust ha­rass­ment that will be worse than some amount of tor­ture, but if we spread that out, it’s not.

• @Paul, I was try­ing to find a solu­tion that didn’t as­sume “b) all types of plea­sures and pains are com­men­su­rable such that for all i, j, given a quan­tity of plea­sure/​pain ex­pe­rience i, you can find a quan­tity of plea­sure/​pain ex­pe­rience j that is equal to (or greater or less than) it. (i.e. that plea­sures and pains ex­ist on one di­men­sion).“, but rather es­tab­lished it for the case at hand. Un­less it’s speci­fi­cally stated in the hy­po­thet­i­cal that this is a true 1-shot choice (which we know it isn’t in the real world, as we make analo­gous choices all the time), I think it’s le­gi­t­i­mate to as­sume the ag­gre­gate re­sult of the test re­peated by ev­ery­one. Thus, I’m not in­vok­ing util­i­tar­ian calcu­la­tion, but Kan­tian ab­solutism! ;) I mean to ap­peal to your prac­ti­cal in­tu­ition by sug­gest­ing that a con­stant bar­rage of specks will cre­ate an ex­pe­rience of a like kind with tor­ture.

@Robin Han­son, what lit­tle ex­per­tise I have is in the liberal arts and sci­ences; Eu­clid and Ptolemy, Aris­to­tle and Kant, Ein­stein and Sopho­cles, etc.

• For Robin’s statis­tics:
Given no other data but the choice, I would have to choose tor­ture. If we don’t know any­thing about the con­se­quences of the blink­ing or how many times the choice is be­ing made, we can’t know that we are not caus­ing huge amounts of harm. If the ques­tion de­liber­ately elimi­nated these un­knowns- ie the bad­ness was limited to an eye­blink that does not im­me­di­ately re­sult in some dis­aster for some­one or blind­ness for an­other, and you re­ally are the one and only per­son mak­ing the choice ever, then I’d go with the dust—But these qual­ifi­ca­tions are huge when you con­sider 3^^^3. How can we say the eye­blink didn’t dis­tract a sur­geon and cause a slip of his knife? Given enough tri­als, some­thing like that is bound to hap­pen.

• Dare I say that people may be overvaluing 50 years of a single human life? We know for a fact that some effect will be multiplied by 3^^^3 by our choice. We have no idea what strange and unexpected existential side effects this may have. It’s worth avoiding the risk. If the question were posed with more detail, or specific limitations on the nature of the effects, we might be able to answer more confidently. But to risk not only human civilization, but ALL POSSIBLE CIVILIZATIONS, you must be DAMN SURE you are right. 3^^^3 makes even incredibly small doubts significant.

• “The no­tion of sa­cred val­ues seems to lead to ir­ra­tional­ity in a lot of cases, some of it gross ir­ra­tional­ity like scope ne­glect over hu­man lives and “Can’t Say No” spend­ing.”

Could you post a sce­nario where most peo­ple would choose the op­tion which un­am­bigu­ously causes greater harm, with­out get­ting into these kinds of de­bates about what “harm” means? Eg., where op­tion A ends with shoot­ing one per­son, and op­tion B ends with shoot­ing ten peo­ple, but op­tion B sounds bet­ter ini­tially? We have a hard enough time get­ting rid of ir­ra­tional­ity, even in cases where we know what is ra­tio­nal.

• all types of plea­sures and pains are com­men­su­rable such that for all i, j, given a quan­tity of plea­sure/​pain ex­pe­rience i, you can find a quan­tity of plea­sure/​pain ex­pe­rience j that is equal to (or greater or less than) it. (i.e. that plea­sures and pains ex­ist on one di­men­sion)

Is a con­sis­tent and com­plete prefer­ence or­der­ing with­out this prop­erty pos­si­ble?

• dozens of com­ments filled with blinkered non­sense like “the con­tra­dic­tion be­tween in­tu­ition and philo­soph­i­cal con­clu­sion” when the alleged “philo­soph­i­cal con­clu­sion” hinges on some ridicu­lous sim­plis­tic Ben­thamite util­i­tar­i­anism that no­body out­side of cer­tain eco­nomics de­part­ments and in­su­lar tech­no­cratic com­puter-geek blog com­mu­ni­ties ac­tu­ally ac­cepts!

You’ve quoted one of the few com­ments which your crit­i­cism does not ap­ply to. I carry no wa­ter for util­i­tar­ian philos­o­phy and was here high­light­ing its failure to cap­ture moral in­tu­ition.

• Ac­tu­ally, that was a poor ex­am­ple be­cause tax­ing one penny has side effects. I would rather save one life and ev­ery­one in the world poked with a stick with no other side effects, be­cause I put a sub­stan­tial prob­a­bil­ity on lifes­pans be­ing longer than many might an­ti­ci­pate. So even re­peat­ing this six billion times to save ev­ery­one’s life at the price of 120 years of be­ing re­peat­edly poked with a stick, would still be a good bar­gain.

Where there are no spe­cial in­flec­tion points, a bad re­peated ac­tion should be a bad in­di­vi­d­ual ac­tion, a good re­peated ac­tion should be a good in­di­vi­d­ual ac­tion. Talk­ing about the re­peated case changes your in­tu­itions and gets around your scope in­sen­si­tivity, it doesn’t change the nor­ma­tive shape of the prob­lem (IMHO).

• Re­cov­er­ing Ir­ra­tional­ist:
True: my ex­pected value would be 50 years of tor­ture, but I don’t think that changes my ar­gu­ment much.

Se­bas­tian:
I’m not sure I un­der­stand what you’re try­ing to say. (50*365)/​3^^^3 (which is ba­si­cally the same thing as 1/​3^^^3) days of tor­ture wouldn’t be any­thing at all, be­cause it wouldn’t be no­tice­able. I don’t think you can di­vide time to that ex­tent from the point of view of hu­man con­scious­ness.

I don’t think the math in my per­sonal util­ity-es­ti­ma­tion al­gorithm works out sig­nifi­cantly differ­ently de­pend­ing on which of the cases is cho­sen.
To the ex­tent that you think that and it is rea­son­able, I sup­pose that would un­der­mine my ar­gu­ment that the per­sonal choice frame­work is the wrong way of look­ing at the ques­tion. I would choose the speck ev­ery day, and it seems like a clear choice to me, but per­haps that just re­flects that I have the bias this thought ex­per­i­ment was meant to bring out.

• It is clearly not so easy to have a non-sub­jec­tive de­ter­mi­na­tion of util­ity.
After some thought I pick the torture. That is because the concept of 3^^^3 people means that no evolution will occur while that many people live. The one advantage to death is that it allows for evolution. It seems likely that we will have evolved into much more interesting life forms long before 3^^^3 of us have passed.
What’s the util­ity of that?

• Bayesi­anism, In­finite De­ci­sions, and Bind­ing replies to Vann McGee’s “An air­tight dutch book”, defend­ing the per­mis­si­bil­ity of an un­bounded util­ity func­tion.

An op­tion that dom­i­nates in finite cases will always prov­ably be part of the max­i­mal op­tion in finite prob­lems; but in in­finite prob­lems, where there is no max­i­mal op­tion, the dom­i­nance of the op­tion for the in­finite case does not fol­low from its dom­i­nance in all finite cases.

If you al­low a dis­con­ti­nu­ity where the util­ity of the in­finite case is not the same as the limit of the util­ities of the finite cases, then you have to al­low a cor­re­spond­ing dis­con­ti­nu­ity in plan­ning where the ra­tio­nal in­finite plan is not the limit of the ra­tio­nal finite plans.

• be­cause of com­plex­ity com­pres­sion. If you have 3^^^^3 peo­ple with dust specks, al­most all of them will be iden­ti­cal copies of each other, greatly re­duc­ing abs(U(specks)).

If so, I want my anti-wish back. Evil Ge­nie never said any­thing about com­pres­sion. No won­der he has so many peo­ple to dust. I’m com­plain­ing to GOD Over Djinn.

If they’re not com­pressed, surely a copy will still ex­pe­rience qualia? Does it mat­ter that it’s iden­ti­cal to an­other? If the sum ex­pe­rience of many copies is weighted as if there was just one, then I’m offi­cially con­vert­ing from in­finite set ag­nos­tic to in­finite set athe­ist.

• Tom McCabe wrote:
The prob­a­bil­ity is effec­tively much greater than that, be­cause of com­plex­ity com­pres­sion. If you have 3^^^^3 peo­ple with dust specks, al­most all of them will be iden­ti­cal copies of each other, greatly re­duc­ing abs(U(specks)). abs(U(tor­ture)) would also get re­duced, but by a much smaller fac­tor, be­cause the num­ber is much smaller to be­gin with.

Is there something wrong with viewing this from the perspective of the affected individuals (unique or not)? For any individual instance of a person, the probability of directly experiencing the torture is (10^(10^100))/(3^^^3), regardless of how many identical copies of this person exist.

Mike wrote:
I think a more ap­po­site ap­pli­ca­tion of that trans­la­tion might be:
If I knew I was go­ing to live for 3^^^3+50*365 days, and I was faced with that choice ev­ery day …

I’m won­der­ing how you would phrase the daily choice in this case, to get the prop­er­ties you want. Per­haps like this:
1.) Add a pe­riod of (50*365)/​3^^^3 days to the time pe­riod you will be tor­tured at the end of your life.
2.) Get a speck.

This isn’t quite the same as the origi­nal ques­tion, as it gives choices be­tween the two ex­tremes. And in prac­tice, this could get rather an­noy­ing, as just hav­ing to an­swer the ques­tion would be similarly bad to get­ting a speck. Leav­ing that aside, how­ever, I’d still take the (ridicu­lously short) tor­ture ev­ery day.

The differ­ence is that fram­ing the ques­tion as a one-off in­di­vi­d­ual choice ob­scures the fact that in the ex­am­ple proffered, the tor­ture is a cer­tainty.
I don’t think the math in my per­sonal util­ity-es­ti­ma­tion al­gorithm works out sig­nifi­cantly differ­ently de­pend­ing on which of the cases is cho­sen.

• An­swer de­pends on the per­son’s POV on con­scious­ness.

• 1/​3^^^3 chance of be­ing tor­tured… If I knew I was go­ing to live for 3^^^3+50*365 days, and I was faced with that choice ev­ery day, I would always choose the speck, be­cause I would never want to en­dure the in­evitable 50 years of tor­ture.

That wouldn’t make it in­evitable. You could get away with it, but then you could get mul­ti­ple tor­tures. Rol­ling 6 dice of­ten won’t get ex­actly one “1”.
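The dice point checks out numerically; here is a quick sketch (my own illustration of the analogy, not anything from the thread) of the chance that six fair dice show exactly one “1”:

```python
from math import comb

# Chance that six fair dice show exactly one "1": pick which die shows it,
# times the probability of that exact pattern.
p = comb(6, 1) * (1 / 6) * (5 / 6) ** 5
print(round(p, 4))  # about 0.4019, so "exactly one" misses more often than not
```

So even with an expected count of exactly one, you get zero or multiple “1”s almost 60% of the time.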

• Eliezer, in your re­sponse to g, are you sug­gest­ing that we should strive to en­sure that our prob­a­bil­ity dis­tri­bu­tion over pos­si­ble be­liefs sum to 1? If so, I dis­agree: I don’t think this can be con­sid­ered a plau­si­ble re­quire­ment for ra­tio­nal­ity. When you have no in­for­ma­tion about the dis­tri­bu­tion, you ought to as­sign prob­a­bil­ities uniformly, ac­cord­ing to Laplace’s prin­ci­ple of in­differ­ence. But the prin­ci­ple of in­differ­ence only works for dis­tri­bu­tions over finite sets. So for in­finite sets you have to make an ar­bi­trary choice of dis­tri­bu­tion, which vi­o­lates in­differ­ence.

• “Wow. Peo­ple sure are com­ing up with in­ter­est­ing ways of avoid­ing the ques­tion.”

My re­sponse was a real re­quest for in­for­ma­tion- if this is a pure util­ity test, I would se­lect the dust specks. If this were done to a com­plex, func­tion­ing so­ciety, adding dust specks into ev­ery­one’s eyes would dis­rupt a great deal of im­por­tant stuff- some­one would al­most cer­tainly get kil­led in an ac­ci­dent due to the dis­trac­tion, even on a planet with only 10^15 peo­ple and not 3^^^^3.

• Let’s sup­pose we mea­sure pain in pain points (pp). Any event which can cause pain is given a value in [0, 1], with 0 be­ing no pain and 1 be­ing the max­i­mum amount of pain per­ceiv­able. To calcu­late the pp of an event, as­sign a value to the pain, say p, and then mul­ti­ply it by the num­ber of peo­ple who will ex­pe­rience the pain, n. So for the tor­ture case, as­sume p = 1, then:

tor­ture: 1*1 = 1 pp

For the speck-in-eye case, suppose it causes the least amount of pain greater than no pain possible. Denote this by e, and assume the dust speck causes e amount of pain. Then if e < 1/3^^^3

speck: 3^^^3 * e < 1 pp

and if e > 1/​3^^^3

speck: 3^^^3 * e > 1 pp

So as­sum­ing our moral calcu­lus is to always choose whichever op­tion gen­er­ates the least pp, we need only ask if e is greater than or less than 1/​n.

If you’ve been paying attention, I now have an out to give no answer: we don’t know what e is, so I can’t decide (at least not based on pp). But I’ll go ahead and wager a guess. Since 1/3^^^3 is very small, I think it most likely that any pain-sensing system of any present or future intelligence will have e > 1/3^^^3, so I must choose torture, because torture costs 1 pp but the specks cost more than 1 pp.

This doesn’t feel like what, as a human, I would expect the answer to be. I want to say don’t torture the poor guy, and all the rest of us will suffer the speck so he need not be tortured. But I suspect this is human inability to deal with large numbers: I think about how I would be willing to accept a speck so the guy wouldn’t be tortured, since e pp < 1 pp, and every other individual, supposing they were pp-fearing people, would make the same short-sighted choice. But the net cost would be to distribute more pain with the specks than the torture ever would.

Weird how the hu­man mind can find a log­i­cal an­swer and still ex­pect a non­log­i­cal an­swer to be the truth.
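The pp bookkeeping above can be sketched in code. Since 3^^^3 is far too large to ever compute, this sketch substitutes a stand-in population n (an assumption purely for illustration) and uses exact fractions:

```python
from fractions import Fraction

def pain_points(p, n):
    """Total pp for an event with per-person pain p experienced by n people."""
    return p * n

# Stand-in numbers (assumptions for illustration only): n plays the role of
# 3^^^3, which is uncomputably large in reality, and e is chosen just above 1/n.
n = 10**100
e = Fraction(2, n)                       # e > 1/n

torture = pain_points(Fraction(1), 1)    # p = 1, one person
specks = pain_points(e, n)               # p = e, n people

# The moral calculus: always choose whichever option generates the least pp.
choice = "torture" if torture < specks else "specks"
print(choice)  # torture (1 pp) beats specks (2 pp) whenever e > 1/n
```

Flipping e to just below 1/n flips the verdict, which is exactly the commenter’s point: everything hinges on how e compares to 1/n.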

• For those who would pick TORTURE, what about Vas­sar’s uni­verses of ag­o­nium? Say a googol­plex-per­sons’ worth of ag­o­nium for a googol­plex years.

Tor­ture, again. From the per­spec­tive of each af­fected in­di­vi­d­ual, the choice be­comes:

1.) A (10^(10^100))/(3^^^3) chance of being tortured for 10^(10^100) years.
2.) A 1 chance of a dust speck.
(or very slightly different numbers if the 10^(10^100) people exist in addition to the 3^^^3 people; the difference is too small to be noticeable)

I’d still take the former. (10^(10^100))/(3^^^3) is still so close to zero that there’s no way I can tell the difference without getting a larger universe for storing my memory first.
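For readers who want to pin down these magnitudes, Knuth’s up-arrow recursion can be written directly; only the smallest cases are computable (this is a sketch of the notation itself, and 3^^^3 will never be evaluable):

```python
def knuth_up(a, n, b):
    """Compute a with n up-arrows applied to b, in Knuth's up-arrow notation."""
    if n == 1:
        return a ** b            # one arrow is ordinary exponentiation
    if b == 0:
        return 1                 # base case of the recursion
    return knuth_up(a, n - 1, knuth_up(a, n, b - 1))

print(knuth_up(3, 1, 3))  # 3^3  -> 27
print(knuth_up(3, 2, 3))  # 3^^3 -> 7625597484987
# knuth_up(3, 3, 3) would be 3^^^3: a tower of 7625597484987 threes,
# far beyond any physically possible computation.
```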

• what about Vas­sar’s uni­verses of ag­o­nium? Say a googol­plex-per­sons’ worth of ag­o­nium for a googol­plex years.

To re­duce suffer­ing in gen­eral rather than your own (it would be tough to live with), bring on the cod­dling grinders. (10^10^100)^2 is a joke next to 3^^^3.

Hav­ing said that, it de­pends on the qualia-ex­pe­rienc­ing pop­u­la­tion of all ex­is­tence com­pared to the num­bers af­fected, and whether you change ex­ist­ing lives or make new ones. If only a few googol­plex-squared peo­ple-years ex­ist any­way, I vote dust.

I also vote to kill the bunny.

• “For those who would pick SPECKS, would you pay a sin­gle penny to avoid the dust specks?”

To avoid all the dust specks, yeah, I’d pay a penny and more. Not a penny per speck, though ;)

The rea­son is to avoid hav­ing to deal with the “un­in­tended con­se­quences” of be­ing re­spon­si­ble for that very very small change over such a large num­ber of peo­ple. It’s bound to have some sig­nifi­cant in­di­rect con­se­quences, both pos­i­tive and nega­tive, on the far edges of the bell curve… the net im­pact could be nega­tive, and a penny is lit­tle to pay to avoid re­spon­si­bil­ity for that pos­si­bil­ity.

• > For those who would pick TORTURE, what about Vas­sar’s uni­verses of ag­o­nium? Say a googol­plex-per­sons’ worth of ag­o­nium for a googol­plex years.

If you mean would I con­demn all con­scious be­ings to a googol­plex of tor­ture to avoid uni­ver­sal an­nihila­tion from a big “dust crunch” my an­swer is still prob­a­bly yes. The al­ter­na­tive is uni­ver­sal doom. At least the tor­tured masses might have some small chance of find­ing a solu­tion to their prob­lem at some point. Or at least a googol­plex years might pass leav­ing some fu­ture civ­i­liza­tion free to pros­per. The dust is ab­solute doom for all po­ten­tial fu­tures.

Of course, I’m as­sum­ing that 3^^^3 con­scious be­ings are un­likely to ever ex­ist and so that dust would be ap­plied over and over to the same peo­ple caus­ing the uni­verse to be filled with dust. Maybe this isn’t how the me­chan­ics of the prob­lem work.

• “Re­gard­ing (1), we pretty much always have ex­cel­lent rea­son to mis­trust our judg­ments, and then we have to choose any­way; in­ac­tion is also a choice. The null plan is a plan. As Rus­sell and Norvig put it, re­fus­ing to act is like re­fus­ing to al­low time to pass.”

This goes to the crux of the mat­ter, why to the ex­tent the fu­ture is un­cer­tain, it is bet­ter to de­cide based on prin­ci­ples (rep­re­sent­ing wis­dom en­coded via evolu­tion­ary pro­cesses over time) rather than on the flat ba­sis of ex­pected con­se­quences.

• Fas­ci­nat­ing, and scary, the ex­tent to which we ad­here to es­tab­lished mod­els of moral rea­son­ing de­spite the ob­vi­ous in­con­sis­ten­cies. Some­one here pointed out that the prob­lem wasn’t suffi­ciently defined, but then pro­ceeded to offer ex­am­ples of ob­jec­tive fac­tors that would ap­pear nec­es­sary to eval­u­a­tion of a con­se­quen­tial­ist solu­tion. Robin seized upon the “ob­vi­ous” an­swer that any sig­nifi­cant amount of dis­com­fort, over such a vast pop­u­la­tion, would eas­ily dom­i­nate, with any con­ceiv­able scal­ing fac­tor, the util­i­tar­ian value of the tor­ture of a sin­gle in­di­vi­d­ual. But I think he took the prob­lem state­ment too liter­ally; the dis­com­fort of the dust mote was in­tended to be van­ish­ingly small, over a vast pop­u­la­tion, thus keep­ing the prob­lem in­ter­est­ing rather than “ob­vi­ous.”

But most in­ter­est­ing to me is that no one pointed out that fun­da­men­tally, the as­sessed good­ness of any act is a func­tion of the val­ues (effec­tive, but not nec­es­sar­ily ex­plicit) of the as­ses­sor. And as­sessed moral­ity as a func­tion of group agree­ment on the “good­ness” of an act, pro­mot­ing the in­creas­ingly co­her­ent val­ues of the group over in­creas­ing scope of ex­pected con­se­quences.

Now the val­ues of any agent will nec­es­sar­ily be rooted in an evolu­tion­ary branch of re­al­ity, and this is the ba­sis for in­creas­ing agree­ment as we move to­ward the com­mon root, but this evolv­ing agree­ment in prin­ci­ple on the di­rec­tion of in­creas­ing moral­ity should never be con­sid­ered to point to any par­tic­u­lar des­ti­na­tion of good­ness or moral­ity in any ob­jec­tive sense, for that way lies the “re­pug­nant con­clu­sion” and other para­doxes of util­i­tar­i­anism.

Ob­vi­ous? Not at all, for while we can in­creas­ingly con­verge on prin­ci­ples pro­mot­ing “what works” to pro­mote our in­creas­ingly co­her­ent val­ues over in­creas­ing scope, our ex­pres­sion of those val­ues will in­creas­ingly di­verge.

• There isn’t any right an­swer. An­swers to what is good or bad is a mat­ter of taste, to bor­row from Niet­zsche.

To me the ex­am­ple has mes­si­anic qual­ity. One per­son suffers im­mensely to save oth­ers from suffer­ing. Does the sense that there is a ‘right’ an­swer come from a Judeo-Chris­tian sense of what is ap­pro­pri­ate. Is this a sort of bias in line with bi­ases to­wards ex­pect­ing facts to con­form to a story?

Also, this example suggests to me that the value pluralism of Cowen makes much more sense than some reductive approach that seeks to create one objective measure of good and bad. One person might seek to reduce instances of illness, another to maximize reported happiness, another to maximize a personal sense of beauty. IMO, there isn’t a judge who will decide who is right and who is wrong, and the decisive factor is who can marshal the power to bring about his will, as unsavory as that might be (unless your side is winning).

• I’m with Tomhs. The question has less value as a moral dilemma than as an opportunity to recognize how we think when we “know” the answer. I intentionally did not read the comments last night so I could examine my own thought process, and tried very hard to hold an open mind (my instinct was dust). It’s been a useful and interesting experience. Much better than the brain teasers, which I can generally get because I’m on heightened alert when reading El’s posts. Here being on alert simply allowed me to try to avoid immediately giving in to my bias.

• @Robin,

“But even though there are 26 com­ments here, and many of them prob­a­bly know in their hearts tor­ture is the right choice, no one but me has said so yet.”

I thought that Se­bas­tian Ha­gen and I had said it. Or do you think we gave weasel an­swers? Mine was only con­tin­gent on my math be­ing cor­rect, and I thought his was similarly clear.

Per­haps I was un­clear in a differ­ent way. By ask­ing if the choice was re­peat­able, I didn’t mean to dodge the ques­tion; I meant to make it more vivid. Mo­ral ques­tions are asked in a situ­a­tion where many peo­ple are mak­ing moral choices all the time. If dust-speck dis­plea­sure is ad­di­tive, then we should eval­u­ate our choices based on their po­ten­tial ag­gre­gate effects.

Essen­tially, it’s a same-ra­tio prob­lem, like show­ing that 6:4::9:6, be­cause 6x3=9x2 and 4x3=6x2. If the ag­gre­gate of dust-speck­ing can ever be greater than the equiv­a­lent ag­gre­gate of tor­tur­ing, then it is always greater.
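The scale-invariance behind this can be checked mechanically; a tiny sketch with hypothetical harm numbers (my own, not from the thread):

```python
# If additive aggregate harm from specks exceeds that from torture at one
# scale, it does at every scale: the repetition factor k cancels out.
speck_harm, n_specks = 1, 3_000_000          # hypothetical per-speck harm, count
torture_harm, n_tortures = 1_000_000, 2      # hypothetical per-torture harm, count

base = speck_harm * n_specks > torture_harm * n_tortures
for k in (1, 10, 1000):                      # repeat the whole choice k times
    assert (k * speck_harm * n_specks > k * torture_harm * n_tortures) == base
print(base)  # True here: 3,000,000 > 2,000,000 at every scale
```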

• Re­gard­ing your ex­am­ple of in­come dis­par­ity: I might rather be born into a sys­tem with very un­equal in­comes, if, as in Amer­ica (in my per­sonal and bi­ased opinion), there is a rea­son­able chance of up­ping my in­come through per­sis­tence and pluck. I mean hey, that guy with all that money has to spend it some­where—per­haps he’ll shop at my su­per­store!

But wait, what does wealth mean? In the case where ev­ery­one has the same in­come, where are they spend­ing their money? Are they all buy­ing the same things? Is this a to­tal­i­tar­ian state? An econ­omy with­out dis­par­ity is pretty dis­turb­ing to con­tem­plate, be­cause it means no one is mak­ing an effort to do bet­ter than other peo­ple, or else no one can do bet­ter. Money is not be­ing con­cen­trated or fun­nel­led any­where. Sounds like a pretty moribund econ­omy.

If it’s a situ­a­tion where ev­ery­one always gets what they want and need, then wealth will have lost its con­ven­tional mean­ing, and no one will care whether one per­son is rich and an­other one isn’t. What they will care about is the suc­cess of their God, their sports teams, and their chil­dren.

I guess what I’m say­ing is that there may be no in­ter­est­ing way to sim­plify in­ter­est­ing moral dilem­mas with­out de­stroy­ing the dilemma or ren­der­ing it ir­rele­vant to nat­u­ral dilem­mas.

• Uh… If there’s no such thing as qualia, there’s no such thing as ac­tual suffer­ing, un­less I mi­s­un­der­stand your de­scrip­tion of Den­nett’s views.

But if my un­der­stand­ing is cor­rect, and those views were cor­rect, then wouldn’t the an­swer be “no­body ac­tu­ally ex­ists to care one way or an­other?” (Or am I sorely mis­taken in in­ter­pret­ing that view?)

• The non-lin­ear na­ture of ‘qualia’ and the difficulty of as­sign­ing a util­ity func­tion to such things as ‘minor an­noy­ance’ has been noted be­fore. It seems to some in­solv­able. One solu­tion pre­sented by Den­nett in ‘Con­scious­ness Ex­plained’ is to sug­gest that there is no such thing as qualia or sub­jec­tive ex­pe­rience. There are only ob­jec­tive facts. As Searle calls it ‘con­scious­ness de­nied’. With this ap­proach it would (at least the­o­ret­i­cally) be pos­si­ble to ob­jec­tively de­ter­mine the an­swer to this ques­tion based on some­thing like the num­ber of ergs needed to fire the neu­rons that would rep­re­sent the out­comes of the two differ­ent choices. The idea of which would be the more/​less pleas­ant ex­pe­rience is there­fore not rele­vant as there is no sub­jec­tive ex­pe­rience to be had in the first place. Of course I’m be­ing sloppy here- the word choice would have to be re-defined to in­clude that each ac­tion is de­ter­mined by the phys­i­cal con­figu­ra­tion of the brain and that the chooser is in fact a fic­tional con­struct of that phys­i­cal con­figu­ra­tion. Other­wise, I ad­mit that 3^^^3 peo­ple is not some­thing I can eas­ily con­tem­plate, and that clouds my abil­ity to think of an an­swer to this ques­tion.

• Would it change anything if the subjects were extremely cute puppies with eyes so wide and innocent that even the hardest lumberjack would swoon?

• It seems to me that prefer­ence util­i­tar­i­anism neatly rec­on­ciles the gen­eral in­tu­itive view against tor­ture with a math­e­mat­i­cal util­i­tar­ian po­si­tion. If a pro­por­tion p of those 3^^^3 peo­ple have a moral com­punc­tion against peo­ple be­ing tor­tured, and the re­main­der are in­differ­ent to tor­ture but have a very slight prefer­ence against dust specks, then as long as p is not very small, the over­all prefer­ence would be for dust specks (and if p was very small, then the moral in­tu­itions of hu­man­ity in gen­eral have com­pletely changed and we shouldn’t be in a po­si­tion to make any de­ci­sions any­way). Is there some­thing I’m miss­ing?

• I’m not sure I’m un­der­stand­ing your rea­son­ing here. It seems like you’re sim­ply think­ing about peo­ple’s prefer­ences for a dust speck in the eye, rel­a­tive to their prefer­ences for tor­ture, with­out refer­ence to how many dust specks and how much tor­ture… is that right?

If so, that doesn’t seem to cap­ture the gen­eral in­tu­itive view. In­tu­itively, I strongly pre­fer los­ing a finger to los­ing an arm, but I pre­fer 1 per­son los­ing an arm to a mil­lion peo­ple los­ing a finger. (Or, put differ­ently, I pre­fer a one-in-a-mil­lion chance of los­ing my arm to the cer­tainty of los­ing a finger.) Quan­tity seems to mat­ter.

• There are many ways of ap­proach­ing this ques­tion, and one that I think is valuable and which I can’t find any men­tion of on this page of com­ments is the de­sirist ap­proach.

De­sirism is an eth­i­cal the­ory also some­times called de­sire util­i­tar­i­anism. The de­sirist ap­proach has many de­tails for which you can Google, but in gen­eral it is a form of con­se­quen­tial­ism in which the rele­vant con­se­quences are de­sire-satis­fac­tion and de­sire-thwart­ing.

Fifty years of torture satisfies none and thwarts virtually all desires, especially the most intense desires, for fifty years of one individual’s life, and most of the subsequent years of life also, due to extreme psychological damage. Barely noticeable dust specks neither satisfy nor thwart any desires, and so in a population of any finite size the minor pain is of no account whatever in desirist terms. So a desirist would prefer the dust specks.

The Rep­e­ti­tion Ob­jec­tion: If this choice was re­peated say, a billion times, then the lives of the 3^^^3 peo­ple would be­come un­liv­able due to con­stant dust specks, and so at some point it must be that an ad­di­tional in­di­vi­d­ual tor­tured be­comes prefer­able to an­other dust speck in 3^^^3 eyes.

The de­sirist re­sponse bites the bul­let. Dust specks in eyes may in­crease lin­early, but their effect on de­sire-satis­fac­tion and de­sire-thwart­ing is highly non­lin­ear. It’s prob­a­bly the case that an ad­di­tional tor­ture be­comes prefer­able as soon as the ex­pected marginal util­ity of the next dust speck is a few mil­lion de­sires thwarted, and cer­tainly the case when the ex­pected marginal util­ity of the next dust speck is a few billion de­sires thwarted.

• Can you clar­ify your grounds for claiming that barely no­tice­able dust specks nei­ther satisfy nor thwart any de­sires?

• Ah, yeah, that could be a prob­le­matic as­sump­tion. The grounds for my claim was gen­er­al­iza­tion from my own ex­pe­rience. I have no con­sciously ac­cessible de­sires which are af­fected by barely no­tice­able dust specks.

• Fair enough. I don’t know what de­sirism has to say about con­sciously in­ac­cessible de­sires, but leav­ing that aside for now… can you name an event that would thwart the most neg­li­gable de­sire to which you do have con­scious ac­cess?

• I have a high tol­er­ance for chaotic sur­round­ings, but even so I oc­ca­sion­ally ex­pe­rience a weak, fleet­ing de­sire to im­pose greater or­der on other peo­ple’s be­long­ings in my phys­i­cal en­vi­ron­ment. It could be thwarted by an event like a fly buzzing around my head once, which though not painful at all would di­vert my at­ten­tion long enough to en­sure that the de­sire died with­out hav­ing been suc­cess­fully acted on.

• OK. So, if we as­sume for sim­plic­ity that a fly-buzzing event is the small­est mea­surable de­sire-thwart­ing event a hu­man can ex­pe­rience, you can sub­sti­tute “fly-buzz” for “dust speck” ev­ery­where it ap­pears here and trans­late the ques­tion into a de­sirist eth­i­cal refer­ence frame.

The ques­tion in those terms be­comes: is there some num­ber of peo­ple, each of whom is ex­pe­rienc­ing a sin­gle fly-buzz, where the ag­gre­gated de­sire-thwart­ing caused by that ag­gre­gate event is worse than a much greater de­sire-thwart­ing event (e.g. the canon­i­cal 50 years of tor­ture) ex­pe­rienced by one per­son?

And if not, why not?

• Well, yes, but then as stated earlier I think desirism bites the bullet on “dust speck”, too, given more dust specks. For a quick Fermi estimate: if I suppose that the fly-buzz scenario takes about 5 seconds and is 1/1000th as strong (in some sense) as the desire not to be tortured for 5 seconds, then the number of people at which the fly-buzz scenarios outweigh the torture is about a half trillion.
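That Fermi estimate can be made explicit; here is a sketch under the comment’s own stated assumptions (a fly-buzz lasts about 5 seconds and is 1/1000th as strong as torture):

```python
# Fermi estimate: how many one-off fly-buzz distractions outweigh 50 years of
# torture, assuming each buzz lasts ~5 s at 1/1000th of torture's intensity?
SECONDS_PER_YEAR = 365 * 24 * 3600
torture_seconds = 50 * SECONDS_PER_YEAR      # about 1.58e9 torture-seconds

buzz_torture_equiv = 5 * (1 / 1000)          # each buzz ~ 0.005 torture-seconds

people_needed = torture_seconds / buzz_torture_equiv
print(f"{people_needed:.2e}")  # about 3.15e11 people, a few hundred billion
```

Within Fermi slop, that lands at the same order of magnitude as the half-trillion figure above.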

Granted, for peo­ple who don’t find de­sirism in­tu­itive, this al­tered sce­nario changes noth­ing about the ar­gu­ment. I per­son­ally do find de­sirism in­tu­itive, though un­likely to be a com­plete the­ory of ethics. So for me, given the dilemma be­tween 50 years of tor­ture of one in­di­vi­d­ual and one dust-speck-in-eye or one fly-buzz-dis­trac­tion for each of 3^^^3 peo­ple, I have a strong gut re­ac­tion of “Hell yes!” to prefer­ring the specks and “Hell no!” to prefer­ring the dis­trac­tions.

• Ah. I think I mi­s­un­der­stood you ini­tially, then. Thanks for the clar­ifi­ca­tion.

• No. One of those ac­tions, or some­thing differ­ent, hap­pens if I take no ac­tion. As­sum­ing that nei­ther the one per­son nor the 3^^^3 peo­ple have con­sented to al­low me to harm them, I must choose the course of ac­tion by which I harm no­body, and the ab­stract force harms peo­ple.

If you in­stead offer me the choice where I pre­vent the harm (and that the 3^^^3+1 peo­ple all con­sent to al­low me to do so), then I choose to pre­vent the tor­ture.

My max­i­mal ex­pected util­ity is one in which there is a uni­verse in which I have taken zero ad­di­tional ac­tions with­out the con­sent of ev­ery other party in­volved. With that satis­fied, I seek to max­i­mize my own hap­piness. It would make me hap­pier to pre­vent a sig­nifi­cant harm than to pre­vent an in­signifi­cant harm, and both would be prefer­able to pre­vent­ing no harm, all other things be­ing equal.

If the peo­ple in ques­tion con­sented to the treat­ment, then the de­ci­sion is amoral, and I would choose to in­flict the in­signifi­cant harm.

From a strict util­ity per­spec­tive, if you de­scribe the value the tor­ture as −1, do you de­scribe the value of the speck of dust in one per­son’s eye as less than −1/​(3^^^3)? There is some ep­silon for which it is prefer­able to have harm of ep­silon done to any real num­ber of peo­ple than to have harm of −1 done to one per­son. Ad­mit­ting that does not pro­hibit you from com­par­ing ep­silons, ei­ther.

• At first, I picked the dust specks as be­ing the prefer­able an­swer, and it seemed ob­vi­ous. What even­tu­ally turned me around was when I con­sid­ered the op­po­site situ­a­tion—with GOOD things hap­pen­ing, rather than BAD things. Would I pre­fer that one per­son ex­pe­rience 50 years of the most hap­piness re­al­is­tic in to­day’s world, or that 3^^^3 peo­ple ex­pe­rience the least good, good thing?

• Why do you think that there has to be a sym­me­try be­tween pos­i­tive and nega­tive util­ity?

• The dust speck is a slight irritation. Hearing about someone being tortured is a bigger irritation. Also, pain depends greatly on concentration. Something that hurts “twice as much” is actually much worse: let’s say it is a hundred times worse. Of course this levels off (it is a curve) at some point, but in this case that is not a problem, as we can say that the torture is very close to the physical maximum and the specks are very close to the physical minimum pain. The difference between the speck and the torture is immense. Difference in time = 1.5 M. Difference in hurting 2 M. So we can have a huge number (like 2 million to the power of 24 M to the power of 1.5 M). This number is going to be huge. Even if this does not add up to our number of specks, one can see that one can define parameters to make either side the better choice. In the end it is just a moral question.

• Common sense tells me the torture is worse. Common sense is what tells me the earth is flat. Mathematics tells me the dust specks scenario is worse. I trust mathematics and will damn one person to torture.

• Construct a thought experiment in which every single one of those 3^^^3 people is asked whether he would accept a dust speck in the eye to save someone from being tortured, and take the answers as a vote. If the majority would deem it personally acceptable, then acceptable it is.

• This doesn’t work at all. If you ask each of them to make that decision, you are asking them to compare their one dust speck with somebody else’s one instance of torture. Comparing 1 dust speck with torture 3^^^3 times is not even remotely the same as comparing 3^^^3 dust specks with torture.

If you ask me whether 1 is greater than 3 I will say no. If you ask me 5 times I will say no every time. But if you ask me whether 5 is greater than 3 I will say yes.

The only way to make it fair would be to ask them to compare themselves and the other 3^^^3 − 1 getting dust specks with torture, but I don’t see why asking them should get you a better answer than asking anyone else.

• Compare two scenarios: in the first, the vote is on whether every one of the 3^^^3 people are dust-specked or not. In the second, only those who vote in favour are dust-specked, and then only if there’s a majority. But these are kind of the same scenario: what’s at stake in the second scenario is at least half of 3^^^3 dust-specks, which is about the same as 3^^^3 dust-specks. So the question “would you vote in favour of 3^^^3 people, including yourself, being dust-specked?” is the same as “would you be willing to pay one dust-speck in your eye to save a person from 50 years of torture, conditional on about 3^^^3 other people also being willing?”

• Let me try to get this straight: you are presenting me with a number of moral dilemmas and asking me what I would do in them.

1) Me and 3^^^^3 − 1 other people all vote on whether we get dust specks in the eye or some other person gets tortured.

I vote for torture. It is astonishingly unlikely that my vote will decide, but if it doesn’t then it doesn’t matter what I vote, so the decision is just the same as if it was all up to me.

2) Me and 3^^^^3 − 1 other people all vote on whether everyone who voted for this option gets a dust speck in the eye or some other person gets tortured.

This is a different dilemma, since I have to weigh up three things instead of two: the chance that my vote will save about 3^^^^3 people from being dust-specked if I vote for torture, the chance that my vote will save one person from being tortured if I vote for dust specks, and the (much higher) chance that my vote will save me and only me from being dust-specked if I vote for torture.

I remember reading somewhere that the chance of my vote being decisive in such a situation is roughly inversely proportional to the square root of the number of people (please correct me if this is wrong). Assuming this is the case, then I still vote for torture, since the term for saving everyone else from dust specks still dwarfs the other two.

3) I have to choose whether I will receive a dust speck or whether someone else will be tortured, but my decision doesn’t matter unless at least half of 3^^^^3 − 1 other people would be willing to choose the dust speck.

Once again the dilemma has changed; this time I have lost my ability to save other people from dust specks, and the probability of me successfully saving someone from torture has massively increased. I can safely ignore the case where the majority of others choose torture, since my decision doesn’t matter then. Given that the others choose dust specks, I am not so selfish as to save myself from a dust speck rather than someone else from torture.

You try to make it look like scenarios 2 and 3 are the same, but they are actually very, very different.

The bottom line is that no amount of clever wrangling you do with votes or conditionals can turn 3^^^^3 people into one person. If it could, I would be very worried, since it would imply that the number of people you harm doesn’t matter, only the amount of harm you do. In other words, if I’m offered the choice between one person dying and ten people dying, then it doesn’t matter which I pick.

• Assuming a roughly 50-50 split, the inverse square-root rule is right. Now my issue is why you incorporate that factor in scenario 2, but not scenario 3. I honestly thought I was just rephrasing the problem, but you seem to see it differently? I should clarify that this isn’t you unconditionally receiving a speck if you’re willing to, but only if half the remainder are also so willing.
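The inverse square-root rule is easy to sanity-check numerically: with n independent 50-50 voters, your vote is decisive exactly when the other n − 1 voters split evenly, and that tie probability is close to √(2/(πn)). A minimal sketch (the 50-50 independence assumption is doing all the work here; the function name is my own, not from the thread):

```python
from math import comb, pi, sqrt

def p_decisive(n_voters):
    """Probability of an exact tie among the other n_voters - 1
    voters, each voting 50-50 independently, which is when your
    own vote decides. Assumes n_voters is odd, so the others
    number an even 2m and a tie is C(2m, m) / 2^(2m)."""
    others = n_voters - 1
    return comb(others, others // 2) / 2 ** others

# The exact tie probability tracks sqrt(2 / (pi * n)) closely:
for n in (101, 1001, 10001):
    print(n, p_decisive(n), sqrt(2 / (pi * n)))
```

For n = 101 both columns come out near 0.08, and both shrink by about √10 for each tenfold increase in n, which is the inverse square-root behaviour the comment describes.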

The point of voting, for me, is not an attempt to induce scope insensitivity by personalizing the decision, b