The Lifespan Dilemma

One of our most controversial posts ever was “Torture vs. Dust Specks”. Though I can’t seem to find the reference, one of the more interesting uses of this dilemma was by a professor whose student said “I’m a utilitarian consequentialist”, and the professor said “No you’re not” and told them about SPECKS vs. TORTURE, and then the student—to the professor’s surprise—chose TORTURE. (Yay student!)

In the spirit of always making these things worse, let me offer a dilemma that might have been more likely to unconvince the student—at least, as a consequentialist, I find the inevitable conclusion much harder to swallow.

I’ll start by briefly introducing Parfit’s Repugnant Conclusion, sort of a little brother to the main dilemma. Parfit starts with a world full of a million happy people—people with plenty of resources apiece. Next, Parfit says, let’s introduce one more person who leads a life barely worth living—but since their life is worth living, adding this person must be a good thing. Now we redistribute the world’s resources, making it fairer, which is also a good thing. Then we introduce another person, and another, until finally we’ve gone to a billion people whose lives are barely at subsistence level. And since (Parfit says) it’s obviously better to have a million happy people than a billion people at subsistence level, we’ve gone in a circle and revealed inconsistent preferences.

My own analysis of the Repugnant Conclusion is that its apparent force comes from equivocating between senses of barely worth living. In order to voluntarily create a new person, what we need is a life that is worth celebrating or worth birthing, one that contains more good than ill and more happiness than sorrow—otherwise we should reject the step where we choose to birth that person. Once someone is alive, on the other hand, we’re obliged to take care of them in a way that we wouldn’t be obliged to create them in the first place—and they may choose not to commit suicide, even if their life contains more sorrow than happiness. If we would be saddened to hear the news that such a person existed, we shouldn’t kill them, but we should not voluntarily create such a person in an otherwise happy world. So each time we voluntarily add another person to Parfit’s world, we have a little celebration and say with honest joy “Whoopee!”, not, “Damn, now it’s too late to uncreate them.”

And then the rest of the Repugnant Conclusion—that it’s better to have a billion lives slightly worth celebrating, than a million lives very worth celebrating—is just “repugnant” because of standard scope insensitivity. The brain fails to multiply a billion small birth celebrations to end up with a larger total celebration of life than a million big celebrations. Alternatively, average utilitarians—I suspect I am one—may just reject the very first step, in which the average quality of life goes down.

But now we introduce the Repugnant Conclusion’s big sister, the Lifespan Dilemma, which—at least in my own opinion—seems much worse.

To start with, suppose you have a 20% chance of dying in an hour, and an 80% chance of living for 10^10,000,000,000 years -

Now I know what you’re thinking, of course. You’re thinking, “Well, 10^(10^10) years may sound like a long time, unimaginably vaster than the 10^10 years the universe has lasted so far, but it isn’t much, really. I mean, most finite numbers are very much larger than that. The realms of math are infinite, the realms of novelty and knowledge are infinite, and Fun Theory argues that we’ll never run out of fun. If I live for 10^(10^10) years and then die, then when I draw my last metaphorical breath—not that I’d still have anything like a human body after that amount of time, of course—I’ll go out raging against the night, for a life so short compared to all the experiences I wish I could have had. You can’t compare that to real immortality. As Greg Egan put it, immortality isn’t living for a very long time and then dying. Immortality is just not dying, ever.”

Well, I can’t offer you real immortality—not in this dilemma, anyway. However, on behalf of my patron, Omega, who I believe is sometimes also known as Nyarlathotep, I’d like to make you a little offer.

If you pay me just one penny, I’ll replace your 80% chance of living for 10^(10^10) years, with a 79.99992% chance of living 10^(10^(10^10)) years. That’s 99.9999% of 80%, so I’m just shaving a tiny fraction 10^-6 off your probability of survival, and in exchange, if you do survive, you’ll survive—not ten times as long, my friend, but ten to the power of as long. And it goes without saying that you won’t run out of memory (RAM) or other physical resources during that time. If you feel that the notion of “years” is ambiguous, let’s just measure your lifespan in computing operations instead of years. Really there’s not much of a difference when you’re dealing with numbers like 10^(10^10,000,000,000).

My friend—can I call you friend? - let me take a few moments to dwell on what a wonderful bargain I’m offering you. Exponentiation is a rare thing in gambles. Usually, you put $1,000 at risk for a chance at making $1,500, or some multiplicative factor like that. But when you exponentiate, you pay linearly and buy whole factors of 10 - buy them in wholesale quantities, my friend! We’re talking here about 10^10,000,000,000 factors of 10! If you could use $1,000 to buy a 99.9999% chance of making $10,000 - gaining a single factor of ten—why, that would be the greatest investment bargain in history, too good to be true, but the deal that Omega is offering you is far beyond that! If you started with $1, it takes a mere eight factors of ten to increase your wealth to $100,000,000. Three more factors of ten and you’d be the wealthiest person on Earth. Five more factors of ten beyond that and you’d own the Earth outright. How old is the universe? Ten factors-of-ten years. Just ten! How many quarks in the whole visible universe? Around eighty factors of ten, as far as anyone knows. And we’re offering you here—why, not even ten billion factors of ten. Ten billion factors of ten is just what you started with! No, this is ten to the ten billionth power factors of ten.

Now, you may say that your utility isn’t linear in lifespan, just like it isn’t linear in money. But even if your utility is logarithmic in lifespan—a pessimistic assumption, surely; doesn’t money decrease in value faster than life? - why, just the logarithm goes from 10,000,000,000 to 10^10,000,000,000.
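
(Since none of these numbers fit in ordinary arithmetic, here is a minimal Python sketch of that comparison, carrying log10 of the utility rather than the utility itself. The variable names and the exact figures are mine, purely for illustration.)

```python
import math

# One round of the bargain, under the deliberately pessimistic assumption
# that utility is merely logarithmic in lifespan: U = log10(years).
# The utilities themselves overflow floats, so we carry log10(U) instead.
p_before, log10_U_before = 0.80, 10.0              # U = 10^10, lifespan 10^(10^10) years
p_after,  log10_U_after  = 0.80 * 0.999999, 1e10   # U = 10^(10^10), lifespan 10^(10^(10^10)) years

# log10 is monotonic, so comparing log10(p * U) = log10(p) + log10(U)
# compares the expected utilities themselves.
print(math.log10(p_before) + log10_U_before)       # ~9.9
print(math.log10(p_after) + log10_U_after)         # ~10^10: the penny deal wins overwhelmingly
```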

From a fun-theoretic standpoint, exponentiating seems like something that really should let you have Significantly More Fun. If you can afford to simulate a mind a quadrillion bits large, then you merely need 2^(1,000,000,000,000,000) times as much computing power—a quadrillion factors of 2 - to simulate all possible minds with a quadrillion binary degrees of freedom so defined. Exponentiation lets you completely explore the whole space of which you were previously a single point—and that’s just if you use it for brute force. So going from a lifespan of 10^(10^10) to 10^(10^(10^10)) seems like it ought to be a significant improvement, from a fun-theoretic standpoint.
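
(To spell out the counting: with n binary degrees of freedom there are 2^n configurations, so brute-forcing all of them costs 2^n times the budget for one. A small sketch, worked in logarithms because 2^(10^15) itself has around 3 * 10^14 digits:)

```python
import math

n_bits = 10**15                          # "a quadrillion bits"
log10_configs = n_bits * math.log10(2)   # log10 of 2^n, the number of distinct such minds
print(log10_configs)                     # ~3.01e14, i.e. roughly 10^(3 * 10^14) possible minds
```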

And Omega is offering you this special deal, not for a dollar, not for a dime, but one penny! That’s right! Act now! Pay a penny and go from a 20% probability of dying in an hour and an 80% probability of living 10^10,000,000,000 years, to a 20.00008% probability of dying in an hour and a 79.99992% probability of living 10^(10^10,000,000,000) years! That’s far more factors of ten in your lifespan than the number of quarks in the visible universe raised to the millionth power!
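
(A quick sanity check of the advertised numbers, in Python; the check is mine, not part of Omega’s pitch.)

```python
# The dying-in-an-hour probability after one deal:
p_die = 1 - 0.80 * 0.999999
print(round(p_die, 7))                       # 0.2000008, i.e. "20.00008%"

# The new lifespan of 10^(10^10,000,000,000) years contains 10^10,000,000,000 factors of ten.
# Compare that count with the number of quarks, ~10^80, raised to the millionth power:
log10_factors_of_ten   = 10**10              # log10 of 10^(10^10)
log10_quarks_millionth = 80 * 10**6          # log10 of (10^80)^(10^6)
print(log10_factors_of_ten > log10_quarks_millionth)   # True, by an enormous margin
```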

Is that a penny, friend? - thank you, thank you. But wait! There’s another special offer, and you won’t even have to pay a penny for this one—this one is free! That’s right, I’m offering to exponentiate your lifespan again, to 10^(10^(10^10,000,000,000)) years! Now, I’ll have to multiply your probability of survival by 99.9999% again, but really, what’s that compared to the nigh-incomprehensible increase in your expected lifespan?

Is that an avaricious light I see in your eyes? Then go for it! Take the deal! It’s free!

(Some time later.)

My friend, I really don’t understand your grumbles. At every step of the way, you seemed eager to take the deal. It’s hardly my fault that you’ve ended up with… let’s see… a probability of 1/10^1,000 of living 10^^(2,302,360,800) years, and otherwise dying in an hour. Oh, the ^^? That’s just a compact way of expressing tetration, or repeated exponentiation—it’s really supposed to be Knuth up-arrows, ↑↑, but I prefer to just write ^^. So 10^^(2,302,360,800) means 10^(10^(10^...^10)) where the exponential tower of tens is 2,302,360,800 layers high.
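
(If you want to audit Omega’s bookkeeping, the step count follows from the survival probabilities. A minimal Python sketch, done in logs to avoid underflow; any small mismatch with the quoted figure is just rounding.)

```python
import math

p0 = 0.80             # the survival probability you started with
step = 0.999999       # each deal multiplies the survival probability by this
target_log10 = -1000  # and you ended up with roughly a 1/10^1,000 chance of surviving

# Solve p0 * step**n = 10**target_log10 for the number of deals n.
n = (target_log10 * math.log(10) - math.log(p0)) / math.log(step)
print(round(n))       # ~2,302,360,798 deals, one tower layer apiece,
                      # in line with the 2,302,360,800-layer tower quoted above
```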

But, tell you what—these deals are intended to be permanent, you know, but if you pay me another penny, I’ll trade you your current gamble for an 80% probability of living 10^10,000,000,000 years.

Why, thanks! I’m glad you’ve given me your two cents on the subject.

Hey, don’t make that face! You’ve learned something about your own preferences, and that’s the most valuable sort of information there is!

Anyway, I’ve just received telepathic word from Omega that I’m to offer you another bargain—hey! Don’t run away until you’ve at least heard me out!

Okay, I know you’re feeling sore. How’s this to make up for it? Right now you’ve got an 80% probability of living 10^10,000,000,000 years. But right now—for free—I’ll replace that with an 80% probability (that’s right, 80%) of living 10^^10 years, that’s 10^10^10^10^10^10^10^10^10,000,000,000 years.

See? I thought that’d wipe the frown from your face.

So right now you’ve got an 80% probability of living 10^^10 years. But if you give me a penny, I’ll tetrate that sucker! That’s right—your lifespan will go to 10^^(10^^10) years! That’s an exponential tower (10^^10) tens high! You could write that as 10^^^3, by the way, if you’re interested. Oh, and I’m afraid I’ll have to multiply your survival probability by 99.99999999%.
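
(For concreteness, a tiny Python sketch of what ^^ means, plus this offer’s probability arithmetic. The helper function is mine, not Omega’s, and is only safe to call with very small arguments.)

```python
def tetrate(base, height):
    """base ^^ height: an exponential tower of `height` copies of `base`."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

assert tetrate(2, 3) == 16          # 2^(2^2)
assert tetrate(10, 2) == 10**10     # the familiar 10,000,000,000

# 10^^(10^^10), i.e. 10^^^3, is a tower (10^^10) tens high and hopeless to evaluate,
# but the survival probability for this round is easy to confirm:
print(0.80 * 0.9999999999)          # ~0.79999999992, i.e. 79.999999992%
```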

What? What do you mean, no? The benefit here is vastly larger than the mere 10^^(2,302,360,800) years you bought previously, and you merely have to send your probability to 79.999999992% instead of 10^-1,000 to purchase it! Well, that and the penny, of course. If you turn down this offer, what does it say about that whole road you went down before? Think of how silly you’d look in retrospect! Come now, pettiness aside, this is the real world, wouldn’t you rather have a 79.999999992% probability of living 10^^(10^^10) years than an 80% probability of living 10^^10 years? Those arrows suppress a lot of detail, as the saying goes! If you can’t have Significantly More Fun with tetration, how can you possibly hope to have fun at all?

Hm? Why yes, that’s right, I am going to offer to tetrate the lifespan and fraction the probability yet again… I was thinking of taking you down to a survival probability of 1/(10^^^20), or something like that… oh, don’t make that face at me, if you want to refuse the whole garden path you’ve got to refuse some particular step along the way.

Wait! Come back! I have even faster-growing functions to show you! And I’ll take even smaller slices off the probability each time! Come back!

...ahem.

While I feel that the Repugnant Conclusion has an obvious answer, and that SPECKS vs. TORTURE has an obvious answer, the Lifespan Dilemma actually confuses me—the more I demand answers of my mind, the stranger my intuitive responses get. How are yours?

Based on an argument by Wei Dai. Dai proposed a reductio of unbounded utility functions by (correctly) pointing out that an unbounded utility on lifespan implies willingness to trade an 80% probability of living some large number of years for a 1/(3^^^3) probability of living some sufficiently longer lifespan. I looked at this and realized that there existed an obvious garden path, which meant that denying the conclusion would create a preference reversal. Note also the relation to the St. Petersburg Paradox, although the Lifespan Dilemma requires only a finite number of steps to get us in trouble.