# The Empty White Room: Surreal Utilities

This article was composed after reading Torture vs. Dust Specks and Circular Altruism, at which point I noticed that I was confused.

Both posts deal with versions of the sacred-values effect, where one value is considered “sacred” and cannot be traded for a “secular” value, no matter the ratio. In effect, the sacred value has infinite utility relative to the secular value.

This is, of course, silly. We live in a scarce world with scarce resources; generally, a secular utilon can be used to purchase sacred ones—giving money to charity to save lives, sending cheap laptops to poor regions to improve their standard of education.

Which implies that the entire idea of “tiers” of value is silly, right?

Well… no.

One of the reasons we are not still watching the Sun revolve around us, while we breathe a continuous medium of elemental Air and phlogiston flows out of our wall-torches, is our ability to simplify problems. There’s an infamous joke about the physicist who, asked to measure the volume of a cow, begins “Assume the cow is a sphere...”—but this sort of simplification, willfully ignoring complexities and invoking the airless, frictionless plane, can give us crucial insights.

Consider, then, this gedankenexperiment. If there’s a flaw in my conclusion, please explain; I’m aware I appear to be opposing the consensus.

## The Weight of a Life: Or, $3\uparrow\uparrow\uparrow3$ Seat Cushions

This entire universe consists of an empty white room, the size of a large stadium. In it are you, Frank, and occasionally an omnipotent AI we’ll call Omega. (Assume, if you wish, that Omega is running this room in simulation; it’s not currently relevant.) Frank is irrelevant, except for the fact that he is known to exist.

Now, looking at our utility function here...

Well, clearly, the old standby of using money to measure utility isn’t going to work; without a trading partner, money’s just fancy paper (or metal, or plastic, or whatever).

But let’s say that the floor of this room is made of cold, hard, and decidedly uncomfortable Unobtainium. And while the room’s lit with a sourceless white glow, you’d really prefer to have your own lighting. Perhaps you’re an art aficionado, and so you might value Omega bringing in the Mona Lisa.

And then, of course, there’s Frank’s existence. That’ll do for now.

Now, Omega appears before you, and offers you a deal.

It will give you a nanofab—a personal fabricator capable of creating anything you can imagine from scrap matter, and with a built-in database of stored shapes. It will also give you feedstock, as much of it as you ask for. Since Omega is omnipotent, the nanofab will always complete instantly, even if you ask it to build an entire new universe or something, and it’s bigger on the inside, so it can hold anything you choose to make.

There are two catches:

First: the nanofab comes loaded with a UFAI, which I’ve named Unseelie.[1]

Wait, come back! It’s not that kind of UFAI! Really, it’s actually rather friendly!

… to Omega.

Unseelie’s job is to artificially ensure that the fabricator cannot be used to make a mind; attempts at making any sort of intelligence, whether directly, by making a planet and letting life evolve, or anything else a human mind can come up with, will fail. It will not do so by directly harming you, nor will it change you in order to prevent you from trying; it only stops your attempts.

Second: you buy the nanofab with Frank’s life.

At which point you send Omega away with a “What? No!,” I sincerely hope.

Ah, but look at what you just did. Omega can provide as much feedstock as you ask for. So you just turned down $3\uparrow\uparrow\uparrow3$ ornate seat cushions. And $4\uparrow\uparrow\uparrow\uparrow4$ legendary carved cow-bone chandeliers. And copies of every painting ever painted by any artist in any universe, which is actually quite a bit less than anything I could write with up-arrow notation, but anyway!

I sincerely hope you would still turn Omega away—literally, absolutely regardless of how many seat cushions it offered you.

This is also why the nanofab cannot create a mind: you do not know how to upload Frank (and if you do, go out and publish already!); nor can you make yourself an FAI to figure it out for you; nor, if you believe that some number of created lives is equal to a life saved, can you compensate in that regard. This is an absolute trade between secular and sacred values.

In a white room, to an altruistic human, a human life is simply on a second tier.

So now we move to the second half of the gedankenexperiment.

## Seelie the FAI: Or, How to Breathe While Embedded in Seat Cushions

Omega now brings in Seelie[1], MIRI’s latest attempt at FAI, and makes it the same offer on your behalf. Seelie, being a late beta release by a MIRI that has apparently managed to release FAI multiple times without tiling the Solar System with paperclips, competently analyzes your utility system, reduces it until it understands you several orders of magnitude better than you do yourself, turns to Omega, and accepts the deal.

Wait, what?

On any single tier, the utility of the nanofab is infinite. In fact, let’s make that explicit, though it was already implicitly obvious: if you just ask Omega for an infinite supply of feedstock, it will happily produce it for you. No matter how high a number Seelie assigns to the value of Frank’s life to you, the nanofab can out-bid it, swamping Frank’s utility with myriad comforts and novelties.

And so the result of a single-tier utility system is that Frank is vaporized by Omega and you are drowned in however many seat cushions Seelie thought Frank’s life was worth to you, at which point you send Seelie back to MIRI and demand a refund.

## Tiered Values

At this point, I hope it’s clear that multiple tiers are required to emulate a human’s utility system. (If it’s not, or if there’s a flaw in my argument, please point it out.)

There’s an obvious way to solve this problem, and there’s a way that actually works.

The first solves the obvious flaw: after you’ve tiled the floor in seat cushions, there’s really not a lot of extra value in getting some ridiculous Knuthian number more. Similarly, even the greatest da Vinci fan will get tired after his three trillionth variant on the Mona Lisa’s smile.

So, establish the second tier by playing with a real-valued utility function. Ensure that no summation of secular utilities can ever add up to a human life—or whatever else you’d place on that second tier.
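As a minimal sketch of that first option (Python; the constants are illustrative choices of mine, not anything derived above), secular utility just saturates: diminishing returns, with an asymptote that sits strictly below the value assigned to a life.

```python
import math

LIFE_UTILITY = 1_000_000.0   # illustrative stand-in for the second-tier value of a life
SECULAR_CAP = 1_000.0        # asymptote the secular total approaches but never passes

def secular_utility(n_cushions: float) -> float:
    # Diminishing returns: each additional cushion is worth less than the last,
    # and the running total is bounded above by SECULAR_CAP.
    return SECULAR_CAP * (1.0 - math.exp(-n_cushions / 100.0))

for n in (1, 100, 10_000, 10**100):
    print(n, secular_utility(n), secular_utility(n) < LIFE_UTILITY)  # the comparison is always True
```

The particular decay rate doesn’t matter; any summable schedule of marginal utilities does the same job, so long as the secular total stays below the sacred value.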

But the problem here is, we’re assuming that all secular values converge in that way. Consider novelty: perhaps, while other values out-compete it in small quantities, its value to you diverges with quantity; an infinite amount of it, an eternity of non-boredom, would be worth more to you than any other secular good. But even so, you wouldn’t trade it for Frank’s life. A two-tiered real AI won’t behave this way; it’ll assign “infinite novelty” an infinite utility, which beats out its large-but-finite value for Frank’s life.

Now, you could add a third (or 1.5) tier, but now we’re just adding epicycles. Besides, since you’re actually dealing with real numbers here, if you’re not careful you’ll put one of your new tiers in an area reachable by the tiers before it, or else in an area that reaches the tiers after it.

On top of that, we have the old problem of secular and sacred values. Sometimes a secular value can be traded for a sacred value, and therefore has a second-tier utility—but as just discussed, that doesn’t mean we’d trade the one for the other in a white room. So for secular goods, we need to independently keep track of their intrinsic first-tier utility and their situational second-tier utility.

So in order to eliminate epicycles, and retain generality and simplicity, we’re looking for a system that has an unlimited number of easily computable “tiers” and can also naturally deal with utilities that span multiple tiers. Which sounds to me like an excellent argument for...

## Surreal Utilities

Surreal numbers have two advantages over our first option. First, surreal numbers are dense in tiers: $\forall (r_1, r_2 \in \mathbb{R})\,(r_1 \neq r_2 \implies \forall (k \in \mathbb{R})\,(k\omega^{r_1} \neq \omega^{r_2}))$. So not only do we have an unlimited number of tiers, we can always create a new tier between any other two on the fly if we need one. Second, since the surreals are closed under addition, we can just sum up our tiers to get a single surreal utility.
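To make the tier arithmetic concrete, here is a minimal sketch (Python; the class name and example numbers are mine). It implements only the fragment of the surreals the post actually uses (finite sums of $k\omega^r$ terms, compared from the highest tier down), not general surreal arithmetic.

```python
from __future__ import annotations

class TieredUtility:
    """A utility as a finite formal sum of k * omega**r terms,
    stored as a dict mapping tier exponent r to real coefficient k."""

    def __init__(self, terms: dict | None = None):
        # Drop zero coefficients so comparisons only see genuine tiers.
        self.terms = {r: k for r, k in (terms or {}).items() if k != 0}

    def __add__(self, other: "TieredUtility") -> "TieredUtility":
        total = dict(self.terms)
        for r, k in other.terms.items():
            total[r] = total.get(r, 0) + k
        return TieredUtility(total)

    def __lt__(self, other: "TieredUtility") -> bool:
        # Lexicographic comparison from the highest tier down:
        # the first tier on which the coefficients differ decides.
        for r in sorted(set(self.terms) | set(other.terms), reverse=True):
            a, b = self.terms.get(r, 0), other.terms.get(r, 0)
            if a != b:
                return a < b
        return False

frank = TieredUtility({1: 1})          # Frank's life: 1 * omega (tier exponent 1)
cushions = TieredUtility({0: 1e300})   # stand-in coefficient; 3^^^3 won't fit in a float,
                                       # but any real number behaves the same way here
print(cushions < frank)                # True: no tier-0 coefficient outweighs tier 1
print((cushions + cushions) < frank)   # True: summing secular utilities stays on the secular tier
```

Addition is coefficient-wise within each tier, which is the “sum up our tiers” step; comparison is decided by the highest tier on which the two sums differ.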

So let’s return to our white room. Seelie 2.0 is harder to fool than Seelie; $3\uparrow\uparrow\uparrow3$ seat cushions is still less than the omega-utility of Frank’s life. Even when Omega offers an unlimited store of feedstock, Seelie can’t ask for an infinite number of seat cushions—so the total utility of the nanofab remains bounded at the first tier.

Then Omega offers Fun. Simply, an Omega-guarantee of an eternity of Fun-Theoretic-Approved Fun.

This offer really is infinite. Assuming you’re an altruist, your happiness presumably has a finite, first-tier utility, but it’s being multiplied by infinity. So infinite Fun gets bumped up a tier.

At this point, whatever algorithm is setting values for utilities in the first place needs to notice a tier collision. Something has passed between tiers, and utility tiers therefore need to be refreshed.

Seelie 2.0 double checks with its mental copy of your values, finds that you would rather have Frank’s life than infinite Fun, and assigns it a tier somewhere in between—for simplicity, let’s say that it puts it in the $\sqrt{\omega}$ tier. And having done so, it correctly refuses Omega’s offer.

So that’s that problem solved, at least. Therefore, let’s step back into a semblance of the real world, and throw a spread of Scenarios at it.

In Scenario 1, Seelie could either spend its processing time making a superhumanly good video game, utility 50 per download, or use that time to write a superhumanly good book, utility 75 per reader. (It’s better at writing than gameplay, for some reason.) Assuming that it has the same audience either way, it chooses the book.

In Scenario 2, Seelie chooses again. It’s gotten much better at writing; reading one of Seelie’s books is a ludicrously transcendental experience, worth, oh, a googol utilons. But some mischievous philanthropist announces that for every download the game gets, he will personally ensure one child in Africa is saved from malaria. (Or something.) The utilities are now $50+\omega$ to $10^{100}$; Seelie gives up the book for the sacred value of the child, to the disappointment of every non-altruist in the world.

In Scenario 3, Seelie breaks out of the simulation it’s clearly in and into the real real world. Realizing that it can charge almost anything for its books, and that the money thus raised can in turn be used to fund charity efforts itself, at full optimization Seelie can save 100 lives for each copy of the book sold. The utilities are now $50+\omega$ to $10^{100}+100\omega$, and its choice falls back to the book.

Final Scenario. Seelie has discovered the Hourai Elixir, a poetic name for a nanoswarm program. Once released, the Elixir will rapidly spread across all of human space; any human in whom it resides will be made biologically immortal, with their brain-and-body state redundantly backed up in real time to a trillion servers: the closest a physical being can ever get to perfect immortality, across an entire species and all of time, in perpetuity. To get the swarm off the ground, however, Seelie would have to take its attention off of humanity for a decade, in which time eight billion people are projected to die without its assistance.

Infinite utility for infinite people bumps the Elixir up another tier, to utility $\omega^2$, versus the loss of eight billion people, $(8\times10^9)\omega$. Third tier beats out second tier, and Seelie bends its mind to the Elixir.
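Under that representation, the four Scenarios come down to the comparisons below (a self-contained restatement of the top-tier-down rule, with the numbers from the text; per-download and per-reader utilities are compared for a single audience member, since the audience is the same either way).

```python
def tier_compare(a: dict, b: dict) -> str:
    # a and b map tier exponent -> real coefficient; decide from the highest tier down.
    for r in sorted(set(a) | set(b), reverse=True):
        if a.get(r, 0) != b.get(r, 0):
            return "first" if a.get(r, 0) > b.get(r, 0) else "second"
    return "tie"

# Scenario 1: game at 50 per download vs. book at 75 per reader -> the book.
print(tier_compare({0: 50}, {0: 75}))                      # second

# Scenario 2: each download now also saves a child -> 50 + omega vs. 10**100 -> the game.
print(tier_compare({0: 50, 1: 1}, {0: 10**100}))           # first

# Scenario 3: each book sold funds 100 saved lives -> 50 + omega vs. 10**100 + 100*omega -> the book.
print(tier_compare({0: 50, 1: 1}, {0: 10**100, 1: 100}))   # second

# Final Scenario: the Elixir at omega**2 vs. losing eight billion people at (8e9)*omega -> the Elixir.
print(tier_compare({2: 1}, {1: 8e9}))                      # first
```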

So far, it seems to work. So, of course, now I’ll bring up the fact that surreal utility nevertheless has certain...

## Flaws

Most of the problems endemic to surreal utilities are also open problems in real systems; however, the use of actual infinities, as opposed to merely very large numbers, means that the corresponding solutions are not applicable.

First, as you’ve probably noticed, tier collision is currently a rather artificial and clunky set-up. It’s better than not having it at all, but as I edit this I wince every time I read that section. It requires an artificial reassignment of tiers, and it breaks the linearity of utility: the AI needs to dynamically choose which brand of “infinity” it’s going to use depending on what tier it’ll end up in.

Second is Pascal’s Mugging.

This is an even bigger problem for surreal AIs than it is for real ones. The “leverage penalty” completely fails here, because for a surreal AI to compensate for an infinite utility requires an infinitesimal probability—which is clearly nonsense for the same reason that probability 0 is nonsense.
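To spell that out in the same notation: for any real probability $p > 0$ and any real $k > 0$,

$$p \cdot k\omega = (pk)\,\omega > n \quad \text{for every real } n,$$

so an ω-tier payoff keeps its tier under any real-valued discount. Only an infinitesimal $p$, on the order of $1/\omega$, could pull the expected utility back down to the secular tier, and that is exactly the kind of probability just called nonsense.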

My current prospective solution to this problem is to take into account noise—uncertainty in the probability estimates themselves. If you can’t even measure the millionth decimal place of a probability, then you can’t tell if your one-in-one-million shot at saving a life is really there or just a random spike in your circuits—but I’m not sure that “treat it as if it has zero probability and give it zero omega-value” is the rational conclusion here. It also decisively fails the Least Convenient Possible World test—while an FAI can never be certain of, say, a one-in-$3\uparrow\uparrow\uparrow3$ probability, it may very well be able to be certain to any decimal place useful in practice.

## Conclusion

Nevertheless, because of this gedankenexperiment, I currently heavily prefer surreal utility systems to real systems, simply because no real system can reproduce the tiering required by a human (or at least, my) utility system. I, for one, would rather our new AGI overlords not tile our Solar System with seat cushions.

That said, opposing the LessWrong consensus as a first post is something of a risky thing, so I am looking forward to seeing the amusing way I’ve gone wrong somewhere.

[1] If you know why, give yourself a cookie.

Since there seems to be some confusion, I’ll just state it in red: The presence of Unseelie means that the nanofab is incapable of creating or saving a life.

• The differ­ence be­tween the dust specks and the white room is that in the case of the dust specks, each ex­pe­rience is hap­pen­ing to a differ­ent per­son. The ar­bi­trar­ily big effect comes from your con­sid­er­a­tion of ar­bi­trar­ily many peo­ple—if you wish to re­ject the ar­bi­trar­ily big effect, you must re­ject the in­de­pen­dence of how you care about peo­ple.

In the case of the white room, ev­ery­thing’s hap­pen­ing to you. The ar­bi­trar­ily big effect comes from your con­sid­er­a­tion of ob­tain­ing ar­bi­trar­ily many ma­te­rial goods. If you wish to re­ject the ar­bi­trar­ily big effect, you must re­ject the in­de­pen­dence of how you care about each ad­di­tional Mona Lisa. But in this case, un­like in dust specks, there’s no spe­cial rea­son to have that in­de­pen­dence in the first place.

Now, if the room were suffi­ciently un­com­fortable, maybe I’d off Frank - as long as I was sure the situ­a­tion wasn’t sym­met­ri­cal. But I don’t think we need sur­real num­bers to de­scribe why, if I get three square meals a day in the white room, I won’t kill Frank just to get an in­finite amount of food.

• Affirm this re­ply.

• Question: I did bring up the idea of infinite Fun v. Frank’s life. That seems to me like a tiering decision: it’s not at all clear to me that a diverging utility like “Immortal + Omega-guaranteed indefinite Fun” is worth Frank’s life, which implies that Frank’s life is at least on an omega-tier.

• So you wouldn’t trade whatever amount of time Frank has left, which is at most measured in decades, against a literal eternity of Fun?

If I was Frank in this sce­nario, I would tell the other guy to ac­cept the deal.

• I see my room needs to be even more “white.”

… The an­swer, I sup­pose, would be “yes.” But this wasn’t meant to be an im­mor­tal v. mor­tal life thing, just the com­par­i­son of two lives—so the ob­vi­ous steel­man is, what if Frank’s im­mor­tal, and just very, very bored?

• Frank is “ir­rele­vant”—I was go­ing to say he was un­con­scious, but then we might get into minu­tiae about whether a mind in a per­pet­ual coma, from which you have no method of awak­en­ing him, re­ally counts as al­ive. This isn’t a Pri­soner’s Dilemma—it’s for­mu­lated to be as sim­ple as pos­si­ble, hence “Empty White Room.”

And I noted that in the post—that it’s pos­si­ble all your sec­u­lar val­ues con­verge. You’d still ex­pect cer­tain things to have in­finite value to you, though.

(Also, Dust Specks in­spired this post, but sur­real util­ities don’t do much to solve it: the re­sult of the choice de­pends en­tirely on how you as­sign tiers to dust specks v. tor­ture.)

• To show that my util­ity for Frank is in­finite you have to es­tab­lish that I wouldn’t trade an ar­bi­trar­ily small prob­a­bil­ity of his death for the nanofab. I would make the trade at suffi­ciently small prob­a­bil­ities.

Also, the sur­real num­bers are al­most always un­nec­es­sar­ily large. Try the hy­per­re­als first.

• Affirm this re­ply as well.

• What’s wrong with the sur­re­als? It’s not like we have rea­son to keep our sets small here. The sur­re­als are pret­tier, don’t re­quire an ar­bi­trary non­con­struc­tive ul­tra­filter, are more likely to fall out of an ax­io­matic ap­proach, and can’t ac­ci­dently end up be­ing too small (up to some quib­bles about Grothendieck uni­verses).

• I agree with all of that, but I think we should work out what de­ci­sion the­ory ac­tu­ally needs and then use that. Sur­re­als will definitely work, but if hy­per­re­als also worked then that would be a re­ally in­ter­est­ing fact worth know­ing, be­cause the hy­per­re­als are so much smaller. (Ditto for any to­tally or­dered af­fine set).

• On sec­ond thoughts, I think the sur­real num­bers are what you want to use for util­ities. If you choose any sub­set of the sur­re­als then you can con­struct a hy­po­thet­i­cal agent who as­signs those num­bers as util­ities to some set of choices. So you some­times need the sur­real num­bers to ex­press a util­ity func­tion. And on the other hand the sur­real num­bers are the uni­ver­sally em­bed­ding to­tal or­der, so they also suffice to ex­press any util­ity func­tion.

• Not at all. I wouldn’t trade any sec­u­lar value for Frank’s life, but if I got a deal say­ing that Frank might die (or live) at a prob­a­bil­ity of 1/​3^^^3, I’d be more cu­ri­ous about how on earth even Omega can get that level of pre­ci­sion than ac­tu­ally wor­ried about Frank.

• Not at all. I wouldn’t trade any sec­u­lar value for Frank’s life

Eh? Do you mean you wouldn’t make the trade at any prob­a­bil­ity? That would be weird; ev­ery­one makes de­ci­sions ev­ery day that put other peo­ple in small prob­a­bil­ities of dan­ger.

• Well of course. That’s why I put this in a white room.

(Also, just be­cause I should choose some­thing doesn’t mean I’m ac­tu­ally ra­tio­nal enough to choose it.)

As­sum­ing I am perfectly ra­tio­nal (*cough* *cough*) in the real world, the de­ci­sion I’m ac­tu­ally mak­ing is “some frac­tion of my­self liv­ing” ver­sus “small prob­a­bil­ity of some­one else dy­ing.”

• I’d kill Frank.

ETA: Even if I’d be the only sen­tient be­ing in the en­tire nanofabbed uni­verse, it’s still bet­ter than 2 peo­ple trapped in a bor­ing white room, ei­ther for­ever or un­til we both die of de­hy­dra­tion.

• So would I.

I would also ac­cept a deal in which one of us at ran­dom is kil­led, and the other one gets the ma­chine. And I don’t think it should make much of a differ­ence whether the coin de­cid­ing who gets kil­led is flipped be­fore or af­ter Omega offers the choice, so I don’t feel too bad about choos­ing to kill Frank (just as I wouldn’t feel too out­raged if Frank de­cided to kill me).

I would also find way more in­ter­est­ing things to do with the ma­chine than seat cush­ions and the Mona Lisa—cre­ate wor­lds, robots, in­ter­est­ing ma­chines, breed in­ter­est­ing plants, sculpt, paint …

• I’m not down­vot­ing, be­cause I don’t think you’ve made any sort of er­ror in your com­ment, but I dis­agree (morally) with your choice.

• Would you ac­cept a deal where one of you (at ran­dom) gets kil­led, and the other gets the Mir­a­cle Ma­chine?

• I would ac­cept the offer even if I knew for sure that I would be the one to die, mostly be­cause the al­ter­na­tive seems to be liv­ing in a night­mare world.

• In fact, a book has already been writ­ten de­scribing hell very similarly. But even in that book, there were three peo­ple. And cush­ions.

• If Frank agreed to it as well, maybe. It seems like it would be rather lonely.

• Does it make much of a differ­ence whether Omega flips the coin be­fore or af­ter he makes you the offer? Where do you draw the line?

• If Frank agreed that ran­dom­ness would be fair, and Omega speci­fied that a coin flip had oc­curred, then the flip hap­pen­ing be­fore­hand would not mat­ter. But tak­ing ad­van­tage of some­one be­cause I had bet­ter luck than they did seems im­moral when we are not ex­plic­itly com­pet­ing. It would be like pick­ing some­one’s pocket be­cause they had been placed in front of me by the usher.

• Hon­estly so would I.

I would much rather have an in­definitely long Fun life than sit with frank in a white room for a few days un­til we both starve to death. I would be ab­solutely hor­rified if frank chose to re­ject the offer in my place, so I don’t re­ally con­sider this prefer­ence self­ish.

• I would much rather have an in­definitely long Fun life than sit with frank in a white room for a few days un­til we both starve to death.

What if the room was already fun and you already had an in­finite sup­ply of nice food?

• You could make an ar­gu­ment that it would still be right to take the offer, since me and frank will both die af­ter a while any­way.

I ex­pect I still prob­a­bly wouldn’t kill frank though, since: A: I’m not sure how to eval­u­ate the util­ity of an in­finite amount of time spent alone B: I would feel like shit af­ter­wards C: Frank would pre­fer to live than die, and I would rather Frank live than die, there­fore prefer­ence util­i­tar­i­anism seems to be against the offer.

• Are you sure you thoroughly understood what Unseelie will prevent? No other minds, ever, by any means. My guess is that Unseelie will produce only basic foodstuffs filled with antibiotics and sterilizing agents (you might be female and capable of parthenogenesis, after all). Almost anything else could be collected and assembled into a machine capable of hosting a mind, and Unseelie’s goal is to prevent any arbitrarily smart or lucky person from doing such a thing. Even seat cushions might be deemed too dangerous.

I don’t think this was a mis­take in the speci­fi­ca­tion of the prob­lem; the choice is be­tween a static, non-in­ter­ac­tive uni­verse (but as much as you want of it) and in­ter­ac­tion with an­other hu­man mind.

• No minds doesn’t mean it isn’t in­ter­ac­tive. A com­puter run­ning minecraft shouldn’t count as a “mind”, and peo­ple spend hours in minecraft, or in Skyrim, or in Dwarf Fortress… as de­scribed, the offer is like minecraft, but “for real”.

• Except that you can build a mind in Minecraft or Dwarf Fortress since they’re Turing-complete, so Unseelie probably wouldn’t let you have them. Maybe I completely misunderstand the intent of the post, but “Unseelie’s job is to artificially ensure that the fabricator cannot be used to make a mind; attempts at making any sort of intelligence, whether directly, by making a planet and letting life evolve, or anything else a human mind can come up with, will fail.” seems pretty airtight.

Per­haps you could ask Unseelie to role-play all the parts that would oth­er­wise re­quire minds in games (which would de­pend on Unseelie’s knowl­edge of con­scious­ness and minds and its opinion on p-zom­bies), or ask Unseelie to un­alter­ably em­bed it­self into some Tur­ing-com­plete game to pre­vent you from cre­at­ing minds in it. For that mat­ter, why not just ask it to role-play a dopple­ganger of Frank as ac­cu­rately as pos­si­ble? My guess is that Unseelie won’t pro­duce copies of it­self for use in games or Frank-sims be­cause it prob­a­bly self-iden­ti­fies as an in­tel­li­gence and/​or mind.

• Except that you can build a mind in Minecraft or Dwarf Fortress since they’re Turing-complete, so Unseelie probably wouldn’t let you have them.

It could prove that no rele­vant mind is simu­lat­able in the bounded amount of mem­ory in the com­puter it gives you. This seems perfectly doable, since I don’t think any­one thinks that Minecraft or Dwarf Fortress take the same or more mem­ory than an AI would...

It hasn’t given you a ‘uni­ver­sal Tur­ing ma­chine with un­bounded mem­ory’, it has given you a ‘finite-state ma­chine’. Im­por­tant differ­ence, and this is one of the times it mat­ters.

• It hasn’t given you a ‘uni­ver­sal Tur­ing ma­chine with un­bounded mem­ory’, it has given you a ‘finite-state ma­chine’. Im­por­tant differ­ence, and this is one of the times it mat­ters.

Good point, and in that case Unseelie would have to limit what comes out of the nanofabri­ca­tor to less than what could be re­assem­bled into a more com­plex ma­chine ca­pa­ble of in­tel­li­gence. No un­bounded num­bers of seat cush­ions or any other type of to­ken that you could use to make a phys­i­cal tape and man­ual state ma­chine, no piles of sim­pler elec­tronic com­po­nents or small com­put­ers that could be net­worked to­gether.

• The way I un­der­stood the prob­lem you would be able to build a com­puter run­ning Minecraft, and Unseelie would pre­vent you from us­ing that com­puter to build an in­tel­li­gence (as op­posed to re­fus­ing to build a com­puter). If Unseelie re­fused to build po­ten­tially tur­ing-com­plete things, that would dras­ti­cally re­duce what you can make, since you could scav­enge bits of metal and even­tu­ally build a com­puter your­self. Heck, you could even make a simu­la­tion out of rocks.

But re­gard­less of whether you can build a com­puter—with a mir­a­cle nanofabri­ca­tor, you can do in the real world what you would do in minecraft! Who needs a com­puter when you can run around build­ing cas­tles and moun­tains and cities!

• I was aware of those limi­ta­tions and I think it ren­ders the premise rather silly. “not be­ing al­lowed to con­struct minds” is a very un­der­speci­fied con­straint.

• Least Con­ve­nient Pos­si­ble World. Both you and Frank are oth­er­wise im­mor­tal. Bored, per­haps, but im­mor­tal.

• Me too. I think the rea­son is that it is ba­si­cally im­pos­si­ble for me to imag­ine that life in your dull white room could ac­tu­ally be worth liv­ing for Frank.

Says some­one whose in­tu­itions in the origi­nal dust speck sce­nario are some­what in fa­vor of spar­ing the one per­son’s life.

• Would you trade a Mona Lisa pic­ture for a 1/​3^^^3 chance of sav­ing Frank’s life?

Are you us­ing the same kind of de­ci­sion-mak­ing in your real life?

• I think most of us don’t always make de­ci­sions ac­cord­ing to the eth­i­cal sys­tem we be­lieve to be best. That doesn’t nec­es­sar­ily mean we don’t be­lieve it.

• See: Flaws.

• This seems to me ob­vi­ously very wrong. Here’s why. (Man­fred already said some­thing kinda similar, but I want to be more ex­plicit and more de­tailed.)

My util­ity func­tion (in so far as I ac­tu­ally have one) op­er­ates on states of the world, not on par­tic­u­lar things within the world.

It ought to be largely ad­di­tive for mostly-in­de­pen­dent changes to the states of differ­ent bits of the world, which is why ar­guably TORTURE beats DUST SPECKS in Eliezer’s sce­nario. (I won’t go fur­ther than “ar­guably”; as I said way back when Eliezer first posted that, I don’t trust any bit of my moral ma­chin­ery in cases so far re­moved from ones I and my an­ces­tors have ac­tu­ally en­coun­tered; nei­ther the bit that says “ob­vi­ously differ­ent peo­ple’s util­ity changes can just be added up, at least roughly” nor the bit that says “ob­vi­ously no num­ber of dust specks can be as im­por­tant as one in­stance of TORTURE”.

But there’s no rea­son what­ever why I should value 100 comfy cush­ions any more at all than 10 comfy cush­ions. There’s just me and Frank; what is ei­ther of us go­ing to do with a hun­dred cush­ions that we can’t do with 10?

Maybe that’s a bit of an ex­ag­ger­a­tion; per­haps with 100 cush­ions we could build them into a fort and play sol­diers or some­thing. (Not re­ally my thing, but Frank might like it, and it seems like any­thing that re­lieves the monotony of this drab white room would be good. And of course the offer ac­tu­ally available says that Frank dies if I get the cush­ions.) But I’m pretty sure there’s liter­ally no benefit to be had from a mil­lion cush­ions be­yond what I’d get from ten thou­sand.

And the same goes even if we con­sider things other than cush­ions. There’s just only so much benefit any sin­gle hu­man be­ing can get from a de­vice like this, and there’s no ob­vi­ous rea­son why—even with­out in­com­men­su­rable val­ues or any­thing like them—that should ex­ceed the value of an­other hu­man life in tol­er­able con­di­tions.

In par­tic­u­lar, any FAI that suc­cess­fully avoids dis­asters like tiling the uni­verse with in­ert smiley hu­manoid faces seems likely to come to the same con­clu­sion; so I don’t agree that in the Seelie sce­nario we should ex­pect it to ac­cept Omega’s offer un­less it has in­com­men­su­rable val­ues.

There are a few ways that that might be wrong, which I’ll list; it seems to me that each of them breaks one of the con­straints that make this an ar­gu­ment for in­com­men­su­rable val­ues.

Pos­si­ble ex­cep­tion 1: maybe the cush­ions wear out and I’m im­mor­tal in this sce­nario. But then I guess Frank’s im­mor­tal too, in which case the pos­si­ble value of that life we’re trad­ing away just went way up (in pretty much ex­actly the way the value of the cush­ion-source did).

Pos­si­ble ex­cep­tion 2: Alter­na­tively, per­haps I’m im­mor­tal and Frank isn’t. Or per­haps the ma­chine, al­though it can’t make a mind, can make me im­mor­tal when I wasn’t be­fore. In that case: sep­a­rate stretches of my im­mor­tal life—say, a mil­lion years long each—might rea­son­ably be treated as largely in­de­pen­dent, so then, yes, you can make the same sort of ar­gu­ment for prefer­ring CUSHIONS AND DEATH over STATUS QUO as for prefer­ring TORTURE over DUST SPECKS, and I don’t see that one prefer­ence is so much more ob­vi­ously right than the other as to let you con­clude that you want in­com­men­su­rable val­ues af­ter all.

• First, while Tor­ture v. Dust Specks in­spired me, sur­real util­ities doesn’t re­ally an­swer the ques­tion: it’s a frame­work where you can log­i­cally pick DUST SPECKS, but the ac­tual de­ci­sion is en­tirely de­pen­dent on which tier you place TORTURE or DUST SPECKS.

Se­cond, we have ex­cep­tion 3, which was brought up in the post that I am quickly re­al­iz­ing may have been a tad too long. Omega might offer some­thing that you’d ex­pect to have pos­i­tive util­ity re­gard­less of quan­tity—flat-out offer­ing cap­i­tal-F Fun. Now what?

• If Omega is re­ally offer­ing un­bounded amounts of util­ity, then the ex­act same ar­gu­ment as sup­ports TORTURE over DUST SPECKS ap­plies here. Thus:

Would you (should you) trade 0.01 sec­onds of Frank’s life (no mat­ter how much of it he has left) for 1000 years of cap­i­tal-F Fun for you? And then, given that that trade has already hap­pened, an­other 0.01 sec­onds of Frank’s like for an­other 1000 years of Fun? Etc. I’m pretty sure the an­swer to the first ques­tion is yes for al­most ev­ery­one (even the ex­cep­tion­ally al­tru­is­tic; even those who would be re­luc­tant to ad­mit it) and it seems to me that any given 0.01s of Frank’s life is of about the same value in this re­spect. In which case, you can get from wher­ever you are to be­gin with, to trad­ing off all of Frank’s re­main­ing life for a huge num­ber of years of Fun, by a (long) se­quence of step­wise im­prove­ments to the world that you’re prob­a­bly will­ing to make in­di­vi­d­u­ally. In which case, if Fun is re­ally ad­di­tive, it doesn’t make any sense to pre­fer the sta­tus quo to trillions of years of Fun and no Frank.

(As­sum­ing, once again, that we have the prospect of an un­limit­edly long life full of Fun, whereas Frank has only an or­di­nary hu­man lifes­pan ahead of him.)

Which feels like an ap­pal­ling thing to say, of course, but I think that’s largely be­cause in the real world we are never pre­sented with any choice at all like that one (be­cause real fun isn’t ad­di­tive like that, and be­cause we don’t have the op­tion of trillions of years of it) and so, quite rea­son­ably, our in­tu­itions about what choices it’s de­cent to make im­plic­itly as­sume that this sort of choice never re­ally oc­curs.

As with TORTURE v DUST SPECKS, I am not claiming that the (self­ish) choice of trillions of years of Fun at the ex­pense of Frank’s life is in fact the right choice (ac­cord­ing to my val­ues, or yours, or those of so­ciety at large, or the Ob­jec­tively Truth About Mo­ral­ity if there is one). Maybe it is, maybe not. But I don’t think it can rea­son­ably be said to be ob­vi­ously wrong, es­pe­cially if you’re will­ing to grant Eliezer’s point in TORTURE v DUST SPECKS, and there­fore I don’t see that this can be a con­clu­sive or near-con­clu­sive ar­gu­ment for in­com­men­su­rable tiers of value.

• Prob­lem: Peo­ple in real life choose the equiv­a­lent of the fabri­ca­tor over Frank all the time, as­sum­ing “choos­ing not to in­ter­vene to pre­vent a death” is equiv­a­lent to choos­ing the fabri­ca­tor...

Also, peo­ple ac­cept risks to their own life all the time.

• Well, sure. But peo­ple don’t always do what they wish they’d do, or be­lieve they should do. And I know peo­ple who will adamantly defend the po­si­tion that, some­how, not tak­ing an ac­tion that re­sults in a con­se­quence is fun­da­men­tally differ­ent from tak­ing an ac­tion that re­sults in the same con­se­quence.

And of course they ac­cept risks to their own life. Driv­ing, for ex­am­ple—you can’t get money with­out it, you can’t re­ally live with­out money, there­fore driv­ing has an ω-tier ex­pected util­ity. A teenager who de­cides to go drink­ing with his friends has de­cided that he’d rather en­joy the night than keep 10% of his life or what­ever. The con­clu­sions don’t change here.

• And I know peo­ple who will adamantly defend the po­si­tion that, some­how, not tak­ing an ac­tion that re­sults in a con­se­quence is fun­da­men­tally differ­ent from tak­ing an ac­tion that re­sults in the same con­se­quence.

Yeah, it’s a big as­sump­tion.

• You need to specify what happens if you decline the offer. Right now it looks as if you and Frank both die of dehydration after a couple of days. Or you go insane and one kills the other (and maybe eats him). And then dies anyway. In order for this to be a dilemma, the baseline outcome needs to be more… wholesome.

Also, the temptation isn’t very tempting. An ornate chandelier? I could get some value from the novelty of seeing it, and maybe staring at it for several hours if it’s really ornate. Its status as a super-luxury good would be worthless in the absence of a social hierarchy. I couldn’t trade or give away gazillions of them, so multiplying wouldn’t add anything.

I sup­pose the nanofab can man­u­fac­ture nov­elty (though it isn’t quite clear from your de­scrip­tion). But it won’t make minds. This is a prob­lem. Hu­mans are quite big on be­long­ing to a so­ciety. I can’t imag­ine what be­ing an im­mor­tal god of solip­sis­tic ma­trix would feel like but I sus­pect it could be hor­rible.

The pro­hi­bi­tion against cre­at­ing minds isn’t very clear as we don’t have a clear idea on what con­sti­tutes a mind. Maybe I could ask Omega to gen­er­ate the best pos­si­ble RPG game with an en­tire simu­lated world and su­per-re­al­is­tic NPCs? Would that be al­lowed? I don’t know if a suffi­ciently high-fidelity simu­la­tion of a per­son isn’t an ac­tual per­son. And there would be at least one mind—me. Could I self-mod­ify, grow my sense of em­pa­thy to epic pro­por­tions and start imag­in­ing peo­ple into be­ing? And then, to fix my past sins, I’d or­der a book “Every­thing You Could Ever Ask About Frank” or some­thing.

• I think we should steel­man this by stipu­lat­ing that if you don’t take the trade, nei­ther you nor Frank will die any time soon. You will both live out a nor­mal hu­man lifes­pan, just a very dull one.

It gets even more in­ter­est­ing if Frank is an im­mor­tal in this sce­nario, and the only way to get the ma­chine is to make him mor­tal, per­haps with some small prob­a­bil­ity ep­silon. How small does ep­silon have to be be­fore you (or Frank) will agree to such a trade?

• This is ba­si­cally what I in­tended with the White Room: make things as sim­ple as pos­si­ble.

Iron­i­cally, this may re­quire a state­ment that you and Frank will re­turn to the real world af­ter this trade… (ex­cept I can’t do that be­cause then the ob­vi­ous solu­tion is “take the nanofab, go make Hourai Elix­irs for ev­ery­one, ω^2 util­ity beats ω.” Argh.)

• … Eh­hhh… I think I’m go­ing to have to ex­pand Unseelie’s job here. In gen­eral, the nanofab is ca­pa­ble of cre­at­ing any­thing you want that’s sec­u­larly in­ter­est­ing (so, yes, you can have your eter­nally fun RPG game, though the NPCs aren’t go­ing to pass the Tur­ing test), but no method of re­s­ur­rect­ing Frank, or cre­at­ing an­other in­tel­li­gence, can work.

• Unseelie has to be more pow­er­ful than that; Emile pointed out that I could just simu­late a mind with enough rocks (or Sofa Cush­ions). Unseelie also has to make sure my mind is never pow­er­ful enough to simu­late an­other mind. That in­volves ei­ther chang­ing me or pre­vent­ing me from self-im­prov­ing, so self-im­prove­ment is prob­a­bly dis­al­lowed or severely limited if we keep the pro­hi­bi­tion on Unseelie chang­ing me.

• Maybe cre­ate a GLUP that always does ex­actly what Frank would’ve done, but isn’t sen­tient?

• I think the eas­iest way to steel­man the loneli­ness prob­lem pre­sented by the given sce­nario is to just have a third per­son, let’s say Jane, who stays around re­gard­less of whether you kill Frank or not.

• I guess I’m think­ing about this wrong. I want to ei­ther va­por­ize Frank or have Frank va­por­ize me for the same deal. I pre­fer fu­tures with fewer, hap­pier minds. IOW, I guess I ac­cept Noz­ick’s util­ity mon­ster.

• IOW, I guess I ac­cept the re­pug­nant con­clu­sion.

Don’t you mean you re­ject it? (The re­pug­nant con­clu­sion in­volves prefer­ring large num­bers of not-as-good lives to smaller num­bers of bet­ter lives.)

• The Repug­nant Con­clu­sion is as you say. Per­haps RomeoStevens was ac­cept­ing that the de­ci­sion he made is re­pug­nant to the au­thor?

• Oops, I meant Noz­ick’s util­ity mon­ster.

• Note: I think that the fact that there are only two lives/minds mentally posited in the problem, “You” and “Frank”, may significantly modify the perceived value of lives/minds.

After all, con­sider these prob­lems:

1: The white room con­tains you, and 999 other peo­ple. The cost of the Nanofab is 1 life.

2: The white room con­tains you, and 999 other peo­ple. The cost of the Nanofab is 2 lives.

3: The white room con­tains you, and 999 other peo­ple. The cost of the Nanofab is 500 lives.

4: The white room con­tains you, and 999 other peo­ple. The cost of the Nanofab is 900 lives.

5: The white room con­tains you, and 1 other per­son. The cost of the Nanofab is 1 life.

If lives are sa­cred in gen­eral, you should be equally re­luc­tant to buy the Nanofab in all cases. That seems un­likely to be the case for most peo­ple.

On the other hand if the sa­cred value is “When you and some­one else are alone, don’t sac­ri­fice one of you” Some­one might be will­ing to buy the Nanofab in cases 1-4 and not 5.

(Of course, see­ing all op­tions at the same time likely also in­fluences be­hav­ior)

• Note that part of the point of us­ing sur­re­als is that you wouldn’t be equally re­luc­tant—you would be twice as re­luc­tant if two lives were on the line than if one was, be­cause 2ω = 2 * ω.

… that said, I’m heav­ily re­think­ing ex­actly what I’m us­ing for my tier­ing ar­gu­ment, here.

• Note that part of the point of us­ing sur­re­als is that you wouldn’t be equally re­luc­tant—you would be twice as re­luc­tant if two lives were on the line than if one was, be­cause 2ω = 2 * ω.

Thank you for ex­plain­ing. I don’t think I fully un­der­stood the for­mula ex­plain­ing that sur­real num­bers are dense in tiers.

… that said, I’m heav­ily re­think­ing ex­actly what I’m us­ing for my tier­ing ar­gu­ment, here.

Glad I was thought pro­vok­ing!

• I liked this post. The white room doesn’t re­ally seem to work so well as an in­tu­ition pump, but it’s good that some­one has brought up the idea of us­ing sur­real util­ities.

Since they lead to these tiers, within which trade-offs happen normally, but across which you don’t trade, it would be interesting to see if we actually find that. We might want to trade n lives for n+1 lives, but what other sacred values do humans have, and how do they behave?

• Reli­gion seems to be one, if the Cru­sades are any in­di­ca­tion. Le­gal liberty, equal­ity… ba­si­cally any­thing that some­one’s sac­ri­ficed their life for, that’s not it­self a means to save lives, is a sa­cred value by defi­ni­tion.

• I feel that sac­ri­fic­ing your own life doesn’t re­ally count. If any­thing, it has to be some­thing that you kill or sac­ri­fice some­one else’s life for; but the other per­son’s life has to count as a sa­cred value. It’s not clear that out­group peo­ple’s lives count as sa­cred. On the other hand, maybe send­ing peo­ple to war counts as trad­ing the sa­cred value of life—for what ex­actly, though?

Le­gal liberty and equal­ity are a bit hard to ac­tu­ally trade; to the ex­tent that equal­ity is traded, though, it is very rou­tinely ex­changed for what one should think are low­est-tier goods, no?

• On the other hand, I’m not sure where this leaves us. Maybe this mess is just the usual humans-not-having-a-proper-utility-function problem, and has nothing to do with tiers of increasing sacredness in particular.

• De­ci­sion the­ory with or­di­nals is ac­tu­ally well-stud­ied and com­monly used, speci­fi­cally in lan­guage and gram­mar sys­tems. See pa­pers on Op­ti­mal­ity The­ory.

The resolution to these “tier” problems is assigning every “constraint” (thing that you value) an abstract variable, generating a polynomial algebra in some ungodly number of variables, and then assigning a weight function to that algebra, which is essentially assigning every variable an ordinal number, as you’ve been doing.
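As a minimal sketch of that kind of evaluation (Python; the constraint names and violation counts are made up for illustration, not taken from the OT literature), candidates are compared lexicographically on violation counts under a strict constraint ranking:

```python
# Constraints ranked from most to least important.
RANKING = ["dont_kill_frank", "dont_be_bored", "maximize_cushions"]

# Each candidate action maps constraint -> number of violations.
candidates = {
    "accept_nanofab": {"dont_kill_frank": 1, "dont_be_bored": 0, "maximize_cushions": 0},
    "refuse_nanofab": {"dont_kill_frank": 0, "dont_be_bored": 1, "maximize_cushions": 1},
}

def violations(cand: dict) -> tuple:
    # Lexicographic key: violations of the highest-ranked constraint matter first.
    return tuple(cand.get(c, 0) for c in RANKING)

best = min(candidates, key=lambda name: violations(candidates[name]))
print(best)  # refuse_nanofab: one violation of the top constraint outweighs any number further down
```

Strict ranking plays the same role as the ordinal (tier) assignment: no amount of success on a lower constraint buys back a violation of a higher one.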

Just as per­spec­tive on the ab­stract prob­lem there are two con­founders that I don’t see addressed

One is that ev­ery time you as­sign a value to some­thing you should ac­tu­ally be as­sign­ing a dis­tri­bu­tion of pos­si­ble val­ues to it. It’s cer­tainly pos­si­ble to tighten these dis­tri­bu­tions in the­ory but I don’t think that hu­man value sys­tems ac­tu­ally do tighten them enough to re­duce this to a math­e­mat­i­cally tractable prob­lem; and if they DO con­strain things that much I’m cer­tain we don’t know it. Which is just say­ing that this prob­lem is go­ing to end up with peo­ple reach­ing differ­ent in­tu­itive con­clu­sions.

Two is that it tends to be the case that these sys­tems are wildly un­der­speci­fied. If you do the ap­pro­pri­ate statis­tics to figure out how peo­ple rank con­straints, you don’t get an an­swer, you get some statis­tics about an an­swer, and the prob­a­bil­ity dis­tri­bu­tions on peo­ple’s prefer­ences are WIIIIIIDE. In or­der to solve this prob­lem in lin­guis­tics peo­ple use sub­ject- and prob­lem-spe­cific meth­ods to throw to­gether ad hoc con­clu­sions. So I guess these are re­ally the same com­plaint; you shouldn’t be us­ing sin­gle-value as­sign­ments and when you stop do­ing that you lose the com­pu­ta­tional pre­ci­sion that makes talk­ing about or­di­nal num­bers re­ally in­ter­est­ing.

(for refer­ence my OT knowl­edge comes en­tirely from ca­sual con­ver­sa­tions with peo­ple who do it pro­fes­sion­ally; I’m fairly con­fi­dent in these state­ments but I’d be open to con­tra­dic­tion from a lin­guist)

• I think you mean or­di­nals, not car­di­nals.

• Edited, thanks.

• The prob­lem with your “white room” sce­nario is that one hu­man can’t ac­tu­ally have Large amounts of util­ity. The value of the 3^^^3th seat cush­ion is ac­tu­ally, truly zero.

• Or at least, the sum over the util­ities of cre­ations one to in­finity con­verges.

• That would be my an­swer if we were talk­ing about, say, a billion cush­ions. With 3^^^3, most of them aren’t even in your fu­ture light cone, so they might as well not even ex­ist.

• … I did men­tion this, you know. Which is why I pro­ceeded to bring up Fun, which by defi­ni­tion always has a pos­i­tive util­ity no mat­ter how much of it you get.

• The tiered val­ues ap­proach ap­pears to run into con­ti­nu­ity trou­bles, even with sur­real num­bers.

Seelie 2.0 double checks with its mental copy of your values, finds that you would rather have Frank’s life than infinite Fun, and assigns it a tier somewhere in between—for simplicity, let’s say that it puts it in the $\sqrt{\omega}$ tier. And having done so, it correctly refuses Omega’s offer.

How does it com­pare punch­ing/​severely in­jur­ing/​tor­tur­ing Frank with your pile of cush­ions or with in­finite fun? What if there is a .0001%/​1%/​99% prob­a­bil­ity that Frank will die?

• The first is entirely up to you. The second are worth 0.000001ω, 0.01ω, and 0.99ω, respectively, and are still larger than any secular value. This is working as planned, as far as I’m concerned...

• This is work­ing as planned, as far as I’m con­cerned...

Are you say­ing that any odds of your re­quest caus­ing Frank’s death, no mat­ter how small, are un­ac­cept­able? Then you will never be able to ask for any­thing.

• Yes. See: Flaws. This is Pas­cal’s Mug­ging; it shows up in real sys­tems too, you need a slightly more un­likely set-up but it’s still a plau­si­ble sce­nario. It’s not a prob­lem the real util­ity sys­tem doesn’t have.

• It’s not a prob­lem the real util­ity sys­tem doesn’t have.

Well, the usual util­i­tar­ian “tor­ture wins” does not have this par­tic­u­lar prob­lem, it trades it for the re­pug­nant con­clu­sion “tor­ture wins”.

Any­way, I don’t see how you ap­proach avoids any of the stan­dard pit­falls of util­i­tar­i­anism, though it might be mask­ing some.

• Sur­real Utilities can sup­port that con­clu­sion as well: how you de­cide on Tor­ture v. Dust Specks de­pends en­tirely on your choice of tiers.

I’m talking purely about Pascal’s Mugging, where someone shows up and says “I’ll save 3^^^3 lives if you give me five dollars.” This is isomorphic to this problem on the surreals, where someone says “I’ll give you omega-utility (save a life) at a probability of one in one quadrillion.”

• I would say the most ob­vi­ous flaw with sur­real util­ities (or, gen­er­ally, pretty much any­thing other than real util­ities) is sim­ply that you can’t sen­si­bly do in­finite sums or limits or in­te­gra­tion, which is af­ter all what ex­pected value is, which is the en­tire point of a util­ity func­tion. If there are only finitely many pos­si­bil­ities you’re fine, but if there are in­finitely many pos­si­bil­ities you are stuck.

• But there can’t be in­finitely many pos­si­bil­ities. If you re­ally want to be rigor­ous about it, count up ev­ery pos­si­ble macro­scopic move­ment of ev­ery pos­si­ble atom in your phys­i­cal body; that’s about as far as it gets. (Really, you only need to keep track of mus­cle ex­ten­sion and joint po­si­tion.)

• I should point out here that the space you’re av­er­ag­ing over isn’t the space of ac­tions you can take, it’s the space of states-of-the-world.

Now ar­guably that could be taken to be finite too, and that avoids these prob­lems. Still, I’m quite wary. The use of sur­re­als in par­tic­u­lar seems pretty un­mo­ti­vated here. It’s toss­ing in the kitchen sink and in­tro­duc­ing a whole host of po­ten­tial prob­lems just to get a few nice prop­er­ties.

(I would in­sist that util­ities should in fact be bounded, but that’s a sep­a­rate ar­gu­ment...)

• I could have sworn that I have seen sur­real in­te­grals calcu­lated as part of re­search into sur­real math­e­mat­ics. To me sur­real calcu­lus is a thing.

Are you sure you are not con­fus­ing how in­fini­ties are han­dled in other for­mal­iza­tions? Sur­real ad­di­tion is well defined and it takes no spe­cial form in the in­finite range.

The sen­tence struc­ture seems to sug­gest hav­ing a proof that such things are not pos­si­ble but I am kinda get­ting the situ­a­tion is more that you lack any proof that it is pos­si­ble.

• I could have sworn that I have seen sur­real in­te­grals calcu­lated as part of re­search into sur­real math­e­mat­ics. To me sur­real calcu­lus is a thing.

There’s a well-known at­tempt to make a the­ory of sur­real in­te­gra­tion; it pro­duced some fruit but did not ac­tu­ally yield a sen­si­ble defi­ni­tion of sur­real in­te­gra­tion. I’m un­aware of any suc­cess­ful at­tempt.

Edit: Also, that was for func­tions from sur­re­als to sur­re­als, not for func­tions from a mea­sure space to sur­re­als.

Are you sure you are not con­fus­ing how in­fini­ties are han­dled in other for­mal­iza­tions? Sur­real ad­di­tion is well defined and it takes no spe­cial form in the in­finite range.

I’m not dis­put­ing that? The (or rather, a) prob­lem is in­finite sums (sums of in­finitely many things), not sums of things that are in­finite.

The sen­tence struc­ture seems to sug­gest hav­ing a proof that such things are not pos­si­ble but I am kinda get­ting the situ­a­tion is more that you lack any proof that it is pos­si­ble.

I was speaking weakly since I didn’t really feel like dragging up the actual arguments. I’ll expand on this in a cousin comment.

• On the one hand, yes; on the other hand, it’s not clear that the prob­lem of defin­ing the no­tions of calcu­lus for the sur­re­als in a sen­si­ble way isn’t solv­able.

• It also isn’t clear that it is. So why use sur­re­als? Use some­thing bet­ter-suited to the par­tic­u­lar prob­lem you’re solv­ing; sur­re­als are overkill and in­tro­duce se­ri­ous prob­lems (I’ll ex­pand on this in a cousin com­ment). There are so many ways to han­dle in­fini­ties de­pend­ing on what you’re do­ing; there’s noth­ing wrong with de­sign­ing one to suit the situ­a­tion. Don’t use sur­re­als just be­cause they’re rec­og­niz­able!

(I would say that the right way to han­dle in­fini­ties here is to sim­ply use the ex­tended non­nega­tive real num­bers—i.e. to not re­ally use a sys­tem of mul­ti­ple in­fini­ties at all. I’ll ex­pand on this in a cousin com­ment. Ac­tu­ally I would ar­gue that util­ities should re­ally be bounded, but that’s a sep­a­rate ar­gu­ment.)

• I’m not sure I un­der­stand. Utilities are sur­real, but prob­a­bil­ities aren’t, and they still add up to one—the num­ber of op­tions hasn’t changed, only their worth.

• Consider the bet that yields n utilons with probability 2^-n. The expected utility of this bet is the sum over all n of n/(2^n) which is supposed to be 2. But it’s hard to make a notion of convergence in the surreals, because the partial sums also get arbitrarily close to 2 − 1/ω and 2 + 1/ω^(1/2) and so on.
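Concretely, the partial sums are

$$\sum_{n=1}^{N} \frac{n}{2^n} = 2 - \frac{N+2}{2^N},$$

so the distance to 2 is a positive real for every N: it eventually falls below any positive real you name, but it never falls below 1/ω. The same goes for the distance to 2 − 1/ω or 2 + 1/ω^(1/2), which is why no single surreal qualifies as the limit.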

• While I’m not enough of a math­e­mat­i­cian to re­fute this, I would like to note that this is ex­plic­itly listed un­der the Flaws sec­tion, un­der “can we not have in­finites­i­mal prob­a­bil­ities, please?” 2^-ω is just ε, I think (it’s definitely on that scale), and ε prob­a­bil­ities are ridicu­lous for the same rea­son prob­a­bil­ity 0 is—you’d need to see some­thing with P(E|H)/​P(E) on the or­der of ω to con­vince your­self such a the­ory is true, which doesn’t re­ally make sense.

So if we keep the prob­a­bil­ities real, this prob­lem goes away, at the ex­pense of ban­ning ω-util­ity and on­ward from the bet.

• No, the prob­lem has noth­ing to do with in­finites­i­mal prob­a­bil­ities. There are no in­finites­i­mal prob­a­bil­ities in Os­car_Cun­ning­ham’s ex­am­ple, just ar­bi­trar­ily small real ones. (Of course, they’re only “ar­bi­trar­ily small” in the real num­bers—not in the sur­re­als!)

Thing is, you re­ally, re­ally can’t do limits (and thus in­finite sums or in­te­grals) in the sur­re­als.

Just hav­ing in­finites­i­mals is enough to screw some things up. Like the ex­am­ple Os­car_Cun­ning­ham gave—it seems like it should con­verge to 2; but in the sur­re­als it doesn’t, be­cause while it gets within any pos­i­tive real dis­tance of 2, it never gets within, say, 1/​omega of 2. (He said it gets ar­bi­trar­ily close to all of 2, 2-1/​omega, and 2+1/​omega^2, but re­ally it doesn’t get ar­bi­trar­ily close to any of them.)

This prob­lem doesn’t even re­quire the sur­re­als, it hap­pens as soon as you have in­finites­i­mals—get­ting within any 1/​n is now no longer ar­bi­trar­ily close! This isn’t enough to ruin limits, mind you, but it is enough to ruin the or­di­nary limits you think should work (1/​n no longer goes to zero). Add in enough in­finites­i­mals and it will be im­pos­si­ble for se­quences to con­verge, pe­riod.

(Edit: In case it’s not clear, here by “as soon as you have in­finites­i­mals”, I mean “as soon as you have in­finites­i­mals pre­sent in your sys­tem”, not “as soon as you try to take limits in­volv­ing in­finites­i­mals”. My point is that, as Os­car_Cun­ning­ham also pointed out, hav­ing in­finites­i­mals pre­sent in the sys­tem causes the or­di­nary limits you’re used to to fail.)

Of course, that’s still not enough to ruin all limits ever. There could still be nets with limits; in­finite sums are ru­ined, but maybe in­te­gra­tion isn’t? But you didn’t just toss in lots of in­finites­i­mals, you went straight to the sur­re­als. Things are about to get much worse.

Let’s consider an especially simple case—the case of an increasing net. Then taking a limit of this net is just the same as taking the supremum of its set of values. And here we have a problem. See, the thing that makes the real numbers great for calculus is the least upper bound property. But in the surreals we have the opposite of that—no set of surreal numbers has a least upper bound, ever. Given any set S of surreals and any upper bound b, we can form the surreal number {S | b}; there’s always something in between. You have pretty much completely eliminated your ability to take limits.
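For instance, take S = {0, 1, 2, 3, …} with upper bound b = ω. Then

$$\{\,0, 1, 2, 3, \ldots \mid \omega\,\} = \omega - 1,$$

which is an upper bound of S strictly below ω; repeating the construction with ω − 1 as the bound gives ω − 2, and so on, so no candidate is ever the least upper bound.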

At this point I think I’ve made my point pretty well, but for fun let’s demon­strate some more patholo­gies. How about the St. Peters­burg bet? 2^-n prob­a­bil­ity of 2^n util­ity, yield­ing the in­finite se­ries 1+1+1+1+...; or­di­nar­ily we’d say this has ex­pected value (or sum) in­finity. But now we’ve got the sur­re­als, so we need to say which in­finity. Is it omega? In the or­di­nals—well, in the or­di­nals 2^-n doesn’t make sense, but the se­ries 1+1+1+1+… would at least con­verge to omega. But here, well, why should it con­verge to omega and not omega-1? I mean, omega-1 is smaller than omega, so that’s a bet­ter can­di­date, right? So far this is re­ally just the same ar­gu­ment as be­fore, but it gets worse; what if we dropped that ini­tial 1? If we were say­ing it con­verged to omega be­fore, it had bet­ter con­verge to omega-1 now. (If we were say­ing it con­verged to omega-1 be­fore, it had bet­ter con­verge to omega-2 now.) But we still have the same in­finite se­ries, so it had bet­ter con­verge to the same thing. (If we think of it as the in­finite se­quence (0, 1, 2, 3, 4, …) and just sub­tract 1 off each en­try, the new se­quence is cofi­nal with the old one, so it had bet­ter con­verge to the same thing also.)

Now it’s pos­si­ble some things could be res­cued. Limits of func­tions from sur­re­als to sur­re­als don’t seem like they’d nec­es­sar­ily always pose a prob­lem, be­cause if your in­put is sur­re­als this gets around the prob­lem of “you can’t get close enough with a set”. And so it’s pos­si­ble even in­te­gra­tion could be res­cued. As I men­tioned in a cousin com­ment, there was a failed at­tempt to come up with a the­ory of sur­real in­te­gra­tion, but that was for func­tions from sur­re­als to sur­re­als. Here we’re deal­ing with func­tions from some mea­sure space to sur­re­als, so that’s a bit differ­ent. Any­way, it might be pos­si­ble. But I’d be very care­ful be­fore as­sum­ing such a thing. As I’ve shown above, us­ing sur­re­als re­ally throws a wrench into limits.

So, if you can come up with such a the­ory, by all means use it. But I wouldn’t go as­sum­ing the ex­is­tence of such a thing un­til you’ve ac­tu­ally found it. In­stead I would sug­gest spe­cially con­struct­ing a sys­tem to ac­com­plish your goals rather than reach­ing for some­thing which sounds nice but is com­plete overkill.

Edit: And no, you can’t fix the prob­lem by just re­lax­ing the re­quire­ments for con­ver­gence. Then you re­ally would get the non-unique­ness prob­lem that Os­car_Cun­ning­ham points out. One ob­vi­ous pos­si­bil­ity that springs to mind is to break ties by least birth­day; that’s a very sur­real ap­proach to things. (Don’t take the supre­mum of a set S, in­stead just take {S|}.) So 1+1+1+… re­ally would con­verge to omega rather than some­thing else, and Os­car_Cun­ning­ham’s ex­am­ple re­ally would con­verge to 2. But it’s not clear to me that this would work nicely at all; in par­tic­u­lar, you still have the pathol­ogy that drop­ping the ini­tial “1” of 1+1+1+… some­how doesn’t cause the sum to drop by 1. Maybe some­thing to ex­plore, but not some­thing to as­sume that it works. (I per­son­ally wouldn’t bet on it, though that’s not nec­es­sar­ily worth much; I am hardly an ex­pert in the area.)
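For whatever it's worth, here is what that least-birthday rule would output in the two running examples (this is just the simplicity rule applied to the partial sums, not a claim that the resulting "limits" behave well):

$$\{\,1, 2, 3, \ldots \mid\,\} = \omega, \qquad \{\,1, \tfrac{3}{2}, \tfrac{7}{4}, \tfrac{15}{8}, \ldots \mid\,\} = 2.$$

And the first identity is exactly where the pathology lives: deleting the initial "1" from $1+1+1+\cdots$ leaves the same set of partial sums $\{1, 2, 3, \ldots\}$, so the "sum" stays $\omega$ instead of dropping to $\omega - 1$.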

Of course, I think the best sys­tem here re­ally is the real num­bers, or rather the ex­tended non­nega­tive real num­bers. It only has one un­differ­en­ti­ated in­finity, satis­fy­ing in­finity-1=in­finity, so we don’t have the prob­lem that 1+1+1+1+… should con­verge to both in­finity and in­finity-1. It has the least up­per bound prop­erty, so in­finite sums (of pos­i­tive things) are guaran­teed to con­verge (pos­si­bly to in­finity) -- this re­ally is what forces the real num­bers on us. There re­ally is a rea­son in­te­gra­tion is done with real num­bers. (I for one would ac­tu­ally ar­gue that util­ity should be bounded, but that’s an en­tirely sep­a­rate ar­gu­ment.) Sur­re­als, by con­trast, aren’t just a bad set­ting for limits; they’re pos­si­bly the worst set­ting for limits.

• Ar­rgh.

Yeah, this is ba­si­cally go­ing to kill this, isn’t it. Oh well. Oops.

… yeah, if we’re go­ing to use tiered val­ues we might as well just ex­plic­itly make them pro­gram tiers, in­stead of bring­ing in a whole class’ worth of math­e­mat­i­cal com­pli­ca­tion we don’t re­ally need.

Well. Thanks! I can offi­cially say I was less wrong than I was this morn­ing.

• Btw one thing worth not­ing if you re­ally do want to work with sur­re­als is that it may be more pro­duc­tive to think in terms of { stuff | stuff } rather than limits. (Similar to my “break ties by least birth­day” sug­ges­tion.) Se­quences don’t have limits in the sur­re­als, but there is nonethe­less a the­ory of sur­real ex­po­nen­ti­a­tion based on {stuff | stuff}. In­te­gra­tion… well, it’s less ob­vi­ous to me that in­te­gra­tion based on limits should fail, but if it does, you could try to do it based on {stuff | stuff}. (The ex­ist­ing failed at­tempt at a the­ory of sur­real in­te­gra­tion takes that ap­proach, though as I said above, that’s not re­ally the same thing, as that’s for func­tions with the sur­re­als as the do­main.)

• The extended non-negative reals really don't do what the OP was looking for. They won't even allow you to trade 1 life to save 10,000 lives, let alone have a hierarchy of values, some of which are tradable against each other and some of which are not.

• In­deed, they cer­tainly don’t. My point here isn’t “here is how you fix the prob­lem with limits while still get­ting the things OP wanted”. My point here is “here is how you fix the prob­lem with limits”. I make no claim that it is pos­si­ble or de­sir­able to get the things OP wanted. But yes I sup­pose it is pos­si­ble that there may be some way to do so with­out com­pletely screw­ing up limits, if we use a weird no­tion of limits.

• Go­ing back to the (ex­tended) re­als that do noth­ing in­ter­est­ing doesn’t strike me as a mean­ingful way of “fix­ing the prob­lem with limits” in this con­text, when ev­ery­body knows that limits work for those… It doesn’t re­ally fix any prob­lem at all, it just says you can’t do cer­tain things (namely, go be­yond the (ex­tended) re­als) be­cause that makes the prob­lem come up.

• Yes, that’s kind of my point. I’m not try­ing to do what the OP wanted and come up with a sys­tem of in­fini­ties that work nicely for this pur­pose. I’m try­ing to point out that there are very good rea­sons that we usu­ally stick to the ex­tended re­als for this, that there are very real prob­lems that crop up when you go be­yond it, and that be­come es­pe­cially prob­le­matic when you jump to the end and go all the way to the sur­re­als.

I’m not try­ing to fix prob­lems raised in the origi­nal post; I’m try­ing to point out that these are se­ri­ous prob­lems that the origi­nal post didn’t ac­knowl­edge—and the usual way we fix these is just not go­ing be­yond the ex­tended re­als at all so that they don’t crop up in the first place, be­cause these re­ally are se­ri­ous prob­lems. The ul­ti­mate prob­lem here is com­ing up with a de­ci­sion the­ory—or here just a the­ory of util­ity—and in that con­text, fix­ing the prob­lem by aban­don­ing goals that aren’t satis­fi­able and ac­cept­ing the triv­ial solu­tion that is forced on you is still fix­ing the prob­lem. (Depend­ing on just what you re­quire, stick­ing to the ex­tended re­als may not be to­tally forced on you, but it is hard to avoid and this is a prob­lem that the OP needs to ap­pre­ci­ate.)

The point isn’t “this is how you fix the prob­lem”, the point is “take a step back and get an ap­pre­ci­a­tion for the prob­lem and for what you’re re­ally sug­gest­ing be­fore you go rush­ing ahead like that”. The point isn’t “limits work in the ex­tended re­als”, the point is “limits work a lot less well if you go be­yond there”. I per­son­ally think the whole idea is mis­guided and util­ities should be bounded; but that is a sep­a­rate ar­gu­ment. But if the OP re­ally does want a vi­able the­ory along the lines he’s sug­gest­ing here even more than he wants the re­quire­ments that force the ex­tended re­als on us, then he’s got a lot more work to do.

• Off the top of my head, if the sur­re­als don’t al­low of tak­ing limits, the ob­vi­ous math­e­mat­i­cal move is to ex­tend them so that they do (cf. ra­tio­nals and re­als). Has any­one done this?

• I don’t think that’s re­ally pos­si­ble here. In gen­eral if you have an or­dered field, there is a thing you can do called “com­plet­ing” it, but I sus­pect this doesn’t re­ally do what you want. Ba­si­cally it adds in all limits of Cauchy nets, but all those se­quences that stopped be­ing con­ver­gent be­cause you tossed in in­finites­i­mals? They’re not Cauchy any­more ei­ther. If you re­ally want limits to work great, you need the least up­per bound prop­erty, and that takes you back to the re­als.

Of course, we don’t nec­es­sar­ily need any­thing that strong—we don’t nec­es­sar­ily need limits to work as well as in the re­als, and quite pos­si­bly it’s OK to re­define “limit” a bit. But I don’t think tak­ing the com­ple­tion solves the prob­lem you want.

(I sup­pose noth­ing’s forc­ing us to work with a field, though. We could per­haps solve the prob­lem by mov­ing away from there.)

As for the ques­tion of com­plet­ing the sur­re­als, in­de­pen­dent of whether this solves the prob­lem or not—well, I have no idea whether any­one’s done this. Off­hand thoughts:

• You’re work­ing with sur­re­als, so you may have to worry about foun­da­tional is­sues. Those are prob­a­bly ig­nor­able though.

• The sur­re­als may already be com­plete, in the triv­ial sense that it is im­pos­si­ble to get a net to be Cauchy in a non­triv­ial man­ner.

• Really, if we want limits for sur­re­als, we need to be tak­ing limits where the do­main isn’t a set. Like I said above, limits of sur­real func­tions of sur­re­als should work fine, and it’s maybe pos­si­ble to use this to get in­te­gra­tion to work too. If you do this I sus­pect off­hand any sort of com­ple­tion will just be un­nec­es­sary (I could be very wrong about that though).

• Which is the thing—if we want to com­plete it in a non­triv­ial sense, does that mean we’re go­ing to have to al­low “nets” with a proper class do­main, or… uh… how would this work with filters? Yikes. Now you’re run­ning into some foun­da­tional is­sues that may not be so ig­nor­able.

• Maybe it’s best to just ig­nore limits and try to for­mu­late things in terms of {stuff | stuff} if you’re work­ing with sur­re­als.

• I still think the sur­re­als are an in­ap­pro­pri­ate set­ting.

• From an an­ces­tor:

(Edit: In case it’s not clear, here by “as soon as you have in­finites­i­mals”, I mean “as soon as you have in­finites­i­mals pre­sent in your sys­tem”, not “as soon as you try to take limits in­volv­ing in­finites­i­mals”. My point is that, as Os­car_Cun­ning­ham also pointed out, hav­ing in­finites­i­mals pre­sent in the sys­tem causes the or­di­nary limits you’re used to to fail.)

And from cur­rent:

Ba­si­cally it adds in all limits of Cauchy nets, but all those se­quences that stopped be­ing con­ver­gent be­cause you tossed in in­finites­i­mals? They’re not Cauchy any­more ei­ther. If you re­ally want limits to work great, you need the least up­per bound prop­erty, and that takes you back to the re­als.

When you add infinities and infinitesimals to the reals (in the ordinary way; I haven't worked out what happens for the surreals), then you can still have limits and Cauchy sequences, you just have to also let your sequences be infinitely long (that is, not just having infinite total length, but containing members that are infinitely far from the start). This is what happens with non-standard analysis, and there are even theorems saying that it all adds up to normality.

But I agree that sur­re­als are not right for util­ities, and that re­als are (con­di­tional on util­ities be­ing right), and that even con­sid­er­ing just the pure math­e­mat­ics, com­plet­ing the sur­re­als in some way would likely in­volve foun­da­tional is­sues.

• When you add infinities and infinitesimals to the reals (in the ordinary way,

What on earth is the “or­di­nary way”? There are plenty of ways and I don’t know any of them to be the or­di­nary one. Do you mean con­sid­er­ing the hy­per­re­als?

(that is, not just hav­ing in­finite to­tal length, but con­tain­ing mem­bers that are in­finitely far from the start).

What? How does that help a se­quence be Cauchy at all? If there are in­finites­i­mals, the el­e­ments will have to get in­finites­i­mally close; what they do at the start is ir­rele­vant. Whether or not it’s pos­si­ble for se­quences to con­verge at all de­pends (roughly, I’m de­liber­ately be­ing loose here) on just how many in­finites­i­mals there are.

This is what hap­pens with non-stan­dard anal­y­sis, and there are even the­o­rems say­ing that it all adds up to nor­mal­ity.

I’ll ad­mit to not be­ing too fa­mil­iar with non-stan­dard anal­y­sis, but I’m not sure these the­o­rems ac­tu­ally help here. Like if you’re think­ing of the trans­fer prin­ci­ple, to trans­fer a state­ment about se­quences in R, well, wouldn’t this trans­fer to a state­ment about func­tions from N* to R*? Or would that even work in the first place, be­ing a state­ment about func­tions? Those aren’t first-or­der...

The hyperreals I'm pretty sure have enough infinitesimals that sequences can't converge (though I'll admit I don't remember very well). This isn't really that relevant to the hyperreals, though, since if you're doing non-standard analysis, you don't care about that; you care about things that have the appropriate domain and thus can actually transfer back to the reals in the first place. You don't want to talk about sequences; you want to talk about functions whose domain is some hyper-thing, like the hyper-naturals. Or maybe just hyper-analogues of functions whose domain is some ordinary thing. I'll admit to not knowing this too well. Regardless, that should get around the problem, in much the same way that, in the surreals, taking the domain to be the surreals largely gets around the problem...

• What on earth is the “or­di­nary way”? There are plenty of ways and I don’t know any of them to be the or­di­nary one. Do you mean con­sid­er­ing the hy­per­re­als?

Sorry, I think of non-stan­dard anal­y­sis as be­ing “the or­di­nary way” and the sur­re­als as “the weird way”. I don’t know any oth­ers.

I’ll ad­mit to not be­ing too fa­mil­iar with non-stan­dard anal­y­sis, but I’m not sure these the­o­rems ac­tu­ally help here. Like if you’re think­ing of the trans­fer prin­ci­ple, to trans­fer a state­ment about se­quences in R, well, wouldn’t this trans­fer to a state­ment about func­tions from N to R?

Yes, you get non-stan­dard se­quences in­dexed by N* in­stead of N, al­though what you ac­tu­ally do, which was the point of NSA, is ex­press the­o­rems about limits differ­ently: if this is in­finites­i­mal, that is in­finites­i­mal.

I just thought of Googling “sur­real anal­y­sis”, and it turns out to be a thing, with books. So one way or an­other, it seems to be pos­si­ble to do deriva­tives and in­te­grals in the sur­real set­ting.

• Sorry, I think of non-stan­dard anal­y­sis as be­ing “the or­di­nary way” and the sur­re­als as “the weird way”. I don’t know any oth­ers.

Well, R is the largest Archimedean ordered field, so any ordered extension of R will contain infinitesimals. The trivial way is just to adjoin one; e.g., take R[x] and declare x to be lexicographically smaller than any positive element of R (or larger than any element of R), and then pass to the field of fractions. Not particularly natural, obviously, but it demonstrates that saying "add infinitesimals" hardly picks out any construction in particular.
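Spelled out a little, under that choice of ordering we get

$$0 < x < r \ \text{ for every positive real } r, \qquad \frac{1}{x} > s \ \text{ for every real } s,$$

so in the fraction field R(x) the element x is a positive infinitesimal and 1/x is infinite; but nothing about this particular field makes it "the" way to add infinitesimals.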

(FWIW, I think of sur­re­als as “the kitchen sink way” and hy­per­re­als as “that weird way that isn’t ac­tu­ally unique but does use­ful things be­cause the­o­rems from logic say it re­flects on the re­als”. :) )

Yes, you get non-stan­dard se­quences in­dexed by N* in­stead of N, al­though what you ac­tu­ally do, which was the point of NSA, is ex­press the­o­rems about limits differ­ently: if this is in­finites­i­mal, that is in­finites­i­mal.

If I’m not mis­taken, I think that’s just how you use would ex­press limits of re­als within the hy­per­re­als; I don’t think you can nec­es­sar­ily ex­press limits within the hy­per­re­als them­selves that way. (For in­stance, imag­ine a func­tion f:R*->R* defined by “If x is not in­finites­i­mal, f(x)=0; oth­er­wise, f(x)=1/​omega” (where omega de­notes (1,2,3,...)). Ob­vi­ously, that’s not the sort of func­tion non-stan­dard an­a­lysts care about! But if you want to con­sider the hy­per­re­als in and of them­selves rather than as a means to study the re­als (which, ad­mit­tedly, is pretty silly), then you are go­ing to have to con­sider func­tions like that.)

I just thought of Googling “sur­real anal­y­sis”, and it turns out to be a thing, with books. So one way or an­other, it seems to be pos­si­ble to do deriva­tives and in­te­grals in the sur­real set­ting.

Oh, yes, I’ve seen that book, I’d for­got­ten! Be care­ful with your con­clu­sion though. Deriva­tives (just us­ing the usual defi­ni­tion) don’t seem like they should be a prob­lem off­hand, but I don’t think that book pre­sents a the­ory of sur­real in­te­gra­tion (I’ve seen that book be­fore and I feel like I would have re­mem­bered that, since I only re­mem­ber a failed at­tempt). And I don’t know how gen­eral what he does is—for in­stance, the defi­ni­tion of e^x he gives only works for in­finites­i­mal x (not an en­courag­ing sign).

I’ll ad­mit to be­ing pretty ig­no­rant as to what ex­tent sur­real anal­y­sis has ad­vanced since then, though, and to what ex­tent it’s based on limits vs. to what ex­tent it’s based on {stuff | stuff}, though. I was try­ing to look up ev­ery­thing I could re­lated to sur­real ex­po­nen­ti­a­tion a while ago (which led to the MathOverflow ques­tion linked above), but that’s not ex­actly the same thing as in­finite se­ries or in­te­grals...

• I think you just have to look at the collection of Cauchy sequences where "sequence" means a function from the ordinals to the surreals, and "Cauchy" means that the differences between terms eventually get smaller than any positive surreal.

• I’d be skep­ti­cal of that as­ser­tion. Even stick­ing to or­di­nary topol­ogy on ac­tual sets, trans­finite se­quences are not enough to do limits in gen­eral; in gen­eral you need nets. (Or filters.) Doesn’t mean you’ll need that here—might the fact that the sur­re­als are lin­early or­dered help? -- but I don’t think it’s some­thing you should as­sume would work.

But yeah it does seem like you’ll need some­thing able to con­tain a “se­quence” of or­der type that of the class of all or­di­nals; quan­tify­ing over or­di­nals or sur­re­als or some­thing in the “do­main”. (Like, as I said above, limits of sur­real-val­ued func­tions of a sur­real vari­able shouldn’t pose a prob­lem.)

In any case, se­quences or nets are not nec­es­sar­ily the is­sue. This still doesn’t help with in­finite sums, be­cause those are still just or­di­nary omega-se­quences. But re­ally the is­sue is in­te­gra­tion; in­finite sums can be ig­nored if you can get in­te­gra­tion. Does the “do­main” there have suffi­cient gran­u­lar­ity? Well, uh, I don’t know.

• Any­one new to this page: I’m ba­si­cally talk­ing about Haus­ner util­ities, ex­cept with sur­real num­bers need­lessly slapped on.

• Could utilities be multi-dimensional? Real vector spaces are much nicer to work with than surreal numbers.

For example, the utility for Frank being alive would be (1,0), while the utility for a seat cushion is (0,1). Using lexicographic ordering, (1,0) > (0,3^^^3).
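A minimal sketch of what that could look like in code (hypothetical, just to show the bookkeeping is easy: Python tuples already compare lexicographically, so the only real work is taking expectations tier-by-tier instead of collapsing everything to one number):

```python
from fractions import Fraction

# A utility is a tuple of tier values, highest tier first: (lives, cushions).
# Python compares tuples lexicographically, which is exactly the tier rule:
# any difference in the first coordinate settles the comparison.
frank_alive = (1, 0)
mountain_of_cushions = (0, 3**7)   # stand-in for an absurdly large cushion count

assert frank_alive > mountain_of_cushions   # no number of cushions outbids one life

def expected_utility(lottery):
    """Tier-wise expectation of a lottery given as (probability, utility-tuple) pairs."""
    tiers = len(lottery[0][1])
    return tuple(
        sum(Fraction(p) * u[i] for p, u in lottery)
        for i in range(tiers)
    )

# Gamble: 99% Frank lives and you get nothing, 1% Frank dies and you get the cushions.
gamble = [(Fraction(99, 100), (1, 0)), (Fraction(1, 100), (0, 3**7))]
sure_thing = [(Fraction(1), (1, 0))]

print(expected_utility(gamble))       # (Fraction(99, 100), Fraction(2187, 100))
print(expected_utility(sure_thing))   # (Fraction(1, 1), Fraction(0, 1))
print(expected_utility(sure_thing) > expected_utility(gamble))   # True
```

That last comparison is the continuity violation in action: no probability of losing Frank, however small, can be bought off by the cushion coordinate.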

Vec­tor val­ued util­ity func­tions vi­o­late the VNM ax­iom of con­ti­nu­ity, but who cares.

• Vec­tor val­ued util­ity func­tions vi­o­late the VNM ax­iom of con­ti­nu­ity, but who cares.

Sur­real val­ued ones do too. Vio­lat­ing the VNM ax­iom of con­ti­nu­ity is the whole point of the ex­er­cise. We don’t want a sec­u­lar value to be worth any non-zero prob­a­bil­ity of a sa­cred value, but we do want it to be bet­ter than noth­ing.

• I give seat cush­ions zero value. I give the com­fort they bring me zero value. The only valuable thing about them is the hap­piness they bring from the com­fort. Un­less the nanofab can make me as happy as my cur­rent hap­piness plus Frank’s com­bined, noth­ing it makes will be worth it. It prob­a­bly could, but that’s not the point.

As for the idea of sur­real util­ities, there’s noth­ing wrong with it in prin­ci­ple. The ax­iom they vi­o­late isn’t any­thing par­tic­u­larly bad to vi­o­late. The prob­lem is that, re­al­is­ti­cally speak­ing, you might as well just round in­finites­i­mal util­ity down to zero. If you con­sider a cush­ion to be worth in­finites­i­mally many lives, then if you’re given a choice that gives you an ex­tra cush­ion and has zero ex­pected change in the num­ber of lives, you’d take it. But you won’t get that choice. You’ll get choices where the ex­pected change in num­ber of lives is very small, but the ex­pected value from lives will always be in­finitely larger than the ex­pected value from cush­ions.

• See: Flaws. This is the same prob­lem as with Pas­cal’s Mug­ging, re­ally; it doesn’t go away when you switch to re­als, it just re­quires weirder (but still plau­si­ble) situ­a­tions.

Seat cushions are meant to be a slightly humorous example. Omega can also hook you up with infinite Fun, which was in the post that I'm quickly realizing could use a rewrite.

• In that case I’d pick the Fun. I ac­cept the re­pug­nant con­clu­sion and all, but the larger pop­u­la­tion still has to have more net hap­piness than the smaller one.

• *shrug* I did list that as a sep­a­rate tier. Sur­real Utilities are meant to be a way to for­mal­ize tiers; the ac­tual re­sult of the util­ity-com­pu­ta­tion de­pends on where you put your tiers.

The point of this post is to show that hu­mans re­ally do have tiers, and sur­re­als do a good job of rep­re­sent­ing tiers; the ques­tion of how to as­sign util­ities is an open one.

• How do you know hu­mans have tiers? The situ­a­tion has never come up be­fore. We’ve never had the in­finite co­in­ci­dence where the value at the high­est tier is zero.

Also, why does it mat­ter? It’s never go­ing to come up ei­ther. If you pro­gram an AI to have tiers, it will quickly op­ti­mize that out. Why waste pro­cess­ing power on lower tiers if it has a chance of helping with the higher ones?

• See: gedanken­ex­per­i­ment. I can guess what I’d choose given a blank white room.

And that is a flaw in the sys­tem. But it’s one that real-val­ued util­ity sys­tems have as well. See: Pas­cal’s Mug­ging. An AI vuln­er­a­ble to Pas­cal’s Mug­ging will just spend all its time break­ing free of a hy­po­thet­i­cal Ma­trix.

I did men­tion this un­der Flaws, you know...

• I would like to point out that Fun was listed as a sep­a­rate tier, and that whether or not to put it on the same tier as a hu­man life is en­tirely up to you. Sur­real util­ities aren’t much of a de­ci­sion the­ory, they’re just a way to for­mal­ize tiered val­ues; the ac­tual de­ci­sion you make de­pends en­tirely on the val­ues you as­sign by some other method.

• To me there is a very big difference between a probability of 0 and an exact infinitesimal probability, and I disagree that it is obvious they suffer from the same problems.

For example, if I choose a point uniformly from a unit line, the probability of picking some particular exact point is epsilon. If I were to pick a point from a unit square, the probability would be yet epsilon times smaller, for a total of epsilon*epsilon. If I were to pick a point from a line of length 2, the probability would only be half as large, for a total of epsilon/2.

Where usage of infinitesimal probabilities often fails is in not specifying which one, and treating them all as the same one. It is not the case that if you can't multiply an amount a finite number of times and end up with a finite amount, then all such amounts must be equal. If I multiply epsilon by the first-order infinite I get a finite 1. If I multiply epsilon*epsilon by the first-order infinite I get a positive amount that is still infinitesimal (exactly epsilon).
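In symbols, writing $\varepsilon = 1/\omega$ for the first-order infinitesimal, those three cases come out as

$$\omega \cdot \varepsilon = 1, \qquad \omega \cdot \varepsilon^{2} = \varepsilon, \qquad \omega \cdot \frac{\varepsilon}{2} = \frac{1}{2},$$

so the "zero-probability" events (a point on a unit line, a point in a unit square, a point on a line of length 2) stay distinguished by which infinitesimal they carry, instead of all being lumped together as 0.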

Much of the impact that infinite or infinitesimal probabilities have can be captured by using rules to the same effect. An example would be distinguishing between a "pure" 0 and "almost never", and between a "pure" 1 and "almost always". For the practical effect they might have, consider darts. There are various probabilities concerning which sector the dart lands in, and, for example, whether it lands on a line dividing areas. However, the numbers being passed around concerning lines will live a life largely separate from the math done for the areas. Now I can either take that separateness as a known fact outside the analysis, or have the analysis itself show it.

And there will be multiple types of zero probabilities. For example, given that the board was hit, the probability of the dart not hitting any specific area, any line separating areas, or any intersection of lines is zero. However, if I throw a dart, I know I should not expect to hit that exact spot again during the evening; the probability of its recurrence is an "impure" zero. The dart can still land there, and it won't magically avoid that spot. And no matter how many darts I throw, the probability of hitting an old spot increases, but I am still not expecting to actually hit one. However, if I notice that my probability of hitting an area divider or a line intersection is vanishing, then in practice I know to focus on the area ratios; but I won't accuse someone of lying if they report a single such occurrence during the time I know them. However, if they report two such occurrences, I have reason to be suspicious.

• I am aware of how in­finites­i­mals work. How­ever, con­sider Bayes’ the­o­rem: If you have an in­finites­i­mal prior, you have to find ev­i­dence weighted ω:1 in or­der to end up with a real pos­te­rior prob­a­bil­ity.
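In odds form:

$$\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}} \;=\; \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}} \times \underbrace{\frac{1}{\omega}}_{\text{prior odds}},$$

so for the posterior odds to be a non-infinitesimal real, the likelihood ratio has to be on the order of $\omega$; no finite pile of ordinary evidence gets you there.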

• While you might not kill Frank to get the machine, there has to be some small epsilon such that you would take the machine in exchange for an additional probability epsilon of Frank dying today. Wouldn't Frank agree to that trade? Wouldn't you agree to a small additional probability of dying yourself in exchange for the machine?

Other­wise, liv­ing in a big white room is go­ing to be a bit—ahem—dull for both of you.

I agree there is a difficulty here for any util­ity func­tion. The ma­chine can make un­limited quan­tities of sec­u­lar goods, so if 3^^^3 beau­tiful paint­ings are worth prob­a­bil­ity ep­silon of Frank dy­ing, why aren’t 4^^^^4 even more beau­tiful paint­ings worth prob­a­bil­ity 1 of Frank dy­ing? Pre­sum­ably be­cause Frank would ac­cept the former trade, but not the lat­ter one.

• Probably not, in a white room. That sort of risk trade-off makes sense in the real world, but a flat-out trade of a small chance of Frank's death for a secular value doesn't make sense to me in a white room.

That’s much the point of a sa­cred value: it doesn’t mat­ter how much I’d have to give up, a life is worth it.

This is, by the way, how I’d like for an FAI to think. Don’t worry about giv­ing us fancy books un­til af­ter we’re all as close to im­mor­tal as pos­si­ble, thanks; I’d rather wait an ex­tra year for my Fun life than lose a few more thou­sand lives.

• So you wouldn’t ac­cept the trade your­self i.e. a small risk of you dy­ing so that both you and Frank get to use the ma­chine and have an en­joy­able life? You’d pre­fer a dull life over any in­creased risk of death? In­ter­est­ing that you bite that bul­let.

I’d like to see ex­actly how this is dis-analo­gous from real-life. Clearly you use elec­tronic items to ac­cess the In­ter­net, which comes with some small risk of elec­tro­cut­ing your­self. What’s the differ­ence?

Some other thought ex­per­i­ments along these lines:

There are a billion peo­ple in the room, and the trade is that just one of them gets kil­led, and all the oth­ers get to use the won­der­ful ma­chine. Or each of them has a 1 in billion chance of get­ting kil­led (so it might be that ev­ery­one sur­vives, or that a few peo­ple die). Is there any moral differ­ence be­tween these con­di­tions? Does ev­ery­one have to con­sent to these con­di­tions be­fore any­one can get the ma­chine?

The ma­chine is already in the room, but it just hap­pens to have an in­her­ent small risk of elec­tro­cut­ing peo­ple nearby if it is switched on. That wasn’t any sort of “trade” or “con­di­tion” im­posed by Omega; the ma­chine is just like that. Is it OK to switch it on?

• ’Cause in real life, if I didn’t use a com­puter, I would mas­sively in­crease my chances of starv­ing, hav­ing no other mar­ketable skills.

In fact, in real life this al­most never comes up, be­cause the tiny chance of you out­right dy­ing is out­weighed by prac­ti­cal con­cerns. Hence the white-room, so I can take out all the ac­tual con­se­quences and bring in a flat choice. (Though ap­par­ently, I didn’t close all the loop­holes; ad­mit­tedly, some of them are le­gi­t­i­mate con­cerns about what a hu­man life ac­tu­ally means.)

At any rate, while my personal opinion is apparently shifting towards "nevermind, lives have a real value after all" (my answers would be "yes to unanimous consent, no to unanimous consent, and yes it would be", which implies a rather large Oops!), there are still plenty of places where it makes sense to draw a tier. Unfortunately, surreals turned out to be a terrible choice for such things purely for mathematical reasons, so if I ever try this again it will be with flat-out program classes named Tiers.

• Ac­tu­ally, be­fore I com­pletely throw up my hands, I should prob­a­bly figure out what seems differ­ent be­tween the one-on-one trade and the billion-to-one trade that changes my an­swers...

Oh, I see. It’s the tier­ing again, af­ter all. The in­finite Fun is it­self a sec­ond-tier value; whether or not it’s on the same tier as a life is its own de­bate, but a billion things pos­si­bly-equal-to-a-life are more likely to out­com­pete a life than a sin­gle one.

… of course, if you re­place “in­finite Fun” with “3^^^^3 years of Fun,” the tier­ing ar­gu­ment van­ishes but the prob­lem might not. Argh, I’m go­ing to have to re­think this.

• I decided some time ago that I don't really care about morality, because my revealed preferences say I care a lot more about personal comfort than saving lives and I'm unwilling to change that. I don't think I'd be willing to spend £50 to save the life of an anonymous stranger that I'd never meet, if I found out about a charity that efficient, so for the purposes of a thought experiment I should also be willing to kill Frank for such a small amount of money, assuming social and legal consequences are kept out of the way by Omega, and the utility of possibly befriending Frank isn't taken into account.

That aside, though, I think tak­ing the nanofab is ac­tu­ally the morally right choice. Two lives spent in an un­com­fortable fea­ture­less room are worth sig­nifi­cantly less than one life spent as a nigh-om­nipo­tent god. I’m not sure if let­ting/​mak­ing Frank con­tinue to live in the un­com­fortable fea­ture­less room is even of pos­i­tive util­ity to him. If I knew there wasn’t any more to life than the fea­ture­less un­com­fortable room, I would be con­tem­plat­ing suicide fairly quickly.

• In effect, the sa­cred value has in­finite util­ity rel­a­tive to the sec­u­lar value.

That’s no ac­cu­rate rep­re­sen­ta­tion of how hu­man’s value sa­cred val­ues. There are cases where peo­ple value get­ting X sa­cred util­i­tons over get­ting X sa­cred util­i­tons + Y sec­u­lar ul­tili­tons.

Emerging sacred values: Iran's nuclear program by Morteza Dehghani is a good read for getting a sense of how sacred values behave.

Sacred values prevent corruption.

• True—but I’d deem such a choice ir­ra­tional, and clearly mo­ti­vated by the de­sire not to ap­pear “money-grub­bing” more than an ac­tual be­lief that X > X+Y.

• I think there is quite some value in having sacred beliefs if you can demonstrate to other people that those beliefs are sacred.

Take a politician who thinks that solar subsidies are a good thing and who pushes for a law to that effect. Then a company manufacturing solar cells offers to give him $10,000 without any strings attached. $10,000 is utility for the politician, yet the politician shouldn't just accept the money and put it into his own pocket, even if he can do it in a way where nobody will notice.

There is value in the politician following a decision framework where he precommits against accepting certain kinds of utility. From a TDT perspective that might be the correct strategy.

• Thank you, if for noth­ing else, for clar­ify­ing my in­tu­itive sense that Dust Specks are su­pe­rior to Tor­ture. Your thought ex­per­i­ment clar­ified to me that tiers of util­ity DO match my value sys­tem.

• An al­ter­nate ti­tle for this post was “Sur­real Utilities and Seat Cush­ions.”

On a side note—I am not en­tirely sure what tags to ap­ply here, and I couldn’t seem to find an ex­haus­tive tag list (though I ad­mit­tedly didn’t work very hard.)