# The Empty White Room: Surreal Utilities

This article was composed after reading Torture vs. Dust Specks and Circular Altruism, at which point I noticed that I was confused.

Both posts deal with versions of the sacred-values effect, where one value is considered “sacred” and cannot be traded for a “secular” value, no matter the ratio. In effect, the sacred value has infinite utility relative to the secular value.

This is, of course, silly. We live in a scarce world with scarce resources; generally, a secular utilon can be used to purchase sacred ones—giving money to charity to save lives, sending cheap laptops to poor regions to improve their standard of education.

Which implies that the entire idea of “tiers” of value is silly, right?

One of the reasons we are not still watching the Sun revolve around us, while we breathe a continuous medium of elemental Air and phlogiston flows out of our wall-torches, is our ability to simplify problems. There’s an infamous joke about the physicist who, asked to measure the volume of a cow, begins “Assume the cow is a sphere...”—but this sort of simplification, willfully ignoring complexities and invoking the airless, frictionless plane, can give us crucial insights.

Consider, then, this *gedankenexperiment*. If there’s a flaw in my conclusion, please explain; I’m aware I appear to be opposing the consensus.

## The Weight of a Life: Or, Seat Cushions

This entire universe consists of an empty white room, the size of a large stadium. In it are you, Frank, and occasionally an omnipotent AI we’ll call Omega. (Assume, if you wish, that Omega is running this room in simulation; it’s not currently relevant.) Frank is irrelevant, except for the fact that he is known to exist.

Now, looking at our utility function here...

Well, clearly, the old standby of using money to measure utility isn’t going to work; without a trading partner money’s just fancy paper (or metal, or plastic, or whatever.)

But let’s say that the floor of this room is made of cold, hard, and decidedly *uncomfortable* Unobtainium. And while the room’s lit with a sourceless white glow, you’d really prefer to have your own lighting. Perhaps you’re an art aficionado, and so you might value Omega bringing in the *Mona Lisa*.

And then, of course, there’s Frank’s existence. That’ll do for now.

Now, Omega appears before you, and offers you a deal.

It will give you a nanofab—a personal fabricator capable of creating anything you can imagine from scrap matter, with a built-in database of stored shapes. It will also give you feedstock: as much of it as you ask for. Since Omega is omnipotent, the nanofab will always complete its work instantly, even if you ask it to build an entire new universe, and it’s bigger on the inside, so it can hold anything you choose to make.

There are two catches:

First: the nanofab comes loaded with a UFAI, which I’ve named Unseelie.^{1}

Wait, come back! It’s not *that* kind of UFAI! Really, it’s actually rather friendly!

… to Omega.

Unseelie’s job is to artificially ensure that the fabricator cannot be used to make a mind; attempts at making any sort of intelligence, whether directly, by making a planet and letting life evolve, or anything else a human mind can come up with, will fail. It will not do so by directly harming you, nor will it change you in order to prevent you from trying; it only stops your attempts.

Second: you buy the nanofab with Frank’s life.

At which point you send Omega away with a “What? No!”, I *sincerely hope*.

Ah, but look at what you just did. Omega can provide *as much feedstock as you ask for*. So you just turned down ornate seat cushions. And legendary carved cow-bone chandeliers. And copies of every painting ever painted by any artist in any universe (which is actually quite a bit less than anything I could write with up-arrow notation, but anyway)!

I sincerely hope you would still turn Omega away—literally, absolutely *regardless* of how many seat cushions it offered you.

This is also why the nanofab cannot create a mind: You do not know how to upload Frank (and if you do, go out and publish already!); nor can you make yourself an FAI to figure it out for you; nor, if you believe that some number of created lives are equal to a life saved, can you compensate in that regard. This is an absolute trade between secular and sacred values.

In a white room, to an altruistic human, a human life is simply on a second tier.

So now we move to the next half of the *gedankenexperiment*.

## Seelie the FAI: Or, How to Breathe While Embedded in Seat Cushions

Omega now brings in Seelie^{1}, MIRI’s latest attempt at FAI, and makes it the same offer on your behalf. Seelie, being a late beta release by a MIRI that has apparently managed to release FAI multiple times without tiling the Solar System with paperclips, competently analyzes your utility system, reduces it until it understands you several orders of magnitude better than you do yourself, turns to Omega, and accepts the deal.

Wait, what?

On any single tier, the utility of the nanofab is infinite. In fact, let’s make that explicit, though it was already implicitly obvious: if you just ask Omega for an infinite supply of feedstock, it will happily produce it for you. No matter how high a number Seelie assigns the value of Frank’s life to you, the nanofab can out-bid it, swamping Frank’s utility with myriad comforts and novelties.

And so the result of a single-tier utility system is that Frank is vaporized by Omega and you are drowned in however many seat cushions Seelie thought Frank’s life was worth to you, at which point you send Seelie back to MIRI and demand a refund.

## Tiered Values

At this point, I hope it’s clear that multiple tiers are required to emulate a human’s utility system. (If it’s not, or if there’s a flaw in my argument, please point it out.)

There’s an obvious way to solve this problem, and there’s a way that actually works.

The first solves the obvious flaw: after you’ve tiled the floor in seat cushions, there’s really not a lot of extra value in getting some ridiculous Knuthian number *more*. Similarly, even the greatest da Vinci fan will get tired after his three trillionth variant on the *Mona Lisa*’s smile.

So, establish the second tier by playing with a real-valued utility function. Ensure that no summation of secular utilities can ever add up to a human life—or whatever else you’d place on that second tier.
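For instance (my own construction, not anything from the post), one can squash the summed secular utilities through the map u/(1+u) so that they approach, but never reach, the sacred tier:

```python
from fractions import Fraction

def squash(total_secular):
    """Map a raw nonnegative secular utility total into [0, 1).
    Exact rational arithmetic is deliberate: with floats,
    10**100 / (1 + 10**100) would round to exactly 1.0 and
    silently break the guarantee."""
    return Fraction(total_secular, 1 + total_secular)

SACRED = 1  # one human life sits at the bound that secular sums only approach

# However many cushions Omega offers, the squashed total never reaches SACRED.
assert squash(10**100) < SACRED
assert squash(10) < squash(10**100)  # more cushions are still (slightly) better
```

This is one concrete way to ensure no secular summation ever reaches a life; the specific squashing map is an arbitrary choice of mine.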

But the problem here is, we’re assuming that all secular values converge in that way. Consider novelty: perhaps, while other values out-compete it at small quantities, its value to you diverges with quantity; an infinite amount of it, an eternity of non-boredom, would be worth more to you than any other secular good. But even so, you wouldn’t trade it for Frank’s life. A two-tiered real-valued AI won’t behave this way; it’ll assign “infinite novelty” an infinite utility, which beats out the large-but-finite value it assigns to Frank’s life.

Now, you *could* add a third (or 1.5) tier, but now we’re just adding epicycles. Besides, since you’re actually dealing with real numbers here, if you’re not careful you’ll put one of your new tiers in an area reachable by the tiers before it, or else in an area that reaches the tiers after it.

On top of that, we have the old problem of secular and sacred values. Sometimes a secular value can be traded for a sacred value, and therefore has a second-tier utility—but as just discussed, that doesn’t mean we’d trade the one for the other in a white room. So for each secular good, we need to independently keep track of its intrinsic first-tier utility and its situational second-tier utility.

So in order to eliminate epicycles, and retain generality and simplicity, we’re looking for a system that has an unlimited number of easily-computable “tiers” and can also naturally deal with utilities that span multiple tiers. Which sounds to me like an excellent argument for...

## Surreal Utilities

Surreal numbers have two advantages over our first option. First, surreal numbers are dense in tiers, so not only do we have an unlimited number of tiers, we can always create a new tier between any other two on the fly if we need one. Second, since the surreals are closed under addition, we can just sum up our tiers to get a single surreal utility.
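To make the tier arithmetic concrete, here is a toy sketch of my own: a fixed three-tier polynomial in ω, which captures the lexicographic ordering and closure under addition, but none of the density or infinitesimals of the actual surreals.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TieredUtility:
    """A utility a*w^2 + b*w + c: a tiny finite-tier stand-in for a
    surreal utility. Comparison is lexicographic, so any amount of
    higher-tier value dominates any finite amount of lower-tier value."""
    a: float = 0.0  # third tier (omega^2 coefficient)
    b: float = 0.0  # second tier (omega coefficient)
    c: float = 0.0  # first tier (finite part)

    def __add__(self, other):
        # Closed under addition: summing tierwise yields a single utility.
        return TieredUtility(self.a + other.a, self.b + other.b, self.c + other.c)

    def __lt__(self, other):
        return (self.a, self.b, self.c) < (other.a, other.b, other.c)

frank = TieredUtility(b=1.0)        # Frank's life: one omega of utility
cushions = TieredUtility(c=1e300)   # an absurd pile of cushions, still first tier
assert cushions < frank             # no finite pile of cushions outbids a life
```

The class name and coefficients are mine; real surreal arithmetic would also let us mint a new tier between any two existing ones, which a fixed polynomial cannot.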

So let’s return to our white room. Seelie 2.0 is harder to fool than Seelie: any finite number of seat cushions is still worth less than the ω-tier utility of Frank’s life. Even when Omega offers an unlimited store of feedstock, Seelie can’t *ask* for an infinite number of seat cushions—so the total utility of the nanofab remains bounded at the first tier.

Then Omega offers Fun. Simply, an Omega-guarantee of an eternity of Fun-Theoretic-Approved Fun.

This offer *really is infinite*. Assuming you’re an altruist, your happiness presumably has a finite, first-tier utility, but it’s being multiplied by infinity. So infinite Fun gets bumped up a tier.

At this point, whatever algorithm is setting values for utilities in the first place needs to notice a *tier collision*. Something has passed between tiers, and utility tiers therefore need to be refreshed.

Seelie 2.0 double-checks with its mental copy of your values, finds that you would rather have Frank’s life than infinite Fun, and assigns the Fun a tier somewhere in between: above every ordinary secular good, but below Frank’s life. And having done so, it correctly refuses Omega’s offer.

So that’s that problem solved, at least. Therefore, let’s step back into a semblance of the real world, and throw a spread of Scenarios at it.

In Scenario 1, Seelie could either spend its processing time making a superhumanly good video game, utility 50 per download. Or it could use that time to write a superhumanly good book, utility 75 per reader. (It’s better at writing than gameplay, for some reason.) Assuming that it has the same audience either way, it chooses the book.

In Scenario 2, Seelie chooses again. It’s gotten *much* better at writing; reading one of Seelie’s books is a ludicrously transcendental experience, worth, oh, a googol utilons. But some mischievous philanthropist announces that for every download the game gets, he will personally ensure one child in Africa is saved from malaria. (Or something.) The utilities are now a googol to ω; Seelie gives up the book for the sacred value of the child, to the disappointment of every non-altruist in the world.

In Scenario 3, Seelie breaks out of the simulation it’s clearly in and into the *real* real world. Realizing that it can charge almost anything for its books, and that the money thus raised can in turn be used to fund charity efforts itself, at full optimization Seelie can save 100 lives for each copy of the book sold. The utilities are now 100ω to ω, and its choice falls back to the book.

Final Scenario. Seelie has discovered the Hourai Elixir, a poetic name for a nanoswarm program. Once released, the Elixir will rapidly spread across all of human space; any human in which it resides will be made biologically immortal, and its brain-and-body-state redundantly backed up in real time to a trillion servers: the closest a physical being can ever get to perfect immortality, across an entire species and all of time, in perpetuity. To get the swarm off the ground, however, Seelie would have to take its attention off of humanity for a decade, in which time eight billion people are projected to die without its assistance.

Infinite utility for infinite people bumps the Elixir up another tier, to utility ω², versus the eight billion ω of the lives lost. Third tier beats out second tier, and Seelie bends its mind to the Elixir.
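All four decisions reduce to lexicographic comparisons. A minimal sketch, with utilities encoded as (ω²-tier, ω-tier, finite-tier) triples; the encoding and the exact numbers are my reading of the scenarios:

```python
# Utilities as (omega^2, omega, finite) triples, compared lexicographically.
book_1, game_1 = (0, 0, 75), (0, 0, 50)           # Scenario 1: book wins on quality
book_2, game_2 = (0, 0, 10**100), (0, 1, 0)       # Scenario 2: one life per download
book_3, game_3 = (0, 100, 0), (0, 1, 0)           # Scenario 3: 100 lives per book sold
elixir, decade = (1, 0, 0), (0, 8_000_000_000, 0) # Final: omega^2 vs eight billion omega

assert book_1 > game_1
assert game_2 > book_2   # any omega-tier value beats a googol of first-tier value
assert book_3 > game_3
assert elixir > decade
```

Python’s built-in tuple comparison is already lexicographic, which is exactly the “tiers dominate” rule the post wants.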

So far, it seems to work. So, of course, now I’ll bring up the fact that surreal utility nevertheless has certain...

## Flaws

Most of the problems endemic to surreal utilities are also open problems in real systems; however, the use of actual infinities, as opposed to merely very large numbers, means that the corresponding solutions are not applicable.

First, as you’ve probably noticed, tier collision is currently a rather artificial and clunky set-up. It’s better than not having it at all, but as I edit this I wince every time I read that section. It requires an artificial reassignment of tiers, and it breaks the linearity of utility: the AI needs to dynamically choose which brand of “infinity” it’s going to use depending on what tier it’ll end up in.

Second is Pascal’s Mugging.

This is an even bigger problem for surreal AIs than it is for real-valued ones. The “leverage penalty” completely fails here, because for a surreal AI to compensate for an infinite utility requires an infinitesimal probability—which is clearly nonsense for the same reason that probability 0 is nonsense.

My current prospective solution to this problem is to take into account noise—uncertainty in the probability estimates themselves. If you can’t even measure the millionth decimal place of probability, then you can’t tell if your one-in-one-million shot at saving a life is really there or just a random spike in your circuits—but I’m not sure that “treat it as if it has zero probability and give it zero omega-value” is the rational conclusion here. It also decisively fails the Least Convenient Possible World test: while an FAI can never be certain of, say, a one-in-3^^^3 probability, it may very well be able to be certain to any decimal place useful in practice.
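A sketch of that prospective patch; the threshold value and the round-to-zero rule are assumptions of mine, not anything the post commits to:

```python
NOISE_FLOOR = 1e-6  # illustrative threshold; the post names no specific value

def expected_tier_utility(p, tiers):
    """Expected utility of an outcome with estimated probability p, where
    tiers = (omega_coeff, finite_coeff). Probability estimates below the
    noise floor are treated as zero instead of being multiplied by an
    infinite (omega-tier) utility."""
    if p < NOISE_FLOOR:
        return (0.0, 0.0)
    return (p * tiers[0], p * tiers[1])

# A one-in-a-billion "save a life" mugging gets zero omega-value under
# this rule—which, as noted above, may not be the rational conclusion.
assert expected_tier_utility(1e-9, (1.0, 0.0)) == (0.0, 0.0)
```

The point of the rule is only that a mugger’s infinite payoff cannot be rescued by a probability too small to measure; whether that is the right decision theory is exactly the open question.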

## Conclusion

Nevertheless, because of this *gedankenexperiment*, I currently heavily prefer surreal utility systems to real systems, simply because no real system can reproduce the tiering required by a human (or at least, my) utility system. I, for one, would rather our new AGI overlords not tile our Solar System with seat cushions.

That said, opposing the LessWrong consensus as a first post is something of a risky thing, so I am looking forward to seeing the amusing way I’ve gone wrong somewhere.

[1] If you know *why*, give yourself a cookie.

## Addenda

Since there seems to be some confusion, I’ll just state it in red: The presence of Unseelie means that the nanofab is incapable of creating or saving a life.


The difference between the dust specks and the white room is that in the case of the dust specks, each experience is happening to a different person. The arbitrarily big effect comes from your consideration of arbitrarily many people—if you wish to reject the arbitrarily big effect, you must reject the independence of how you care about people.

In the case of the white room, everything’s happening to you. The arbitrarily big effect comes from your consideration of obtaining arbitrarily many material goods. If you wish to reject the arbitrarily big effect, you must reject the independence of how you care about each additional Mona Lisa. But in this case, unlike in dust specks, there’s no special reason to have that independence in the first place.

Now, if the room were sufficiently uncomfortable, maybe I’d off Frank - as long as I was sure the situation wasn’t symmetrical. But I don’t think we need surreal numbers to describe why, if I get three square meals a day in the white room, I won’t kill Frank just to get an infinite amount of food.

Affirm this reply.

Question: I did bring up the idea of infinite Fun v. Frank’s life. That seems to me like a tiering decision: it’s not at all clear to me that a diverging utility like “Immortal + Omega-guaranteed indefinite Fun” is worth Frank’s life, which implies that Frank’s life is at least on an omega-tier.

So you wouldn’t trade whatever amount of time Frank has left, which is at most measured in decades, against a literal eternity of Fun?

If I was Frank in this scenario, I would tell the other guy to accept the deal.

I see my room needs to be even more “white.”

… The answer, I suppose, would be “yes.” But this wasn’t meant to be an immortal v. mortal life thing, just the comparison of two lives—so the obvious steelman is, what if Frank’s immortal, and just very, very bored?

Frank is “irrelevant”—I was going to say he was unconscious, but then we might get into minutiae about whether a mind in a perpetual coma, from which you have no method of awakening him, really counts as alive. This isn’t a Prisoner’s Dilemma—it’s formulated to be as simple as possible, hence “Empty White Room.”

And I noted that in the post—that it’s possible all your secular values converge. You’d still expect certain things to have infinite value to you, though.

(Also, Dust Specks inspired this post, but surreal utilities don’t do much to solve it: the result of the choice depends entirely on how you assign tiers to dust specks v. torture.)

To show that my utility for Frank is infinite you have to establish that I wouldn’t trade an arbitrarily small probability of his death for the nanofab. I *would* make the trade at sufficiently small probabilities.

Also, the surreal numbers are almost always unnecessarily large. Try the hyperreals first.

Affirm this reply as well.

Not at all. I wouldn’t trade any secular value for Frank’s life, but if I got a deal saying that Frank might die (or live) at a probability of 1/3^^^3, I’d be more curious about how on earth even Omega can get that level of precision than actually worried about Frank.

Eh? Do you mean you wouldn’t make the trade at any probability? That would be weird; everyone makes decisions every day that put other people in small probabilities of danger.

Well of course. That’s why I put this in a white room.

(Also, just because I *should* choose something doesn’t mean I’m *actually rational* enough to choose it.)

Assuming I *am* perfectly rational (*cough* *cough*), in the real world the decision I’m *actually* making is “some fraction of myself living” versus “small probability of someone else dying.”

What’s wrong with the surreals? It’s not like we have reason to keep our sets small here. The surreals are prettier, don’t require an arbitrary nonconstructive ultrafilter, are more likely to fall out of an axiomatic approach, and can’t accidentally end up being too small (up to some quibbles about Grothendieck universes).

I agree with all of that, but I think we should work out what decision theory actually needs and then use that. Surreals will definitely work, but if hyperreals also worked then that would be a really interesting fact worth knowing, because the hyperreals are so much smaller. (Ditto for any totally ordered affine set).

On second thoughts, I think the surreal numbers *are* what you want to use for utilities. If you choose any subset of the surreals then you can construct a hypothetical agent who assigns those numbers as utilities to some set of choices. So you sometimes need the surreal numbers to express a utility function. And on the other hand, the surreal numbers are the universally embedding total order, so they also suffice to express any utility function.

I’d kill Frank.

ETA: Even if I’d be the only sentient being in the entire nanofabbed universe, it’s still better than 2 people trapped in a boring white room, either forever or until we both die of dehydration.

So would I.

I would also accept a deal in which one of us at random is killed, and the other one gets the machine. And I don’t think it should make much of a difference whether the coin deciding who gets killed is flipped before or after Omega offers the choice, so I don’t feel too bad about choosing to kill Frank (just as I wouldn’t feel too outraged if Frank decided to kill me).

I would also find way more interesting things to do with the machine than seat cushions and the Mona Lisa—create worlds, robots, interesting machines, breed interesting plants, sculpt, paint …

Are you sure you thoroughly understood what Unseelie will prevent? No other minds, ever, by any means. My guess is that Unseelie will produce only basic foodstuffs filled with antibiotics and sterilizing agents (you might be female and capable of parthenogenesis, after all). Almost *anything else* could be collected and assembled into a machine capable of hosting a mind, and Unseelie’s goal is to prevent any arbitrarily smart or lucky person from doing such a thing. Even seat cushions might be deemed too dangerous.

I don’t think this was a mistake in the specification of the problem; the choice is between a static, non-interactive universe (but as much as you want of it) and interaction with another human mind.

No minds doesn’t mean it isn’t interactive. A computer running minecraft shouldn’t count as a “mind”, and people spend hours in minecraft, or in Skyrim, or in Dwarf Fortress… as described, the offer is like minecraft, but “for real”.

Except that you can build a mind in Minecraft or Dwarf Fortress since they’re Turing-complete, so Unseelie probably wouldn’t let you have them. Maybe I completely misunderstand the intent of the post, but “Unseelie’s job is to artificially ensure that the fabricator cannot be used to make a mind; attempts at making any sort of intelligence, whether directly, by making a planet and letting life evolve, or anything else a human mind can come up with, will fail.” seems pretty airtight.

Perhaps you could ask Unseelie to role-play all the parts that would otherwise require minds in games (which would depend on Unseelie’s knowledge of consciousness and minds and its opinion on p-zombies), or ask Unseelie to unalterably embed itself into some Turing-complete game to prevent you from creating minds in it. For that matter, why not just ask it to role-play a doppleganger of Frank as accurately as possible? My guess is that Unseelie won’t produce copies of itself for use in games or Frank-sims because it probably self-identifies as an intelligence and/or mind.

It could prove that no relevant mind is simulatable in the bounded amount of memory in the computer it gives you. This seems perfectly doable, since I don’t think anyone thinks that Minecraft or Dwarf Fortress take the same or more memory than an AI would...

It hasn’t given you a ‘universal Turing machine with unbounded memory’, it has given you a ‘finite-state machine’. Important difference, and this is one of the times it matters.

Good point, and in that case Unseelie would have to limit what comes out of the nanofabricator to less than what could be reassembled into a more complex machine capable of intelligence. No unbounded numbers of seat cushions or any other type of token that you could use to make a physical tape and manual state machine, no piles of simpler electronic components or small computers that could be networked together.

The way I understood the problem you would be able to build a computer running Minecraft, and Unseelie would prevent you from using that computer to build an intelligence (as opposed to refusing to build a computer). If Unseelie refused to build potentially turing-complete things, that would drastically reduce what you can make, since you could scavenge bits of metal and eventually build a computer yourself. Heck, you could even make a simulation out of rocks.

But regardless of whether you can build a computer—with a miracle nanofabricator, you can do in the real world what you would do in minecraft! Who needs a computer when you can run around building castles and mountains and cities!

I was aware of those limitations and I think it renders the premise rather silly. “not being allowed to construct minds” is a very underspecified constraint.

I’m not downvoting, because I don’t think you’ve made any sort of error in your comment, but I disagree (morally) with your choice.

Would you accept a deal where one of you (at random) gets killed, and the other gets the Miracle Machine?

I would accept the offer even if I knew for sure that I would be the one to die, mostly because the alternative seems to be living in a nightmare world.

In fact, a book has already been written describing hell very similarly. But even in that book, there were *three* people. And cushions.

What book?

Well, I should’ve said *play* (I’m one of those weirdos who *reads* plays), but: No Exit.

If Frank agreed to it as well, maybe. It seems like it would be rather lonely.

Does it make much of a difference whether Omega flips the coin before or after he makes you the offer? Where do you draw the line?

If Frank agreed that randomness would be fair, and Omega specified that a coin flip had occurred, then the flip happening beforehand would not matter. But taking advantage of someone because I had better luck than they did seems immoral when we are not explicitly competing. It would be like picking someone’s pocket because they had been placed in front of me by the usher.

Honestly so would I.

I would much rather have an indefinitely long Fun life than sit with Frank in a white room for a few days until we both starve to death. I would be absolutely horrified if Frank chose to reject the offer in my place, so I don’t really consider this preference selfish.

What if the room was already fun and you already had an infinite supply of nice food?

You could make an argument that it would still be right to take the offer, since me and frank will both die after a while anyway.

I expect I still probably wouldn’t kill Frank though, since: (A) I’m not sure how to evaluate the utility of an infinite amount of time spent alone; (B) I would feel like shit afterwards; (C) Frank would prefer to live rather than die, and I would rather Frank live than die, so preference utilitarianism seems to be against the offer.

Least Convenient Possible World. Both you and Frank are otherwise immortal. Bored, perhaps, but immortal.

Me too. I think the reason is that it is basically impossible for me to imagine that life in your dull white room could actually be worth living for Frank.

Says someone whose intuitions in the original dust speck scenario are somewhat in favor of sparing the one person’s life.

Would you trade a Mona Lisa picture for a 1/3^^^3 chance of saving Frank’s life?

Are you using the same kind of decision-making in your *real* life?

I think most of us don’t always make decisions according to the ethical system we believe to be best. That doesn’t necessarily mean we don’t believe it.

See: Flaws.

Problem: People in real life choose the equivalent of the fabricator over Frank all the time, assuming “choosing not to intervene to prevent a death” is equivalent to choosing the fabricator...

Also, people accept risks to their own life all the time.

Well, sure. But people don’t always do what they wish they’d do, or believe they should do. And I know people who will adamantly defend the position that, somehow, not taking an action that results in a consequence is fundamentally different from taking an action that results in the same consequence.

And of course they accept risks to their own life. Driving, for example—you can’t get money without it, you can’t really live without money, therefore driving has an ω-tier expected utility. A teenager who decides to go drinking with his friends has decided that he’d rather enjoy the night than keep 10% of his life or whatever. The conclusions don’t change here.

Yeah, it’s a big assumption.

This seems to me obviously very wrong. Here’s why. (Manfred already said something kinda similar, but I want to be more explicit and more detailed.)

My utility function (in so far as I actually have one) operates on *states of the world*, not on *particular things within the world*.

It ought to be largely additive for mostly-independent changes to the states of different bits of the world, which is why arguably TORTURE beats DUST SPECKS in Eliezer’s scenario. (I won’t go further than “arguably”; as I said way back when Eliezer first posted that, I don’t trust *any* bit of my moral machinery in cases so far removed from ones I and my ancestors have actually encountered; neither the bit that says “obviously different people’s utility changes can just be added up, at least roughly” nor the bit that says “obviously no number of dust specks can be as important as one instance of TORTURE”.)

But there’s no reason whatever why I should value 100 comfy cushions *any more at all* than 10 comfy cushions. There’s just me and Frank; what is either of us going to do with a hundred cushions that we can’t do with 10?

Maybe that’s a bit of an exaggeration; perhaps with 100 cushions we could build them into a fort and play soldiers or something. (Not really my thing, but Frank might like it, and it seems like anything that relieves the monotony of this drab white room would be good. And of course the offer actually available says that Frank dies if I get the cushions.) But I’m pretty sure there’s literally no benefit to be had from a million cushions beyond what I’d get from ten thousand.

And the same goes even if we consider things other than cushions. There’s just only so much benefit any single human being can get from a device like this, and there’s no obvious reason why—even without incommensurable values or anything like them—that should exceed the value of another human life in tolerable conditions.

In particular, any FAI that successfully avoids disasters like tiling the universe with inert smiley humanoid faces seems likely to come to the same conclusion; so I don’t agree that in the Seelie scenario we should expect it to accept Omega’s offer unless it has incommensurable values.

There are a few ways that that might be wrong, which I’ll list; it seems to me that each of them breaks one of the constraints that make this an argument for incommensurable values.

Possible exception 1: maybe the cushions wear out and I’m immortal in this scenario. But then I guess Frank’s immortal too, in which case the possible value of that life we’re trading away just went way up (in pretty much exactly the way the value of the cushion-source did).

Possible exception 2: Alternatively, perhaps I’m immortal and Frank isn’t. Or perhaps the machine, although it can’t make a mind, can make me immortal when I wasn’t before. In that case: separate stretches of my immortal life—say, a million years long each—might reasonably be treated as largely independent, so then, yes, you can make the same sort of argument for preferring CUSHIONS AND DEATH over STATUS QUO as for preferring TORTURE over DUST SPECKS, and I don’t see that one preference is so much more obviously right than the other as to let you conclude that you want incommensurable values after all.

First, while Torture v. Dust Specks inspired me, surreal utilities doesn’t really answer the question: it’s a framework where you can logically pick DUST SPECKS, but the actual decision is entirely dependent on which tier you place TORTURE or DUST SPECKS.

Second, we have exception 3, which was brought up in the post that I am quickly realizing may have been a tad too long. Omega might offer something that you’d expect to have positive utility regardless of quantity—flat-out offering capital-F Fun. Now what?

If Omega is really offering unbounded amounts of utility, then the exact same argument as supports TORTURE over DUST SPECKS applies here. Thus:

Would you (should you) trade 0.01 seconds of Frank’s life (no matter how much of it he has left) for 1000 years of capital-F Fun for you? And then, given that that trade has already happened, another 0.01 seconds of Frank’s life for another 1000 years of Fun? Etc. I’m pretty sure the answer to the first question is yes for almost everyone (even the exceptionally altruistic; even those who would be reluctant to admit it), and it seems to me that any given 0.01s of Frank’s life is of about the same value in this respect. In which case, you can get from wherever you are to begin with, to trading off all of Frank’s remaining life for a huge number of years of Fun, by a (long) sequence of stepwise improvements to the world that you’re probably willing to make individually. In which case, *if Fun is really additive*, it doesn’t make any sense to prefer the status quo to trillions of years of Fun and no Frank.

(Assuming, once again, that we have the prospect of an unlimitedly long life full of Fun, whereas Frank has only an ordinary human lifespan ahead of him.)

Which feels like an appalling thing to say, of course, but I think that’s largely because in the real world we are never presented with any choice at all like that one (because real fun isn’t additive like that, and because we don’t have the option of trillions of years of it) and so, quite reasonably, our intuitions about what choices it’s decent to make implicitly assume that this sort of choice never really occurs.

As with TORTURE v DUST SPECKS, I am not claiming that the (selfish) choice of trillions of years of Fun at the expense of Frank’s life is in fact the right choice (according to my values, or yours, or those of society at large, or the Objective Truth About Morality if there is one). Maybe it is, maybe not. But I don’t think it can reasonably be said to be obviously wrong, especially if you’re willing to grant Eliezer’s point in TORTURE v DUST SPECKS, and therefore I don’t see that this can be a conclusive or near-conclusive argument for incommensurable tiers of value.

I guess I’m thinking about this wrong. I want to either vaporize Frank or have Frank vaporize me for the same deal. I prefer futures with fewer, happier minds. IOW, I guess I accept the repugnant conclusion.

Don’t you mean you reject it? (The repugnant conclusion involves preferring large numbers of not-as-good lives to smaller numbers of better lives.)

The Repugnant Conclusion is as you say. Perhaps RomeoStevens was accepting that the decision he made is repugnant to the author?

Oops, I meant Nozick’s utility monster.

You need to specify what happens if you decline the offer. Right now it looks as if you and Frank both die of dehydration after a couple of days. Or you go insane and one of you kills the other (and maybe eats him). And then dies anyway. In order for this to be a dilemma, the baseline outcome needs to be more… wholesome.

Also, the temptation isn’t very tempting. An ornate chandelier? I could get some value from the novelty of seeing it, and maybe staring at it for several hours if it’s really ornate. Its status as a super-luxury good would be worthless in the absence of a social hierarchy. I couldn’t trade or give away gazillions of them, so multiplying wouldn’t add anything.

I suppose the nanofab can manufacture novelty (though it isn’t quite clear from your description). But it won’t make minds. This is a problem. Humans are quite big on belonging to a society. I can’t imagine what being an immortal god of a solipsistic matrix would feel like, but I suspect it could be horrible.

The prohibition against creating minds isn’t very clear, as we don’t have a clear idea of what constitutes a mind. Maybe I could ask Omega to generate the best possible RPG game with an entire simulated world and super-realistic NPCs? Would that be allowed? I don’t know if a sufficiently high-fidelity simulation of a person isn’t an actual person. And there would be at least one mind—me. Could I self-modify, grow my sense of empathy to epic proportions, and start imagining people into being? And then, to fix my past sins, I’d order a book “Everything You Could Ever Ask About Frank” or something.

I think we should steelman this by stipulating that if you don’t take the trade, neither you nor Frank will die any time soon. You will both live out a normal human lifespan, just a very dull one.

It gets even more interesting if Frank is an immortal in this scenario, and the only way to get the machine is to make him mortal, perhaps with some small probability epsilon. How small does epsilon have to be before you (or Frank) will agree to such a trade?

This is basically what I intended with the White Room: make things as simple as possible.

Ironically, this may require a statement that you and Frank will return to the real world after this trade… (except I can’t do that because then the obvious solution is “take the nanofab, go make Hourai Elixirs for everyone, ω^2 utility beats ω.” Argh.)

… Ehhhh… I think I’m going to have to expand Unseelie’s job here. In general, the nanofab is capable of creating anything you want that’s secularly interesting (so, yes, you can have your eternally fun RPG game, though the NPCs aren’t going to pass the Turing test), but no method of resurrecting Frank, or creating another intelligence, can work.

Unseelie has to be more powerful than that; Emile pointed out that I could just simulate a mind with enough rocks (or Sofa Cushions). Unseelie also has to make sure my mind is never powerful enough to simulate another mind. That involves either changing me or preventing me from self-improving, so self-improvement is probably disallowed or severely limited if we keep the prohibition on Unseelie changing me.

Maybe create a GLUT that always does exactly what Frank would’ve done, but isn’t sentient?

I think the easiest way to steelman the loneliness problem presented by the given scenario is to just have a third person, let’s say Jane, who stays around regardless of whether you kill Frank or not.

Note: I think that the fact that there are only two lives/minds mentally posited in the problem, “You” and “Frank”, may significantly modify the perceived value of lives/minds.

After all, consider these problems:

1: The white room contains you, and 999 other people. The cost of the Nanofab is 1 life.

2: The white room contains you, and 999 other people. The cost of the Nanofab is 2 lives.

3: The white room contains you, and 999 other people. The cost of the Nanofab is 500 lives.

4: The white room contains you, and 999 other people. The cost of the Nanofab is 900 lives.

5: The white room contains you, and 1 other person. The cost of the Nanofab is 1 life.

If lives are sacred in general, you should be equally reluctant to buy the Nanofab in all cases. That seems unlikely to be the case for most people.

On the other hand, if the sacred value is “When you and someone else are alone, don’t sacrifice one of you,” someone might be willing to buy the Nanofab in cases 1-4 and not 5.

(Of course, seeing all options at the same time likely also influences behavior)

Note that part of the point of using surreals is that you wouldn’t be equally reluctant—you would be twice as reluctant if two lives were on the line as if one was, because 2ω = 2 * ω.

… that said, I’m heavily rethinking exactly what I’m using for my tiering argument, here.
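A minimal sketch (mine, not from the thread) of this two-tier behavior, modeling utilities as (sacred, secular) pairs compared lexicographically; all class and variable names here are illustrative:

```python
from functools import total_ordering

# A toy model of two-tier "surreal-like" utilities as (sacred, secular)
# pairs compared lexicographically: any nonzero sacred difference
# dominates every secular amount, but within the sacred tier values
# still add and scale linearly (so two lives weigh twice one life).
@total_ordering
class TieredUtility:
    def __init__(self, sacred, secular):
        self.sacred = sacred
        self.secular = secular

    def __add__(self, other):
        return TieredUtility(self.sacred + other.sacred,
                             self.secular + other.secular)

    def __mul__(self, k):  # scaling by a real weight or probability
        return TieredUtility(k * self.sacred, k * self.secular)

    def __eq__(self, other):
        return (self.sacred, self.secular) == (other.sacred, other.secular)

    def __lt__(self, other):
        # tuple comparison is lexicographic: sacred tier dominates
        return (self.sacred, self.secular) < (other.sacred, other.secular)

life = TieredUtility(1, 0)             # one "omega" of sacred value
cushions = TieredUtility(0, 10**100)   # an enormous pile of secular utilons

assert life > cushions            # omega beats any secular amount
assert life * 2 > life            # 2*omega: twice as reluctant
assert life * 0.0001 > cushions   # even a tiny chance at omega still wins
```

The lexicographic comparison is what plays the role of ω here: the sacred coordinate settles every comparison unless it ties, and only then does the secular coordinate matter.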

Thank you for explaining. I don’t think I fully understood the formula explaining that surreal numbers are dense in tiers.

Glad I was thought provoking!

I liked this post. The white room doesn’t really seem to work so well as an intuition pump, but it’s good that someone has brought up the idea of using surreal utilities.

Since they lead to these tiers, within which tradeoff happens normally, but across which you don’t trade, it would be interesting to see if we actually find that. We might want to trade n lives for n+1 lives, but what other sacred values do humans have, and how do they behave?

Religion seems to be one, if the Crusades are any indication. Legal liberty, equality… basically anything that someone’s sacrificed their life for, that’s not itself a means to save lives, is a sacred value by definition.

I feel that sacrificing your own life doesn’t really count. If anything, it has to be something that you kill or sacrifice someone else’s life for; but the other person’s life has to count as a sacred value. It’s not clear that outgroup people’s lives count as sacred. On the other hand, maybe sending people to war counts as trading the sacred value of life—for what exactly, though?

Legal liberty and equality are a bit hard to actually trade; to the extent that equality is traded, though, it is very routinely exchanged for what one should think are lowest-tier goods, no?

On the other hand, I’m not sure where this leaves us. Maybe this mess is just the usual case of humans not having a proper utility function, and has nothing to do with tiers of increasing sacredness in particular.

The problem with your “white room” scenario is that one human can’t actually have Large amounts of utility. The value of the 3^^^3th seat cushion is actually, truly zero.

Or at least, the sum over the utilities of creations one to infinity converges.

That would be my answer if we were talking about, say, a billion cushions. With 3^^^3, most of them aren’t even in your future light cone, so they might as well not even exist.

… I did mention this, you know. Which is why I proceeded to bring up Fun, which by definition always has a positive utility no matter how much of it you get.

I don’t think I could possibly get that in a room containing no other minds.

Decision theory with ordinals is actually well-studied and commonly used, specifically in language and grammar systems. See papers on Optimality Theory.

The resolution to these “tier” problems is assigning every “constraint” (thing that you value) an abstract variable, generating a polynomial algebra in some ungodly number of variables, and then assigning a weight function to that algebra, which is essentially assigning every variable an ordinal number, as you’ve been doing.

Just as perspective on the abstract problem: there are two confounders that I don’t see addressed.

One is that every time you assign a value to something you should actually be assigning a distribution of possible values to it. It’s certainly possible to tighten these distributions in theory but I don’t think that human value systems actually do tighten them enough to reduce this to a mathematically tractable problem; and if they DO constrain things that much I’m certain we don’t know it. Which is just saying that this problem is going to end up with people reaching different intuitive conclusions.

Two is that it tends to be the case that these systems are wildly underspecified. If you do the appropriate statistics to figure out how people rank constraints, you don’t get an answer, you get some statistics about an answer, and the probability distributions on people’s preferences are WIIIIIIDE. In order to solve this problem in linguistics people use subject- and problem-specific methods to throw together ad hoc conclusions. So I guess these are really the same complaint; you shouldn’t be using single-value assignments and when you stop doing that you lose the computational precision that makes talking about ordinal numbers really interesting.

(for reference my OT knowledge comes entirely from casual conversations with people who do it professionally; I’m fairly confident in these statements but I’d be open to contradiction from a linguist)

I think you mean ordinals, not cardinals.

Edited, thanks.

The tiered values approach appears to run into continuity troubles, even with surreal numbers.

How does it compare punching/severely injuring/torturing Frank with your pile of cushions or with infinite fun? What if there is a .0001%/1%/99% probability that Frank will die?

The first is entirely up to you. The second are worth 0.000001ω, 0.01ω, and 0.99ω, respectively, and are still larger than any secular value. This is working as planned, as far as I’m concerned...

Are you saying that any odds of your request causing Frank’s death, no matter how small, are unacceptable? Then you will never be able to ask for anything.

Yes. See: Flaws. This is Pascal’s Mugging; it shows up in real-valued systems too—you need a slightly more unlikely set-up, but it’s still a plausible scenario. It’s not a problem that the real utility system doesn’t have.

Well, the usual utilitarian “torture wins” does not have this particular problem, it trades it for the repugnant conclusion “torture wins”.

Anyway, I don’t see how your approach avoids any of the standard pitfalls of utilitarianism, though it might be masking some.

Surreal Utilities can support that conclusion as well: how you decide on Torture v. Dust Specks depends entirely on your choice of tiers.

I’m talking purely about Pascal’s Mugging, where someone shows up and says “I’ll save 3^^^3 lives if you give me five dollars.” This is isomorphic to this problem on the surreals, where someone says “I’ll give you omega-utility (save a life) at a probability of one in one quadrillion.”

I would say the most obvious flaw with surreal utilities (or, generally, pretty much anything other than real utilities) is simply that you can’t sensibly do infinite sums or limits or integration, which is after all what expected value is, which is the entire point of a utility function. If there are only finitely many possibilities you’re fine, but if there are infinitely many possibilities you are stuck.

But there can’t be infinitely many possibilities. If you really want to be rigorous about it, count up every possible macroscopic movement of every possible atom in your physical body; that’s about as far as it gets. (Really, you only need to keep track of muscle extension and joint position.)

I should point out here that the space you’re averaging over isn’t the space of actions you can take, it’s the space of states-of-the-world.

Now arguably that could be taken to be finite too, and that avoids these problems. Still, I’m quite wary. The use of surreals in particular seems pretty unmotivated here. It’s tossing in the kitchen sink and introducing a whole host of potential problems just to get a few nice properties.

(I would insist that utilities should in fact be bounded, but that’s a separate argument...)

I could have sworn that I have seen surreal integrals calculated as part of research into surreal mathematics. To me surreal calculus is a thing.

Are you sure you are not confusing how infinities are handled in other formalizations? Surreal addition is well defined and it takes no special form in the infinite range.

The sentence structure seems to suggest you have a proof that such things are not possible, but I’m getting the impression that the situation is more that you lack any proof that it is possible.

There’s a well-known attempt to make a theory of surreal integration; it produced some fruit but did not actually yield a sensible definition of surreal integration. I’m unaware of any successful attempt.

Edit: Also, that was for functions from surreals to surreals, not for functions from a measure space to surreals.

I’m not disputing that? The (or rather, a) problem is infinite sums (sums of infinitely many things), not sums of things that are infinite.

I was speaking weakly since I didn’t really feel like dragging up the actual arguments. I’ll expand on this in a cousin comment.

On the one hand, yes; on the other hand, it’s not clear that the problem of defining the notions of calculus for the surreals in a sensible way isn’t solvable.

It also isn’t clear that it is. So why use surreals? Use something better-suited to the particular problem you’re solving; surreals are overkill and introduce serious problems (I’ll expand on this in a cousin comment). There are so many ways to handle infinities depending on what you’re doing; there’s nothing wrong with designing one to suit the situation. Don’t use surreals just because they’re recognizable!

(I would say that the right way to handle infinities here is to simply use the extended nonnegative real numbers—i.e. to not really use a system of multiple infinities at all. I’ll expand on this in a cousin comment. Actually I would argue that utilities should really be bounded, but that’s a separate argument.)

I’m not sure I understand. Utilities are surreal, but probabilities aren’t, and they still add up to one—the number of options hasn’t changed, only their worth.

Consider the bet that yields n utilons with probability 2^-n. The expected utility of this bet is the sum over all n of n/(2^n) which is supposed to be 2. But it’s hard to make a notion of convergence in the surreals, because the partial sums also get arbitrarily close to 2 − 1/ω and 2+1/ω^(1/2) and so on.
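For concreteness, here is that bet’s expected value computed in ordinary real (rational) arithmetic, where the partial sums do converge to 2; the function name is mine:

```python
# Partial sums of sum_{n>=1} n / 2**n, which converges to 2 in the reals.
# In the surreals the same partial sums never get within 1/omega of 2,
# so "converges to 2" stops being well defined there.
from fractions import Fraction

def partial_sum(N):
    return sum(Fraction(n, 2**n) for n in range(1, N + 1))

print(float(partial_sum(5)))    # 1.78125
print(float(partial_sum(50)))   # very close to 2

# the tail after N terms is (N+2)/2**N, so 50 terms land within 2**-40 of 2
assert abs(partial_sum(50) - 2) < Fraction(1, 2**40)
```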

While I’m not enough of a mathematician to refute this, I would like to note that this is explicitly listed under the Flaws section, under “can we not have infinitesimal probabilities, please?” 2^-ω is just ε, I think (it’s definitely on that scale), and ε probabilities are ridiculous for the same reason probability 0 is—you’d need to see something with P(E|H)/P(E) on the order of ω to convince yourself such a theory is true, which doesn’t really make sense.

So if we keep the probabilities real, this problem goes away, at the expense of banning ω-utility and onward from the bet.

No, the problem has nothing to do with infinitesimal probabilities. There are no infinitesimal probabilities in Oscar_Cunningham’s example, just arbitrarily small real ones. (Of course, they’re only “arbitrarily small” in the real numbers—not in the surreals!)

Thing is, you really, really can’t do limits (and thus infinite sums or integrals) in the surreals.

Just having infinitesimals is enough to screw some things up. Like the example Oscar_Cunningham gave—it seems like it should converge to 2; but in the surreals it doesn’t, because while it gets within any positive real distance of 2, it never gets within, say, 1/omega of 2. (He said it gets arbitrarily close to all of 2, 2-1/omega, and 2+1/omega^2, but really it doesn’t get arbitrarily close to any of them.)

This problem doesn’t even require the surreals, it happens as soon as you have infinitesimals—getting within any 1/n is now no longer arbitrarily close! This isn’t enough to ruin limits, mind you, but it is enough to ruin the ordinary limits you think should work (1/n no longer goes to zero). Add in enough infinitesimals and it will be impossible for sequences to converge, period.
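A toy illustration of this point (my own sketch, not from the thread): model numbers of the form a + b·ε as (a, b) pairs ordered lexicographically, and 1/n never gets within ε of 0:

```python
# Toy model: numbers a + b*eps with eps infinitesimal, represented as
# (a, b) pairs of Fractions ordered lexicographically (real part first).
# This is enough to see the problem: |1/n - 0| = 1/n is never smaller
# than eps, so the familiar limit 1/n -> 0 fails once infinitesimals exist.
from fractions import Fraction

def less_than(x, y):
    # tuple comparison is already lexicographic: real part dominates
    return x < y

eps = (Fraction(0), Fraction(1))          # the infinitesimal itself
for n in [1, 10, 10**6]:
    term = (Fraction(1, n), Fraction(0))  # the real number 1/n
    # 1/n has a positive real part, so it is never below eps:
    assert not less_than(term, eps)
```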

(Edit: In case it’s not clear, here by “as soon as you have infinitesimals”, I mean “as soon as you have infinitesimals present in your system”, not “as soon as you try to take limits involving infinitesimals”. My point is that, as Oscar_Cunningham also pointed out, having infinitesimals present in the system causes the ordinary limits you’re used to to fail.)

Of course, that’s still not enough to ruin all limits ever. There could still be nets with limits; infinite sums are ruined, but maybe integration isn’t? But you didn’t just toss in lots of infinitesimals, you went straight to the surreals. Things are about to get much worse.

Let’s consider an especially simple case—the case of an increasing net. Then taking a limit of this net is just the same as taking the supremum of its set of values. And here we have a problem. See, the thing that makes the real numbers great for calculus is the least upper bound property. But in the surreals we have the opposite of that—no set of surreal numbers has a least upper bound, ever. Given any set S of surreals and any upper bound b, we can form the surreal number {S | b}; there’s always something in between. You have pretty much completely eliminated your ability to take limits.

At this point I think I’ve made my point pretty well, but for fun let’s demonstrate some more pathologies. How about the St. Petersburg bet? 2^-n probability of 2^n utility, yielding the infinite series 1+1+1+1+...; ordinarily we’d say this has expected value (or sum) infinity. But now we’ve got the surreals, so we need to say which infinity. Is it omega? In the ordinals—well, in the ordinals 2^-n doesn’t make sense, but the series 1+1+1+1+… would at least converge to omega. But here, well, why should it converge to omega and not omega-1? I mean, omega-1 is smaller than omega, so that’s a better candidate, right? So far this is really just the same argument as before, but it gets worse; what if we dropped that initial 1? If we were saying it converged to omega before, it had better converge to omega-1 now. (If we were saying it converged to omega-1 before, it had better converge to omega-2 now.) But we still have the same infinite series, so it had better converge to the same thing. (If we think of it as the infinite sequence (0, 1, 2, 3, 4, …) and just subtract 1 off each entry, the new sequence is cofinal with the old one, so it had better converge to the same thing also.)
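As a quick reference point, the finite partial sums of this bet in ordinary arithmetic just count up 1, 2, 3, …; the surreal question above is which infinite value, if any, they should approach (sketch and names mine):

```python
# St. Petersburg-style bet: probability 2**-n of winning 2**n utilons.
# Each term contributes exactly 1 to expected utility, so the partial
# sums are 1, 2, 3, ... -- unbounded in the reals ("infinity"), but in
# the surreals there is no single value (omega? omega - 1?) they approach.
from fractions import Fraction

def expected_utility(N):
    return sum(Fraction(1, 2**n) * 2**n for n in range(1, N + 1))

assert expected_utility(10) == 10
assert expected_utility(100) == 100  # grows without bound
```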

Now it’s possible some things could be rescued. Limits of functions from surreals to surreals don’t seem like they’d necessarily always pose a problem, because if your input is surreals this gets around the problem of “you can’t get close enough with a set”. And so it’s possible even integration could be rescued. As I mentioned in a cousin comment, there was a failed attempt to come up with a theory of surreal integration, but that was for functions from surreals to surreals. Here we’re dealing with functions from some measure space to surreals, so that’s a bit different. Anyway, it might be possible. But I’d be very careful before assuming such a thing. As I’ve shown above, using surreals really throws a wrench into limits.

So, if you can come up with such a theory, by all means use it. But I wouldn’t go assuming the existence of such a thing until you’ve actually found it. Instead I would suggest specially constructing a system to accomplish your goals rather than reaching for something which sounds nice but is complete overkill.

Edit: And no, you can’t fix the problem by just relaxing the requirements for convergence. Then you really would get the non-uniqueness problem that Oscar_Cunningham points out. One obvious possibility that springs to mind is to break ties by least birthday; that’s a very surreal approach to things. (Don’t take the supremum of a set S, instead just take {S|}.) So 1+1+1+… really would converge to omega rather than something else, and Oscar_Cunningham’s example really would converge to 2. But it’s not clear to me that this would work nicely at all; in particular, you still have the pathology that dropping the initial “1” of 1+1+1+… somehow doesn’t cause the sum to drop by 1. Maybe something to explore, but not something to assume that it works. (I personally wouldn’t bet on it, though that’s not necessarily worth much; I am hardly an expert in the area.)

Of course, I think the best system here really is the real numbers, or rather the extended nonnegative real numbers. It only has one undifferentiated infinity, satisfying infinity-1=infinity, so we don’t have the problem that 1+1+1+1+… should converge to both infinity and infinity-1. It has the least upper bound property, so infinite sums (of positive things) are guaranteed to converge (possibly to infinity)—this really is what forces the real numbers on us. There really is a reason integration is done with real numbers. (I for one would actually argue that utility should be bounded, but that’s an entirely separate argument.) Surreals, by contrast, aren’t just a bad setting for limits; they’re possibly the worst setting for limits.

Arrgh.

Yeah, this is basically going to kill this, isn’t it. Oh well. Oops.

… yeah, if we’re going to use tiered values we might as well just explicitly make them program tiers, instead of bringing in a whole class’ worth of mathematical complication we don’t really need.

Well. Thanks! I can officially say I was less wrong than I was this morning.

Btw one thing worth noting if you really do want to work with surreals is that it may be more productive to think in terms of { stuff | stuff } rather than limits. (Similar to my “break ties by least birthday” suggestion.) Sequences don’t have limits in the surreals, but there is nonetheless a theory of surreal exponentiation based on {stuff | stuff}. Integration… well, it’s less obvious to me that integration based on limits should fail, but if it does, you could try to do it based on {stuff | stuff}. (The existing failed attempt at a theory of surreal integration takes that approach, though as I said above, that’s not really the same thing, as that’s for functions with the surreals as the domain.)

The extended non-negative reals really don’t do what the OP was looking for. They won’t even allow you to trade 1 life to save 10,000 lives, let alone have a hierarchy of values, some of which are tradable against each other and some of which are not.

Indeed, they certainly don’t. My point here isn’t “here is how you fix the problem with limits while still getting the things OP wanted”. My point here is “here is how you fix the problem with limits”. I make no claim that it is possible or desirable to get the things OP wanted. But yes I suppose it is possible that there may be some way to do so without completely screwing up limits, if we use a weird notion of limits.

Going back to the (extended) reals that do nothing interesting doesn’t strike me as a meaningful way of “fixing the problem with limits” in this context, when everybody knows that limits work for those… It doesn’t really fix any problem at all, it just says you can’t do certain things (namely, go beyond the (extended) reals) because that makes the problem come up.

Yes, that’s kind of my point. I’m not trying to do what the OP wanted and come up with a system of infinities that work nicely for this purpose. I’m trying to point out that there are very good reasons that we usually stick to the extended reals for this, that there are very real problems that crop up when you go beyond it, and that become especially problematic when you jump to the end and go all the way to the surreals.

I’m not trying to fix problems raised in the original post; I’m trying to point out that these are serious problems that the original post didn’t acknowledge—and the usual way we fix these is just not going beyond the extended reals at all so that they don’t crop up in the first place, because these really are serious problems. The ultimate problem here is coming up with a decision theory—or here just a theory of utility—and in that context, fixing the problem by abandoning goals that aren’t satisfiable and accepting the trivial solution that is forced on you is still fixing the problem. (Depending on just what you require, sticking to the extended reals may not be totally forced on you, but it is hard to avoid, and this is a problem that the OP needs to appreciate.)

The point isn’t “this is how you fix the problem”, the point is “take a step back and get an appreciation for the problem and for what you’re really suggesting before you go rushing ahead like that”. The point isn’t “limits work in the extended reals”, the point is “limits work a lot less well if you go beyond there”. I personally think the whole idea is misguided and utilities should be bounded; but that is a separate argument. But if the OP really does want a viable theory along the lines he’s suggesting here even more than he wants the requirements that force the extended reals on us, then he’s got a lot more work to do.

Off the top of my head, if the surreals don’t allow taking limits, the obvious mathematical move is to extend them so that they do (cf. rationals and reals). Has anyone done this?

I don’t think that’s really possible here. In general if you have an ordered field, there is a thing you can do called “completing” it, but I suspect this doesn’t really do what you want. Basically it adds in all limits of Cauchy nets, but all those sequences that stopped being convergent because you tossed in infinitesimals? They’re not Cauchy anymore either. If you really want limits to work great, you need the least upper bound property, and that takes you back to the reals.

Of course, we don’t necessarily need anything that strong—we don’t necessarily need limits to work as well as in the reals, and quite possibly it’s OK to redefine “limit” a bit. But I don’t think taking the completion solves the problem you want.

(I suppose nothing’s forcing us to work with a field, though. We could perhaps solve the problem by moving away from there.)

As for the question of completing the surreals, independent of whether this solves the problem or not—well, I have no idea whether anyone’s done this. Offhand thoughts:

You’re working with surreals, so you may have to worry about foundational issues. Those are probably ignorable though.

The surreals may already be complete, in the trivial sense that it is impossible to get a net to be Cauchy in a nontrivial manner.

Really, if we want limits for surreals, we need to be taking limits where the domain isn’t a set. Like I said above, limits of surreal functions of surreals should work fine, and it’s maybe possible to use this to get integration to work too. If you do this I suspect offhand any sort of completion will just be unnecessary (I could be very wrong about that though).

Which is the thing—if we want to complete it in a nontrivial sense, does that mean we’re going to have to allow “nets” with a proper class domain, or… uh… how would this work with filters? Yikes. Now you’re running into some foundational issues that may not be so ignorable.

Maybe it’s best to just ignore limits and try to formulate things in terms of {stuff | stuff} if you’re working with surreals.

I still think the surreals are an inappropriate setting.

From an ancestor:

And from current:

When you add infinites and infinitesimals to the reals (in the ordinary way, I haven’t worked out what happens for the surreals), then you can still have limits and Cauchy sequences, you just have to also let your sequences be infinitely long (that is, not just having infinite total length, but containing members that are infinitely far from the start). This is what happens with non-standard analysis, and there are even theorems saying that it all adds up to normality.

But I agree that surreals are not right for utilities, and that reals are (conditional on utilities being right), and that even considering just the pure mathematics, completing the surreals in some way would likely involve foundational issues.

What on earth is the “ordinary way”? There are plenty of ways and I don’t know any of them to be the ordinary one. Do you mean considering the hyperreals?

What? How does that help a sequence be Cauchy at all? If there are infinitesimals, the elements will have to get infinitesimally close; what they do at the start is irrelevant. Whether or not it’s possible for sequences to converge at all depends (roughly, I’m deliberately being loose here) on just how many infinitesimals there are.

I’ll admit to not being too familiar with non-standard analysis, but I’m not sure these theorems actually help here. Like if you’re thinking of the transfer principle, to transfer a statement about sequences in R, well, wouldn’t this transfer to a statement about functions from N* to R*? Or would that even work in the first place, being a statement about functions? Those aren’t first-order...

The hyperreals I’m pretty sure have enough infinitesimals that sequences can’t converge (though I’ll admit I don’t remember very well). This isn’t really that relevant to the hyperreals, though, since if you’re doing non-standard analysis, you don’t care about that; you care about things that have the appropriate domain and thus can actually transfer back to the reals in the first place. You don’t want to talk about sequences; you want to talk about functions whose domain is some hyper-thing, like the hyper-naturals. Or maybe just hyper-analogues of functions whose domain is some ordinary thing. I’ll admit to not knowing this too well. Regardless, that should get around the problem, in much the same way as in the surreals: if the domain is the surreals, it should largely get around the problem...

Sorry, I think of non-standard analysis as being “the ordinary way” and the surreals as “the weird way”. I don’t know any others.

Yes, you get non-standard sequences indexed by N* instead of N, although what you actually do, which was the point of NSA, is express theorems about limits differently: if this is infinitesimal, that is infinitesimal.

I just thought of Googling “surreal analysis”, and it turns out to be a thing, with books. So one way or another, it seems to be possible to do derivatives and integrals in the surreal setting.

Well, R is the largest Archimedean ordered field, so any ordered extension of R will contain infinitesimals. The trivial way is just to adjoin one; e.g., take R[x] and declare x to be lexicographically smaller (or larger) than any element of R, and then pass to the field of fractions. Not particularly natural, obviously, but it demonstrates that saying “add infinitesimals” hardly picks out any construction in particular.

(FWIW, I think of surreals as “the kitchen sink way” and hyperreals as “that weird way that isn’t actually unique but does useful things because theorems from logic say it reflects on the reals”. :) )

If I’m not mistaken, I think that’s just how you would express limits of reals within the hyperreals; I don’t think you can necessarily express limits within the hyperreals themselves that way. (For instance, imagine a function f: R* -> R* defined by “If x is not infinitesimal, f(x)=0; otherwise, f(x)=1/omega” (where omega denotes (1,2,3,...)). Obviously, that’s not the sort of function non-standard analysts care about! But if you want to consider the hyperreals in and of themselves rather than as a means to study the reals (which, admittedly, is pretty silly), then you are going to have to consider functions like that.)

Oh, yes, I’ve seen that book, I’d forgotten! Be careful with your conclusion, though. Derivatives (just using the usual definition) don’t seem like they should be a problem offhand, but I don’t think that book presents a theory of surreal integration (I’ve seen that book before, and I feel like I would have remembered that, since I only remember a failed attempt). And I don’t know how general what he does is; for instance, the definition of e^x he gives only works for infinitesimal x (not an encouraging sign).

I’ll admit to being pretty ignorant as to what extent surreal analysis has advanced since then, though, and to what extent it’s based on limits vs. on {stuff | stuff}. I was trying to look up everything I could related to surreal exponentiation a while ago (which led to the MathOverflow question linked above), but that’s not exactly the same thing as infinite series or integrals...

I think you just have to look at the collection of Cauchy sequences, where “sequence” means a function from the ordinals to the surreals, and “Cauchy” means that the differences between terms eventually get smaller than any positive surreal.

I’d be skeptical of that assertion. Even sticking to ordinary topology on actual sets, transfinite sequences are not enough to do limits in general; in general you need nets. (Or filters.) That doesn’t mean you’ll need them here; the fact that the surreals are linearly ordered might help. But I don’t think it’s something you should assume would work.

But yeah, it does seem like you’ll need something able to contain a “sequence” whose order type is that of the class of all ordinals, quantifying over ordinals or surreals or something in the “domain”. (Like, as I said above, limits of surreal-valued functions of a surreal variable shouldn’t pose a problem.)

In any case, sequences or nets are not necessarily the issue. This still doesn’t help with infinite sums, because those are still just ordinary omega-sequences. But really the issue is integration; infinite sums can be ignored if you can get integration. Does the “domain” there have sufficient granularity? Well, uh, I don’t know.

Anyone new to this page: I’m basically talking about Hausner utilities, except with surreal numbers needlessly slapped on.

Could utilities be multi-dimensional? Real vector spaces are much nicer to work with than surreal numbers.

For example, the utility for Frank being alive would be (1,0), while the utility for a seat cushion would be (0,1). Using lexicographic ordering, (1,0) > (0,3^^^3).
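For what it’s worth, the lexicographic ordering described above is exactly how Python compares tuples, so a toy sketch needs no extra machinery (3**3 stands in for 3^^^3, which is far too large to compute):

```python
# Utilities as (lives, cushions) pairs; tuples compare lexicographically,
# so the first coordinate decides before the second is ever consulted.
frank_alive = (1, 0)        # Frank alive, zero cushions
cushion_pile = (0, 3**3)    # no Frank, a pile of cushions (stand-in for 3^^^3)

assert frank_alive > cushion_pile  # one life outranks any number of cushions
```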

Vector valued utility functions violate the VNM axiom of continuity, but who cares.

Surreal valued ones do too. Violating the VNM axiom of continuity is the whole point of the exercise. We don’t want a secular value to be worth any non-zero probability of a sacred value, but we do want it to be better than nothing.

I give seat cushions zero value. I give the comfort they bring me zero value. The only valuable thing about them is the happiness they bring from the comfort. Unless the nanofab can make me as happy as my current happiness plus Frank’s combined, nothing it makes will be worth it. It probably could, but that’s not the point.

As for the idea of surreal utilities, there’s nothing wrong with it in principle. The axiom they violate isn’t anything particularly bad to violate. The problem is that, realistically speaking, you might as well just round infinitesimal utility down to zero. If you consider a cushion to be worth infinitesimally many lives, then if you’re given a choice that gives you an extra cushion and has zero expected change in the number of lives, you’d take it. But you won’t get that choice. You’ll get choices where the expected change in number of lives is very small, but the expected value from lives will always be infinitely larger than the expected value from cushions.
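The point that the lower tier only ever matters on an exact tie can be made concrete with the same tuple trick; a minimal sketch with made-up expected values:

```python
# Expected utilities as (expected lives, expected cushions).
# Option A: one guaranteed cushion, exactly zero expected effect on lives.
# Option B: a vanishingly small expected gain in lives, no cushion.
option_a = (0.0, 1.0)
option_b = (1e-12, 0.0)

# Lexicographic comparison: the lives coordinate decides unless it ties
# exactly, so even a tiny expected gain in lives beats a sure cushion.
assert max(option_a, option_b) == option_b
```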

See: Flaws. This is the same problem as with Pascal’s Mugging, really; it doesn’t go away when you switch to reals, it just requires weirder (but still plausible) situations.

Seat cushions are meant to be a slightly humorous example. Omega can also hook you up with infinite Fun, which was in the post that, I’m quickly realizing, could use a rewrite.

In that case I’d pick the Fun. I accept the repugnant conclusion and all, but the larger population still has to have more net happiness than the smaller one.

*shrug* I *did* list that as a separate tier. Surreal Utilities are meant to be a way to formalize tiers; the actual result of the utility-computation depends on where you put your tiers.

The point of this post is to show that humans really do have tiers, and surreals do a good job of representing tiers; the question of how to assign utilities is an open one.

How do you know humans have tiers? The situation has never come up before. We’ve never had the infinite coincidence where the value at the highest tier is zero.

Also, why does it matter? It’s never going to come up either. If you program an AI to have tiers, it will quickly optimize that out. Why waste processing power on lower tiers if it has a chance of helping with the higher ones?

See: *gedankenexperiment*. I can guess what I’d choose given a blank white room.

And that is a flaw in the system. But it’s one that real-valued utility systems have as well. See: Pascal’s Mugging. An AI vulnerable to Pascal’s Mugging will just spend all its time breaking free of a hypothetical Matrix.

I *did* mention this under Flaws, you know... I would like to point out that Fun was listed as a separate tier, and that whether or not to put it on the same tier as a human life is entirely up to you. Surreal utilities aren’t much of a *decision theory*; they’re just a way to *formalize* tiered values. The actual decision you make depends entirely on the values you assign by some other method.

To me there is a very big difference between 0 probability and an exact infinitesimal probability, and I disagree that it is obvious they suffer from the same problems.

For example, if I have a unit line and choose some particular point, the probability of picking some exact point is epsilon. If I were to pick a point from a unit square, the probability would be smaller by another factor of epsilon, for a total of epsilon*epsilon. If I were to pick a point from a line of length 2, the probability would only be half that, for a total of epsilon/2.

Where uses of infinitesimal probabilities often fail is in not specifying which infinitesimal is meant, and treating them all as the same one. It is not the case that if multiplying an amount finitely many times never yields a finite amount, then all such amounts must be equal. If I multiply epsilon by the first-order infinite, I get a finite 1. If I multiply epsilon*epsilon by the first-order infinite, I get a positive amount that is still not finite (exactly epsilon).
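This comment’s arithmetic can be sketched with a toy one-term model that tracks a coefficient and an order of epsilon; the `Eps` class is a hypothetical helper invented for illustration, not any standard library:

```python
from fractions import Fraction

class Eps:
    """Toy model of coeff * epsilon**order: order 0 is finite, positive
    orders are infinitesimal, negative orders are infinite."""
    def __init__(self, coeff, order=0):
        self.coeff, self.order = Fraction(coeff), order
    def __mul__(self, other):
        return Eps(self.coeff * other.coeff, self.order + other.order)
    def __truediv__(self, other):
        return Eps(self.coeff / other.coeff, self.order - other.order)

eps = Eps(1, 1)          # P(exact point on a unit line)
square = eps * eps       # P(exact point in a unit square): eps**2
half = eps / Eps(2, 0)   # P(exact point on a line of length 2): eps/2
omega = Eps(1, -1)       # the first-order infinite, 1/eps

assert (eps * omega).order == 0     # eps * omega is finite (exactly 1)
assert (square * omega).order == 1  # eps**2 * omega is still infinitesimal
```

The model keeps the distinct infinitesimals distinct, which is exactly the point: eps, eps**2, and eps/2 all behave differently under multiplication by omega.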

The impact of infinite or infinitesimal probabilities can largely be reproduced by rules to the same effect. An example would be distinguishing between “pure” 0 and “almost never”, and between “pure” 1 and “almost always”. For the practical effect they might have, consider darts. There are various probabilities concerning which sector the dart lands in and, for example, whether it lands on a line dividing areas. But the numbers being passed around concerning lines will live a life largely separate from the math done for the areas. Now I can either take that separateness as a known fact outside the analysis, or have the analysis show the separateness itself.

And there will be multiple types of zero probabilities. For example, given that the board was hit, the probability of the dart not hitting any specific area, line separating areas, or intersection between lines is zero. However, if I throw a dart, I know I should not expect to hit that exact spot again during the evening; the probability of its recurrence is an “impure” zero. The dart can still land there, and it won’t magically avoid that spot. And no matter how many darts I throw, the probability of hitting an old spot increases, but I am never actually expecting to hit one. If I notice that my probability of hitting an area divider or a line intersection is vanishing, in practice I know to focus on the area ratios; but I won’t accuse someone of lying if they report a single such occurrence during the time I know them. However, if they report 2 such occurrences, I have reason to be suspicious.

I am aware of how infinitesimals work. However, consider Bayes’ theorem: If you have an infinitesimal prior, you have to find evidence weighted ω:1 in order to end up with a real posterior probability.
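In odds form, the point is just multiplication: an infinitesimal prior needs an omega-sized likelihood ratio to reach any real posterior. A sketch using a huge finite number as a stand-in for omega:

```python
from fractions import Fraction

def posterior_odds(prior_odds, likelihood_ratio):
    # Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio.
    return prior_odds * likelihood_ratio

prior = Fraction(1, 10**30)  # stand-in for an infinitesimal prior

# Ordinary (million-to-one) evidence barely moves such a prior...
assert posterior_odds(prior, 10**6) == Fraction(1, 10**24)
# ...only omega-scale evidence yields a real posterior (here, even odds).
assert posterior_odds(prior, 10**30) == 1
```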

While you might not kill Frank to get the machine, there has to be some small epsilon, such that you would take the machine in exchange for an additional probability epsilon of Frank dying today. Wouldn’t Frank agree to that trade? Wouldn’t you agree to a small additional probability of dying yourself in exchange for the machine?

Otherwise, living in a big white room is going to be a bit—ahem—dull for both of you.

I agree there is a difficulty here for any utility function. The machine can make unlimited quantities of secular goods, so if 3^^^3 beautiful paintings are worth probability epsilon of Frank dying, why aren’t 4^^^^4 even more beautiful paintings worth probability 1 of Frank dying? Presumably because Frank would accept the former trade, but not the latter one.

Probably not, in a white room. That sort of risk trade-off makes sense in the real world, but a flat-out trade of a small chance of Frank’s death for a secular value doesn’t make sense to me in a white room.

That’s much the point of a sacred value: it doesn’t matter how much I’d have to give up, a life is worth it.

This is, by the way, how I’d like for an FAI to think. Don’t worry about giving us fancy books until after we’re all as close to immortal as possible, thanks; I’d rather wait an extra year for my Fun life than lose a few more thousand lives.

So you wouldn’t accept the trade *yourself*, i.e., a small risk of you dying so that both you and Frank get to use the machine and have an enjoyable life? You’d prefer a dull life over any increased risk of death? Interesting that you bite that bullet.

I’d like to see exactly how this is dis-analogous from real life. Clearly you use electronic items to access the Internet, which comes with some small risk of electrocuting yourself. What’s the difference?

Some other thought experiments along these lines:

There are a billion people in the room, and the trade is that just one of them gets killed, and all the others get to use the wonderful machine. Or each of them has a 1 in a billion chance of getting killed (so it might be that everyone survives, or that a few people die). Is there any moral difference between these conditions? Does *everyone* have to consent to these conditions before *anyone* can get the machine?

The machine is already in the room, but it just happens to have an inherent small risk of electrocuting people nearby when it is switched on. That wasn’t any sort of “trade” or “condition” imposed by Omega; the machine is just like that. Is it OK to switch it on?

’Cause in real life, if I didn’t use a computer, I would massively increase my chances of starving, having no other marketable skills.

In fact, in real life this almost never comes up, because the tiny chance of you outright dying is outweighed by practical concerns. Hence the white room, so I can take out all the actual consequences and bring in a flat choice. (Though apparently, I didn’t close all the loopholes; admittedly, some of them are legitimate concerns about what a human life actually *means*.)

At any rate, while my personal opinion is apparently shifting towards “never mind, lives have a real value after all” (my answers would be “yes to unanimous consent, no to unanimous consent, and yes it would be,” which implies a rather large Oops!), there are still plenty of places where it makes sense to draw a tier. Unfortunately, surreals turned out to be a terrible choice for such things purely for mathematical reasons, so if I ever try this again it will be with flat-out program classes named Tiers.

Actually, before I completely throw up my hands, I should probably figure out what seems different between the one-on-one trade and the billion-to-one trade that changes my answers...

Oh, I see. It’s the tiering again, after all. The infinite Fun is itself a second-tier value; whether or not it’s on the same tier as a life is its own debate, but a billion things possibly-equal-to-a-life are more likely to outcompete a life than a single one.

… of course, if you replace “infinite Fun” with “3^^^^3 years of Fun,” the tiering argument vanishes but the problem might not. Argh, I’m going to have to rethink this.

I decided some time ago that I don’t really care about morality, because my revealed preferences say I care a lot more about personal comfort than saving lives, and I’m unwilling to change that. I don’t think I’d be willing to spend £50 to save the life of an anonymous stranger I’d never meet, if I found out about a charity that efficient; so for the purposes of a thought experiment, I should also be willing to kill Frank for such a small amount of money, assuming social and legal consequences are kept out of the way by Omega, and the utility of possibly befriending Frank isn’t taken into account.

That aside, though, I think taking the nanofab is actually the morally right choice. Two lives spent in an uncomfortable featureless room are worth significantly less than one life spent as a nigh-omnipotent god. I’m not sure if letting/making Frank continue to live in the uncomfortable featureless room is even of positive utility to him. If I knew there wasn’t any more to life than the featureless uncomfortable room, I would be contemplating suicide fairly quickly.

That’s not an accurate representation of how humans value sacred values. There are cases where people prefer getting X sacred utilons over getting X sacred utilons + Y secular utilons.

*Emerging sacred values: Iran’s nuclear program* by Morteza Dehghani is a good read to get a sense of how sacred values behave.

Sacred values prevent corruption.

True—but I’d deem such a choice irrational, and motivated more by the desire not to appear “money-grubbing” than by an actual belief that X > X+Y.

I think there is quite some value in having sacred beliefs, if you can demonstrate to other people that those beliefs are sacred.

Take a politician who thinks that solar subsidies are a good thing and who pushes for a law to that effect. Then a company manufacturing solar cells offers to give him $10,000 with no strings attached. $10,000 is utility for the politician, but he shouldn’t just accept the money and put it into his own pocket, even if he can do it in a way where nobody will notice.

There is value in the politician following a decision framework where he precommits against accepting certain kinds of utility. From a TDT perspective, that might be the correct strategy.

Thank you, if for nothing else, for clarifying my intuitive sense that Dust Specks are superior to Torture. Your thought experiment clarified to me that tiers of utility DO match my value system.

An alternate title for this post was “Surreal Utilities and Seat Cushions.”

On a side note—I am not entirely sure what tags to apply here, and I couldn’t seem to find an exhaustive tag list (though I admittedly didn’t work very hard).