Revisiting the Anthropic Trilemma II: axioms and assumptions

tl;dr: I present four axioms for anthropic reasoning under copying/deleting/merging, and show that they pick out a unique way of doing it: averaging non-indexical utility across copies, adding indexical utility, and making all copies mutually altruistic.

Some time ago, Eliezer constructed an anthropic trilemma, where standard theories of anthropic reasoning seemed to come into conflict with subjective anticipation. rwallace subsequently argued that subjective anticipation was not ontologically fundamental, so we should not expect it to work out of the narrow confines of everyday experience, and Wei illustrated some of the difficulties inherent in “copy-delete-merge” types of reasoning.

Wei also made the point that UDT shifts the difficulty in anthropic reasoning away from probability and onto the utility function, and ata argued that neither the probabilities nor the utility function are fundamental: it is the decisions that result from them that matter—after all, if two theories give the same behaviour in all cases, what grounds do we have for distinguishing them? I then noted that this argument could be extended to subjective anticipation: instead of talking about feelings of subjective anticipation, we could replace them with questions such as “would I give up a chocolate bar now for one of my copies to have two in these circumstances?”

I then made a post where I applied my current intuitions to the anthropic trilemma, and showed how this results in complete nonsense, despite the fact that I used a bona fide utility function. What we need are some sensible criteria for how to divide utility and probability between copies, and this post is an attempt to figure that out. The approach is similar to that of expected utility theory, where a quartet of natural axioms forces all decision processes into a single format.

The assumptions are:

  1. No intrinsic value in the number of copies

  2. No preference reversals

  3. All copies make the same personal indexical decisions

  4. No special status to any copy

The first assumption states that though I may want to have a different number of copies for various external reasons (multiple copies to be well backed up, or few copies to prevent any of them from being kidnapped), I do not derive any intrinsic utility from having 1, 42 or 100 000 copies. The second one is the very natural requirement that there are no preference reversals: I would not pay anything today to have any of my future copies make a different decision, nor vice versa. The third says that all my copies will make exactly the same decision as me in purely indexical situations (“Would Monsieur prefer a chocolate bar or else coffee right now, or maybe some dragon fruit in a few minutes? How about the other Monsieur?”). And the fourth claims that no copy gets a special intrinsic status (this does not mean that the copies cannot have special extrinsic status; for instance, one can prefer copies instantiated in flesh and blood to those on computer; but if one does, then downloading a computer copy into a flesh and blood body would instantly raise its status).

These assumptions are all very intuitive (though the third one is perhaps a bit strong), and they are enough to specify uniquely how utility should work across copying, deleting, and merging.

Now, I will not be looking here at quantum effects, nor at correlated decisions (where several copies make the same identical decision). I will assume throughout that I and all of my copies are expected utility maximisers, and that my utility decomposes into a non-indexical part about general conditions in the universe (“I’d like it if everyone in the world could have a healthy meal every day”) and an indexical part pertaining to myself specifically (“I’d like a chocolate bar”).

The copies need not be perfectly identical, and I will be using the SIA probabilities. Since each decision is a mixture of probability and utility, I can pick the probability theory I want, as long as I’m aware that those using different probability theories will have different utilities (but ultimately the same decisions). Hence I’m sticking with the SIA probabilities simply because I find them elegant and intuitive.
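For concreteness: SIA weights each possible world by the number of my copies it contains. In the display below, the notation is mine rather than anything from the earlier posts: N_w is the number of my copies in world w, and P(w) is the ordinary non-anthropic prior.

$$P_{\text{SIA}}(w) \;=\; \frac{N_w \, P(w)}{\sum_{v} N_v \, P(v)}$$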

Then the results are:

  • All copies will have the same non-indexical utility in all universes, irrespective of the number of copies.

Imagine that one of my copies is confronted with Omega saying: “currently, there is either a single copy of you or, with probability p, n copies. I have chosen only one copy of you to say this to. If you can guess whether there are n copies or one in this universe, then I will (do something purely non-indexical).” The SIA odds state that the copy being talked to will put a probability p on there being n copies (the SIA boost from there being n copies is exactly cancelled by the fact that only one copy is being talked to). From my current perspective, I would therefore want that copy to reason as if its non-indexical utility was the same as mine, irrespective of the number of copies. Therefore, by no preference reversals, it will have the same non-indexical utility as mine, in both possible universes.
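To make the cancellation explicit, here is a minimal sketch in Python; the values p = 0.3 and n = 100 are arbitrary choices of mine:

```python
# Omega scenario: with probability p there are n copies, else a single copy.
# Omega addresses exactly one copy. What should that copy believe?
p, n = 0.3, 100  # arbitrary illustrative values

# SIA weights each world by its number of copies; being the one copy
# addressed is then a 1/n selection within the n-copy world.
w_many = p * n * (1 / n)    # n-copy world, and I am the addressed copy
w_single = (1 - p) * 1 * 1  # single-copy world, trivially the addressed copy

posterior_many = w_many / (w_many + w_single)
print(posterior_many)  # p: the n-fold SIA boost is cancelled by the 1/n selection
```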

  • All copies will have a personal indexical utility which is non-zero. Consequently, my current utility function has a positive term for my copies achieving their indexical goals.

This is simply because the copies will make the same purely indexical decisions as me, and must therefore have a term for this in their utility function. If they do, then since utility is real-valued (and not non-standard real valued), they will in certain situations make a decision that increases their personal indexical utility and diminishes their (and hence my) non-indexical utility. By no preference reversals, I must approve of this decision, and hence my current utility must contain a term for my copy’s indexical utility.

  • All my copies (and myself) must have the same utility function, and hence all copies must care about each other copy’s personal indexical utility exactly as much as that copy cares about its own.

It’s already been established that all my copies have the same non-indexical utility. If the copies had different utilities for the remaining component, then one could be offered a deal that increased their own personal indexical utility and decreased that of another copy, and they would take this deal. We can squeeze the benefit side of this deal: offer them arbitrarily small increases to their own utility, in exchange for the same decrease in another copy’s utility.

Since I care about each copy’s personal indexical utility, at least to some extent, such a deal will eventually be to my disadvantage, once the increase gets small enough. Therefore I would want that copy to reject the deal. The only way of ensuring that it would do so is to make all copies (including myself) share the same utility function.
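A toy version of the squeeze, with weights and numbers that are purely illustrative (the argument only needs my weight on the other copy to be positive):

```python
# Copy A is offered +eps to its own indexical utility in exchange
# for a fixed loss of delta to copy B's indexical utility.
w_A, w_B = 1.0, 0.5  # illustrative weights my current utility puts on A and B
delta = 1.0          # fixed loss imposed on copy B

def my_utility_change(eps):
    """Net change in my current utility if copy A accepts the deal."""
    return w_A * eps - w_B * delta

# A copy that ignored B's utility would accept for every eps > 0, but from
# my perspective the deal goes bad once eps < (w_B / w_A) * delta:
for eps in (1.0, 0.6, 0.5, 0.1):
    print(eps, my_utility_change(eps))  # negative values mark a preference reversal
```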

So, let’s summarise where we are now. We’ve seen that all my copies share the same non-indexical utility. We’ve also established that they have a personal indexical utility that is the same as mine, and that they care about each other copy’s personal indexical utility exactly as much as that copy does itself. So, strictly speaking, there are two components: the shared non-indexical utility, and a “shared indexical” utility, made up of some weighted sum of each copy’s “personal indexical” utility.

We haven’t assumed that the weighting is equal, nor what the weights are. Two intuitive ideas spring to mind: an equal average, and a total utility.

For an equal average, we assign each copy a personal indexical utility that is equal to what mine would be if there were not multiple copies, and the “shared indexical” utility is the average of these. If there were a hundred copies about, I would need to give them each a chocolate bar (or give a hundred chocolate bars to one of them) in order to get the same amount of utility as a single copy of me getting a single bar. This corresponds to the intuition “duplicate copies, doing the same thing, don’t increase my utility”.
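As a quick sanity check of that bookkeeping (assuming, purely for illustration, a linear one-util-per-bar utility):

```python
# Equal-average rule: shared indexical utility is the mean of the
# copies' personal indexical utilities (1 util per chocolate bar assumed).
def shared_indexical_average(personal_utils):
    return sum(personal_utils) / len(personal_utils)

print(shared_indexical_average([1]))               # one copy, one bar: 1.0
print(shared_indexical_average([1] * 100))         # a hundred copies, a bar each: 1.0
print(shared_indexical_average([100] + [0] * 99))  # a hundred bars to one copy: 1.0
```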

For total utility, we assign each copy a personal indexical utility that is equal to what mine would be if there were not multiple copies, and the “shared indexical” utility is the total of these. If each of my hundred copies gets a chocolate bar, this is the same as if I had a single copy, and he got a hundred bars. This is a more intuitive position if we see the copies as individual people. I personally find this less intuitive; however:

  • My copies’ “shared indexical” utility (and hence mine) is the sum, not the average, of what each individual copy would have if it were the only copy in existence.

Imagine that there is one copy now, that there will be n extra copies made in ten minutes, and that these will all be deleted in twenty minutes. I am confronted with situations such as “do you want to make this advantageous deal now, or a slightly less/more advantageous deal in 10/20 minutes?” By “all copies make the same purely indexical decisions”, I would want to delay if, and only if, that is what I would want to do if there were no extra copies made at all. This is only possible if my personal indexical utility is the same throughout the creation and destruction of the other copies. Since no copy is special, all my copies must have the same personal indexical utility, irrespective of the number of copies. So their “shared indexical” utility must be the sum of these.
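Here is a toy comparison of the two rules on that delay question; the numbers are mine, and a bar is again worth 1 util:

```python
# One copy now; n extra copies exist during the ten-to-twenty-minute window.
# Deal: 1.0 utils now, or a slightly better 1.1 utils during the window.
n = 9
now, later = 1.0, 1.1

# With no copies at all I would delay, since 1.1 > 1.0. The sum rule agrees;
# the average rule dilutes the later payoff across the extra copies.
sum_delays = later > now            # True: matches the no-copies choice
avg_delays = later / (n + 1) > now  # False: a preference reversal

print(sum_delays, avg_delays)
```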

Thus, given those initial axioms, there is only one consistent way of spreading utility across copies (given SIA probabilities): non-indexical utility must average, personal indexical utility must add, and all copies must share exactly the same utility function.
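Putting the pieces together, a minimal sketch of the resulting utility function; the two-part decomposition is from the post, but the function name and the linear form are my illustration:

```python
# The unique rule under the four axioms (with SIA probabilities):
# every copy, and my present self, evaluates a world-state the same way.
def copy_utility(non_indexical, personal_indexicals):
    """non_indexical: a single value, identical for every copy (it "averages");
    personal_indexicals: one value per currently existing copy, summed."""
    return non_indexical + sum(personal_indexicals)

print(copy_utility(10.0, [1.0]))      # one copy with one bar: 11.0
print(copy_utility(10.0, [1.0] * 3))  # three copies, a bar each: 13.0
```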

In the next post, I’ll apply this reasoning to the anthropic trilemma, and also show that there is still hope—of a sort—for the more intuitive “average” view.