Updateless anthropics

Three weeks ago, I set out to find a new theory of anthropics, to try and set decision theory on a firm footing with respect to copying, deleting copies, merging them, correlated decisions, and the presence or absence of extra observers. I’ve since come full circle, and realised that UDT already has a built-in anthropic theory that resolves a lot of the problems that had been confusing me.

The theory is simple, and is essentially a rephrasing of UDT: if you are facing a decision X, and trying to figure out the utility of X=a for some action a, then calculate the full expected utility of X being a, using the objective probability of each world (including the worlds in which you don’t exist).

As usual, you have to consider the consequences of X=a for all agents who will make the same decision as you, whether they be exact copies, enemies, simulations or similar-minded people. However, your utility function has to do more work than is usually realised: notions such as selfishness or altruism with respect to your copies have to be encoded in the utility function, and will result in substantially different behaviour.
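If it helps, the rule can be sketched in a few lines of Python; the names and the way worlds are represented below are just illustrative placeholders, not part of the theory itself:

    # Sketch of the updateless rule: score each action by its expected utility
    # over all possible worlds, weighted by their objective probabilities,
    # without first updating on which agent you happen to be.

    def updateless_value(action, worlds):
        # worlds: list of (objective_probability, utility_of_action) pairs,
        # where utility_of_action(a) is the utility you assign to that world
        # if every agent correlated with you picks action a.
        return sum(p * u(action) for p, u in worlds)

    def best_action(actions, worlds):
        return max(actions, key=lambda a: updateless_value(a, worlds))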

The rest of the post is a series of cases-studies illustrating this theory. Utility is assumed to be linear in cash for convenience.

Sleeping with the Presumptuous Philosopher

The first test case is the Sleeping Beauty problem.


In its simplest form, this involves a coin toss: if it comes out heads, one copy of Sleeping Beauty is created; if it comes out tails, two copies are created. Then the copies are asked at what odds they would be prepared to bet that the coin came out tails. You can assume either that the different copies care for each other in the manner I detailed here, or more simply that all winnings will be kept by a future merged copy (or an approved charity).

Then the algorithm is simple: the two worlds have equal probability. Let X be the decision where Sleeping Beauty decides between a contract that pays out $1 if the coin is heads, versus one that pays out $1 if the coin is tails. If X=“heads” (to use an obvious shorthand), then Sleeping Beauty will expect to make $1*0.5, as she is offered the contract once. If X=“tails”, then the total return of that decision is $1*2*0.5, as copies of her will be offered the contract twice, and they will all make the same decision. So Sleeping Beauty will bet at the SIA 2:1 odds of tails over heads.
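As a quick check of that arithmetic (assuming, as above, that all winnings end up valued equally, whether through copy-altruism or a merged successor):

    # Sleeping Beauty with shared/merged winnings.
    p_heads = p_tails = 0.5

    eu_heads_contract = p_heads * 1 * 1   # one copy is offered the contract
    eu_tails_contract = p_tails * 2 * 1   # two copies, who all decide alike

    print(eu_heads_contract, eu_tails_contract)   # 0.5 vs 1.0: 2:1 odds on tails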

Variants such as “extreme Sleeping Beauty” (where thousands of copies are created on tails) work out the same way; if it feels counter-intuitive to bet at thousands-to-one odds that a fair coin landed tails, that’s the fault of expected utility itself, as the rewards of being right dwarf the costs of being wrong.

But now let’s turn to the Presumptuous Philosopher, a thought experiment that is often confused with Sleeping Beauty. Here we have exactly the same setup as “extreme Sleeping Beauty”, but the agents (the Presumptuous Philosophers) are mutually selfish. Here the return to X=“heads” remains $1*0.5. However, the return to X=“tails” is also $1*0.5, since even if all the Presumptuous Philosophers in the “tails” universe bet on “tails”, each one will still only get $1 in utility. So the Presumptuous Philosopher should only accept the even SSA odds of 1:1 on the result of the coin flip.
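The contrast is easiest to see with the copy count as an explicit parameter (the 1000 below is arbitrary, standing in for “thousands of copies”):

    # Extreme case: N copies exist if the coin lands tails; every copy is
    # offered the $1 contract and they all make the same decision.
    N = 1000
    p_heads = p_tails = 0.5

    # Copy-altruistic Sleeping Beauty counts every copy's winnings:
    sb_heads = p_heads * 1        # 0.5
    sb_tails = p_tails * N * 1    # 500 -> bet on tails at SIA-style N:1 odds

    # A mutually selfish Presumptuous Philosopher only counts his own $1:
    pp_heads = p_heads * 1        # 0.5
    pp_tails = p_tails * 1        # 0.5 -> indifferent, SSA-style 1:1 odds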

So SB acts as if she follows the self-indication assumption (SIA), while the PP follows the self-sampling assumption (SSA). This remains true if we change the setup so that only one agent is given a betting opportunity in the tails universe. Then the objective probability of any particular agent being asked is low, so both SB and PP model the “objective probability” of the tails world, given that they have been asked to bet, as being low. However, SB gains utility if any of her copies is asked to bet and makes a profit, so the strategy “if I’m offered $1 if I guess correctly whether the coin is heads or tails, I will say tails” gets her $1*0.5 utility whether or not she is the specific one who is asked. Betting heads nets her the same result, so SB will give SIA 1:1 odds in this case.

On the other hand, the PP will only gain utility in the very specific world where he himself is asked to bet. So his gain from the updateless strategy “if I’m offered $1 if I guess correctly whether the coin is heads or tails, I will say tails” is tiny, as he’s unlikely to be asked to bet. Hence he will offer the SSA odds that make heads a much more “likely” proposition.
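Here is the same calculation for this variant, under the same assumptions about selfish versus copy-altruistic utility (and assuming the lone agent in the heads world still gets asked):

    # Variant: only one agent, chosen at random, is offered the $1 contract.
    # Heads world: a single agent exists and is asked. Tails world: one of N.
    N = 1000
    p_heads = p_tails = 0.5

    # SB profits whenever any copy wins the bet, whoever happens to be asked:
    sb_say_tails = p_tails * 1              # 0.5
    sb_say_heads = p_heads * 1              # 0.5 -> 1:1 odds

    # A selfish PP only profits if he personally is the one asked:
    pp_say_tails = p_tails * (1 / N) * 1    # 0.0005
    pp_say_heads = p_heads * 1 * 1          # 0.5  -> he bets heavily on heads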

The Doomsday argument

Now, using SSA odds brings us back into the realm of the classical Doomsday argument. How is it that Sleeping Beauty is immune to the Doomsday argument while the Presumptuous Philosopher is not? Which one is right; is the world really about to end?

Asking about probabilities independently of decisions is meaningless here; instead, we can ask what agents would decide in particular cases. It’s not surprising that agents with different preferences will reach different decisions on questions such as existential risk mitigation.

Let’s consider a very simplified model, where there are two agents in the world, and one of them is approached at random and asked whether they would pay $Y to add a third agent. Each agent derives a (non-indexical) utility of $1 from the presence of this third agent, and nothing else happens in the world to increase or decrease anyone’s utility.

First, let’s assume that each agent is selfish about their indexical utility (their cash in hand). If the decision is to not add a third agent, all will get $0 utility. If the decision is to add a third agent, then there are three agents in the world, and one of them will be approached to lose $Y. Hence the expected utility is $(1-Y/3).

Now let us assume the agents are altruistic towards each other’s indexical utilities. Then the expected utility of not adding a third agent is still $0. If the decision is to add a third agent, then there are three agents in the world, and one of them will be approached to lose $Y, but all of them will value that loss at its full amount. Hence the expected utility is $(1-Y).

So if $Y=$2, for instance, the “selfish” agents will add the third agent, and the “altruistic” ones will not. Generalising this to more complicated models of existential risk mitigation schemes, we would expect SB-type agents to behave differently from PP-type agents in most models. There is no sense in asking which one is “right” or which gives the more accurate “probability of doom”; instead, ask yourself which better corresponds to your own utility model, and hence what your decision will be.
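Plugging numbers into this toy model (the formulas are just the ones derived above, with $Y=$2 as the worked example):

    # Toy Doomsday model: an agent is asked to pay $Y to add a third agent,
    # and each agent gets $1 of non-indexical utility from that agent existing.

    def eu_of_adding(Y, altruistic):
        if altruistic:
            # Everyone fully values the $Y paid by whoever is approached.
            return 1 - Y
        # A selfish agent only expects to bear the $Y cost a third of the time.
        return 1 - Y / 3

    Y = 2
    print(eu_of_adding(Y, altruistic=False))  # ~0.33 > 0: selfish agents add
    print(eu_of_adding(Y, altruistic=True))   # -1 < 0: altruistic ones don't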

Psy-Kosh’s non-anthropic problem

Cousin_it has a rephrasing of Psy-Kosh’s non-anthropic problem to which updateless anthropics can be illustratively applied:

You are one of a group of 10 people who care about saving African kids. You will all be put in separate rooms, then I will flip a coin. If the coin comes up heads, a random one of you will be designated as the “decider”. If it comes up tails, nine of you will be designated as “deciders”. Next, I will tell everyone their status, without telling the status of others. Each decider will be asked to say “yea” or “nay”. If the coin came up tails and all nine deciders say “yea”, I donate $1000 to VillageReach. If the coin came up heads and the sole decider says “yea”, I donate only $100. If all deciders say “nay”, I donate $700 regardless of the result of the coin toss. If the deciders disagree, I don’t donate anything.

We’ll set aside the “deciders disagree” case and assume that you will all reach the same decision. The point of the problem is to illustrate a supposed preference inversion: if you coordinate ahead of time, you should all agree to say “nay” (which guarantees $700, versus an expected $550 for “yea”), but after you have been told you’re a decider, you should update towards the coin having come up tails and say “yea” (which now seems to be worth $910 in expectation).

From the updateless perspective, however, there is no mystery here: the strategy “if I were a decider, I would say nay” maximises utility both for the deciders and the non-deciders.

But what if the problem were rephrased in a more selfish way, with the non-deciders not getting any utility from the setup (maybe they don’t get to see the photos of the grateful saved African kids), while the deciders get the same utility as before? Then the strategy “if I were a decider, I would say yea” maximises your expected utility: since you only get any utility in worlds where you are a decider, the heads world (where you are probably not a decider) is discounted far more heavily than the tails world, and “yea” comes out ahead. This is similar to SIA odds, again.
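For concreteness, here is the arithmetic for both versions side by side (payoffs and probabilities as in the quoted problem):

    # Psy-Kosh's problem: heads -> 1 decider out of 10, tails -> 9 out of 10.
    # "Yea" pays $100 on heads and $1000 on tails; "nay" pays $700 either way.
    p = 0.5
    p_decider_heads, p_decider_tails = 1 / 10, 9 / 10

    # Shared utility: everyone cares about the donation.
    shared_yea = p * 100 + p * 1000                                       # 550
    shared_nay = p * 700 + p * 700                                        # 700 -> "nay"

    # Selfish variant: you only get utility in worlds where you're a decider.
    selfish_yea = p * p_decider_heads * 100 + p * p_decider_tails * 1000  # 455 -> "yea"
    selfish_nay = p * p_decider_heads * 700 + p * p_decider_tails * 700   # 350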

That second, selfish model is similar to the way I argued for SIA with agents getting created and destroyed. That post has been superseded by this one, which pointed out the flaw in the argument: roughly speaking, it failed to consider setups like Psy-Kosh’s original model. So once again, whether utility is broadly shared or not affects the outcome of the decision.

The Anthropic Trilemma

Eliezer’s anthropic trilemma was an interesting puzzle involving probabilities, copying, and subjective anticipation. It inspired me to come up with a way of spreading utility across multiple copies, which was essentially a Sleeping Beauty copy-altruistic model. The decision process that goes with it is the same as the updateless decision process outlined here. Though it was initially phrased in terms of SIA probabilities and individual impact, the isomorphism between the two can be seen here.