Anthropic Decision Theory IV: Solving Selfish and Average-Utilitarian Sleeping Beauty

A near-final version of my Anthropic Decision Theory paper is available on the arXiv. Since anthropics problems have been discussed quite a bit on this list, I’ll be presenting its arguments and results in this, previous, and subsequent posts: 1 2 3 4 5 6.

In the previous post, I looked at a decision problem when Sleeping Beauty was selfless or a (copy-)total utilitarian. Her behaviour was reminiscent of someone following SIA-type odds. Here I’ll look at situations where her behaviour is SSA-like.

Altruistic average utilitarian Sleeping Beauty

In the incubator variant, consider the reasoning of an Outside/Total agent who is an average utilitarian (and there are no other agents in the universe apart from the Sleeping Beauties).

“If the various Sleeping Beauties decide to pay £x for the coupon, they will make -£x in the heads world. In the tails world, they will each make £(1-x), so an average of £(1-x). This gives me an expected utility of £0.5(-x+(1-x)) = £(0.5-x), so I would want them to buy the coupon for any price less than £0.5.”

And this will then be the behaviour the agents will follow, by consistency. Thus they would be behaving as if they were following SSA odds, and putting equal probability on the heads versus tails world.
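To make the arithmetic concrete, here is a minimal sketch of the outside agent’s calculation (the function name and the test prices are my own illustration, not from the paper); the breakeven price comes out at £0.5:

```python
def average_utilitarian_eu(x):
    """Expected utility, for the outside average utilitarian,
    of the Sleeping Beauties buying the coupon at price x."""
    heads = -x           # one copy loses x; the average is -x
    tails = 1 - x        # two copies each gain (1 - x); the average is still (1 - x)
    return 0.5 * heads + 0.5 * tails   # = 0.5 - x

assert abs(average_utilitarian_eu(0.5)) < 1e-12   # breakeven at £0.5
print(average_utilitarian_eu(0.4))   # 0.1 > 0: buy
print(average_utilitarian_eu(0.6))   # -0.1 < 0: don't buy
```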

For a version of this that makes sense for the classical Sleeping Beauty problem, one could imagine that she will be awakened a week after the experiment. Further imagine that she takes her winnings and losses during the experiment in the form of chocolate, consumed immediately. Then, because of the amnesia drug, she would only remember one instance of this in the tails world. Hence if she valued memories of pleasure, she would want to be average utilitarian towards the pleasures of her different versions, and would follow SSA odds.

Reference classes and copy-altruistic agents

Standard SSA has a problem with reference classes. For instance, the larger the reference class becomes, the more the results of SSA in small situations become similar to SIA. The above setup mimics this effect: if there is a very large population of outsider individuals that Sleeping Beauty is altruistic towards, then the gains to two extra copies will tend to add, rather than average: if Ω is large, then 2x/(2+Ω) (averaged gain from two created agents each gaining x) is approximately twice x/(1+Ω) (averaged gain from one created agent gaining x), so she will behave more closely to SIA odds.
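A quick numerical sketch of this limit may help (the function and parameter values here are my own illustration): as the outside population Ω grows, the averaged gain from two copies approaches twice the averaged gain from one.

```python
def avg_gain(copies, x, omega):
    """Average gain across a reference class of omega outsiders plus
    `copies` created agents, when each created agent gains x."""
    return copies * x / (copies + omega)

x = 1.0
for omega in [0, 10, 1000]:
    one = avg_gain(1, x, omega)   # x / (1 + omega)
    two = avg_gain(2, x, omega)   # 2x / (2 + omega)
    print(f"omega={omega}: ratio = {two / one:.3f}")
# prints 1.000, 1.833, 1.998: the ratio tends to 2, i.e. the gains
# add (SIA-like) rather than average (SSA-like)
```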

This issue is not present for a copy-altruistic average utilitarian Sleeping Beauty, as she doesn’t care about any outsiders.

Selfish Sleeping Beauty

In all of the above examples, the goals of one Sleeping Beauty were always in accordance with the goals of her copies or the past and future versions of herself. But what happens when this fails? What happens when the different versions are entirely selfish towards each other? This is easy to understand in the incubator variant (the different created copies feel no mutual loyalty), and it can also be understood in the standard Sleeping Beauty problem if she is a hedonist with a high discount rate.

Since the different copies do have different goals, the consistency axioms no longer apply. It seems that we cannot decide what the correct decision is in this case. There is, however, a tantalising similarity between this case and the altruistic average utilitarian Sleeping Beauty. The setups (including probabilities) are the same. By ‘setup’ we mean the different worlds, their probabilities, the number of agents in each world, and the decisions faced by these agents. Similarly, the possible ‘linked’ decisions are the same. See future posts for a proper definition of linked decisions; here it just means that all copies, being identical, will have to make the same decision, so there is one universal ‘buy coupon’ or ‘reject coupon’. And, given this linking, the utilities derived by the agents are the same for either outcome in the two cases.

To see this, consider the selfish situation. Each Sleeping Beauty will make a single decision, whether to buy the coupon at the price offered. Not buying the coupon nets her £0 in all worlds. Buying the coupon at price £x nets her -£x in the heads world, and £(1-x) in the tails world. The linking is present but has no impact on these selfish agents: they don’t care what the other copies decide.

This is exactly the same for the altruistic average utilitarian Sleeping Beauties. In the heads world, buying the coupon at price £x nets her -£x worth of utility. In the tails world, it would net the current copy £(1-x) worth of individual utility. Since the copies are identical (linked decision), this would happen twice in the tails world, but since she only cares about the average, this grants both copies only £(1-x) worth of utility in total. The linking is present, and has an impact, but that impact is dissolved by the average utilitarianism of the copies.
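Here is a small sketch of that equivalence (again, the function and framing are my own illustration): for any price x, any world, and either linked decision, the selfish utility and the average-utilitarian utility coincide.

```python
def utility(world, buy, x, selfish):
    """Utility derived by one Beauty from the (linked) decision to buy
    the coupon at price x; in the tails world there are two copies."""
    if not buy:
        return 0.0              # rejecting the coupon nets £0 everywhere
    if world == "heads":
        return -x               # the single copy pays x for nothing
    if selfish:
        return 1 - x            # she counts only her own winnings
    return 2 * (1 - x) / 2      # average over the two copies: the same number

for x in [0.2, 0.5, 0.8]:
    for world in ["heads", "tails"]:
        for buy in [False, True]:
            assert utility(world, buy, x, True) == utility(world, buy, x, False)
print("identical utilities in both cases, for every world, price and decision")
```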

Thus the two situations have the same setup, the same possible linked decisions and the same utility outcomes for each possible linked decision. It would seem there is nothing relevant to decision theory that distinguishes these two cases. This gives us the last axiom:

  • Isomorphic decisions: If two situations have the same setup, the same possible linked decisions and the same utility outcomes for each possible linked decision, and all agents are aware of these facts, then agents should make the same decisions in both situations.

This axiom immediately solves the selfish Sleeping Beauty problem, implying that agents there must behave as they do in the altruistic average utilitarian Sleeping Beauty problem, namely paying up to £0.50 for the coupon. In this way, the selfish agents also behave as if they were following SSA probabilities, and believed that heads and tails were equally likely.

Summary of results

We have broadly four categories of agents, and they follow two different types of decisions (SIA-like and SSA-like). In the Sleeping Beauty problem (and in more general problems), the categories decompose as:

  1. Selfless agents who will follow SIA-type odds.

  2. (Copy-)Altruistic total utilitarians who will follow SIA-type odds.

  3. (Copy-)Altruistic average utilitarians who will follow SSA-type odds.

  4. Selfish agents who will follow SSA-type odds.

For the standard Sleeping Beauty problem, the first three decisions were derived from consistency. The same result can be established for the incubator variants using the Outside/Total agent axioms. The selfish result, however, needs to make use of the Isomorphic decisions axiom.

EDIT: A good question from Wei Dai illustrates the issue of precommitment for selfish agents.