Anthropic Decision Theory II: Self-Indication, Self-Sampling and decisions

A near-final version of my Anthropic Decision Theory paper is available on the arXiv. Since anthropics problems have been discussed quite a bit on this list, I’ll be presenting its arguments and results in this, subsequent, and previous posts 1 2 3 4 5 6.

In the last post, we saw the Sleeping Beauty problem, and the question was what probability a recently awoken or created Sleeping Beauty should give to the coin falling heads or tails and it being Monday or Tuesday when she is awakened (or whether she is in Room 1 or 2). There are two main schools of thought on this, the Self-Sampling Assumption and the Self-Indication Assumption, both of which give different probabilities for these events.

The Self-Sampling Assumption

The self-sampling assumption (SSA) relies on the insight that Sleeping Beauty, before being put to sleep on Sunday, expects that she will be awakened in future. Thus her awakening grants her no extra information, and she should continue to give the same credence to the coin flip being heads as she did before, namely 1/2.

In the case where the coin is tails, there will be two copies of Sleeping Beauty, one on Monday and one on Tuesday, and she will not be able to tell, upon awakening, which copy she is. She should assume that both are equally likely. This leads to SSA:

  • All other things being equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.

There are some issues with the concept of ‘reference class’, but here it is enough to set the reference class to be the set of all other Sleeping Beauties woken up in the experiment.

Given this, the probability calculations become straightforward:

  • P_SSA(Heads) = 1/2

  • P_SSA(Tails) = 1/2

  • P_SSA(Monday|Heads) = 1

  • P_SSA(Tuesday|Heads) = 0

  • P_SSA(Monday|Tails) = 1/2

  • P_SSA(Tuesday|Tails) = 1/2

By the law of total probability, these imply that:

  • P_SSA(Monday) = 3/4

  • P_SSA(Tuesday) = 1/4
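The marginalisation above can be checked in a few lines of Python (a sketch of the arithmetic only; Fraction keeps the results exact):

```python
from fractions import Fraction

# SSA priors and conditionals from the list above
p_heads = Fraction(1, 2)
p_tails = Fraction(1, 2)
p_monday_given_heads = Fraction(1)
p_monday_given_tails = Fraction(1, 2)

# Law of total probability:
# P(Monday) = P(Monday|Heads)P(Heads) + P(Monday|Tails)P(Tails)
p_monday = p_monday_given_heads * p_heads + p_monday_given_tails * p_tails
p_tuesday = 1 - p_monday

print(p_monday)   # 3/4
print(p_tuesday)  # 1/4
```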

The Self-Indication Assumption

There is another common way of doing anthropic probability, namely to use the self-indication assumption (SIA). This derives from the insight that being woken up on Monday after a heads, being woken up on Monday after a tails, and being woken up on Tuesday are all subjectively indistinguishable events, each of which has a probability 1/2 of happening; therefore we should consider them equally probable. This is formalised as:

  • All other things being equal, an observer should reason as if they are randomly selected from the set of all possible observers.

Note that this definition of SIA is slightly different from that used by Bostrom; what we would call SIA he designated as the combined SIA+SSA. We shall stick with the definition above, however, as it is coming into general use. Note that there is no mention of reference classes, as one of the great advantages of SIA is that any reference class will do, as long as it contains the observers in question.

Given SIA, the three following observer situations are equiprobable (each has an ‘objective’ probability 1/2 of happening), and hence SIA gives them equal probabilities of 1/3:

  • P_SIA(Monday ∩ Heads) = 1/3

  • P_SIA(Monday ∩ Tails) = 1/3

  • P_SIA(Tuesday ∩ Tails) = 1/3

This allows us to compute the probabilities:

  • P_SIA(Monday) = 2/3

  • P_SIA(Tuesday) = 1/3

  • P_SIA(Heads) = 1/3

  • P_SIA(Tails) = 2/3
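These marginals follow directly from summing the three equiprobable situations; a short Python sketch (my own illustration of the sums, not from the paper) makes this explicit:

```python
from fractions import Fraction

# The three subjectively indistinguishable situations,
# each with SIA probability 1/3
atoms = {
    ("Monday", "Heads"): Fraction(1, 3),
    ("Monday", "Tails"): Fraction(1, 3),
    ("Tuesday", "Tails"): Fraction(1, 3),
}

# Marginalise over the day and the coin outcome
p_monday = sum(p for (day, _), p in atoms.items() if day == "Monday")
p_heads = sum(p for (_, coin), p in atoms.items() if coin == "Heads")

print(p_monday)  # 2/3
print(p_heads)   # 1/3
```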

SIA and SSA are sometimes referred to as the thirder and halfer positions respectively, referring to the probability they give for Heads.

Probabilities and decisions

SIA and SSA give probabilities in anthropic situations, but these are not enough to determine decisions. Consider the case where Sleeping Beauty has to vote on some policy that is only implemented if all existent copies vote for it. When there are multiple copies, being identical, they will vote the same way.

This poses the question of how much impact each copy has on the final outcome. Does each have an individual impact, i.e. is it responsible for one n-th of the outcome if there are n copies voting? Or is each responsible for the total impact, since the result requires unanimity, and (since the copies are identical) if one voted the other way, so would all the others?

In that situation, SIA with individual impact gives the same decision as SSA with total impact (SIA prefers worlds with large numbers of people in them, which also magnifies the size of the total impact). So probabilities are not enough, on their own, to solve anthropic problems. Hence we will be focusing not on anthropic probabilities but on anthropic decisions. It is astonishing that one can solve the latter without making use of the former.
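The equivalence claimed above can be illustrated with a toy formalisation (my own, not from the paper): for a policy worth x if heads and y if tails, SIA with individual impact values voting at (1/3)x + (1/3)(y/2) + (1/3)(y/2) = (x+y)/3, while SSA with total impact gives (1/2)x + (1/2)y = (x+y)/2. The two valuations differ only by a constant factor, so they always rank options the same way:

```python
from fractions import Fraction
from itertools import product

def sia_individual(x, y):
    # Three equiprobable awakenings; in the tails world each of the
    # two copies is credited with half the total impact y.
    return Fraction(1, 3) * x + 2 * (Fraction(1, 3) * Fraction(y, 2))

def ssa_total(x, y):
    # Halfer probabilities; each copy is credited with the whole
    # impact, since unanimity means its vote decides the outcome.
    return Fraction(1, 2) * x + Fraction(1, 2) * y

# The valuations are proportional (ratio 2/3), so decisions agree.
for x, y in product(range(-3, 4), repeat=2):
    assert sia_individual(x, y) * 3 == ssa_total(x, y) * 2
```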