Paradoxes in all anthropic probabilities

In a previous post, I re-discovered full non-indexical updating (FNC), an anthropic theory I’m ashamed to say I had once known and then forgot. Thanks to Wei Dai for reminding me of that.

There is a problem with FNC, though. In fact, there are problems with all anthropic probability theories. Both FNC and SIA violate conservation of expected evidence: you can be in a situation where you know with certainty that your future probability will differ from your current one, and in which direction. SSA has a different problem: it allows you to make decisions that change the probability of past events.

These paradoxes are presented to illustrate the fact that anthropic probability is not a coherent concept, and that dealing with multiple copies of a single agent is in the realm of decision theory.

FNC and evidence non-conservation

Let’s presume that the bandwidth of the human brain is $N$ bits per minute, so that there are $2^N$ possible distinct observation streams per minute. Then we flip a coin. Upon it coming up heads, we create $2^{2N}$ identical copies of you. Upon it coming up tails, we create $2^{3N}$ copies of you.

If we assume that the experiences of your different copies are random, then for the first minute you will give equal probability to heads and tails. That’s because there is almost certainly a being with exactly the same observations as you in both universes: each world contains far more copies than there are possible one-minute observation streams.
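Written out as a sketch (assuming a fair coin and that the copies’ observation streams are independent and uniformly random), the ratio FNC tracks is:

$$\frac{P(\text{tails} \mid \text{your observations})}{P(\text{heads} \mid \text{your observations})} \;=\; \frac{P(\text{some copy has exactly your observations} \mid \text{tails})}{P(\text{some copy has exactly your observations} \mid \text{heads})},$$

and during the first minute both probabilities on the right are essentially $1$.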

After two minutes, you will shift to about $1.6:1$ odds in favour of tails: you’re still essentially certain there’s a being with your observations in the tails universe, but there’s only a $1-1/e \approx 0.63$ probability that there’s one in the heads universe, since the heads world now has $2^{2N}$ copies spread over $2^{2N}$ possible two-minute observation streams.

After a full three minutes, you will finally stabilise on roughly $2^N:1$ odds in favour of tails, and stay there: once there are far more possible observation streams than copies, the chance that some copy matches your observations becomes proportional to the number of copies in each world.
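Here is a minimal numerical sketch of that progression; the choice $N = 10$ and the exact copy counts are illustrative assumptions, not anything the argument depends on:

```python
# Sketch of the FNC odds over time, assuming N = 10 bits of observation
# per minute, 2^(2N) copies on heads and 2^(3N) copies on tails.
from math import expm1, log1p

N = 10                       # assumed observation bandwidth, bits per minute
heads_copies = 2 ** (2 * N)
tails_copies = 2 ** (3 * N)

def p_some_copy_matches(num_copies: int, minutes: int) -> float:
    """Probability that at least one of num_copies independent random
    observation streams matches yours, out of 2^(N*minutes) possibilities."""
    p_single = 2.0 ** (-N * minutes)
    # 1 - (1 - p_single)^num_copies, computed in a numerically stable way
    return -expm1(num_copies * log1p(-p_single))

for minutes in range(1, 6):
    p_heads = p_some_copy_matches(heads_copies, minutes)
    p_tails = p_some_copy_matches(tails_copies, minutes)
    print(f"minute {minutes}: tails:heads odds = {p_tails / p_heads:.2f} : 1")
```

With those numbers the printed odds go from $1:1$, to about $1.6:1$, to about $650:1$, and then settle near $2^{10}:1 = 1024:1$.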

Thus, during the first minute, you know that FNC will be giving you different odds in the coming minutes, and you can predict the direction those odds will take.

If the observations are non-random, then the divergence will be slower, and the FNC odds will be changing for a longer period.

SIA and evidence non-conservation

If we use SIA instead of FNC, then, in the above situation, the odds of tails will be $2^{3N}:2^{2N} = 2^N:1$ from the start and will stay there, so that setup is not an issue for SIA.

To show a problem with SIA, assume there is one copy of you, that we flip a coin, and that, if it comes out tails, we will immediately duplicate you (putting the duplicate in a separate room). If it comes out heads, we will wait a minute before duplicating you.

Then SIA implies $2:1$ odds in favour of tails during that minute, but equal odds afterwards.
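Spelled out (a sketch, assuming a fair coin, with SIA weighting each world by the number of copies subjectively indistinguishable from you):

$$\frac{P_{\text{SIA}}(\text{tails})}{P_{\text{SIA}}(\text{heads})} \;=\; \frac{2}{1} \;\text{ during the first minute}, \qquad \frac{2}{2} \;=\; 1 \;\text{ once the heads-world duplication happens}.$$

So during that minute you know for certain that your odds will revert to even at the end of it, whichever way the coin actually landed.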

You can’t get around this with tweaked reference classes: one of the good properties of SIA is that it works the same whatever the reference class, as long as it includes all agents currently subjectively indistinguishable from you.

SSA and changing the past

SSA has a lot of issues. It has the whole problem with reference classes: these are hard to define coherently, and agents in different reference classes with the same priors can agree to disagree (for instance, if we expect that there will be a single gender in the future, then if I’m in the reference class of males, I expect that single gender will be female, while someone in the reference class of females will expect the opposite). It also violates causality: it assigns different probabilities to an event purely depending on the future consequences of that event.

But I think I’ll focus on another way it violates causality: your current actions can change the probability of past events.

Suppose that the proverbial coin is flipped, and that if it comes up heads, one version of you is created, and, if it comes up tails, $K$ copies of you are created. You are the last of these copies: either the only one in the heads world, or the last one in the tails world, you don’t know which. Under SSA, you assign odds of $K:1$ in favour of heads.
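As a sketch of that computation (taking the reference class to be all copies of you, with a fair coin as the prior):

$$\frac{P_{\text{SSA}}(\text{heads} \mid \text{your position})}{P_{\text{SSA}}(\text{tails} \mid \text{your position})} \;=\; \frac{P(\text{a random copy is in your position} \mid \text{heads})}{P(\text{a random copy is in your position} \mid \text{tails})} \;=\; \frac{1/1}{1/K} \;=\; K.$$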

You have a convenient lever, however. If you pull it, then $M$ future copies of you will be created, in the heads world only (nothing will happen in the tails world). Therefore, if you pull it, the odds of the coin having come up tails (an event long past, and known to be past) will shift from $1:K$ against to $(M+1):K$, which is in favour of tails whenever $M+1 > K$.
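With the lever pulled, the heads world contains $1+M$ copies in total while the tails world still contains $K$, so the same sketch gives:

$$\frac{P_{\text{SSA}}(\text{tails} \mid \text{your position})}{P_{\text{SSA}}(\text{heads} \mid \text{your position})} \;=\; \frac{1/K}{1/(1+M)} \;=\; \frac{1+M}{K},$$

which exceeds $1$ whenever $M+1 > K$: deciding to pull the lever has made the long-past tails outcome the more probable one.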