Conservation of Expected Evidence and Random Sampling in Anthropics

This is the second post in my series on Anthropics. The previous one is Anthropical Motte and Bailey in two versions of Sleeping Beauty. The next one is Anthropical probabilities are fully explained by difference in possible outcomes.

Introduction

Ever since I first heard about anthropics, something has felt off. Be it updating on awakening in Sleeping Beauty, accepting a high probability of doom in the Doomsday Argument, or the premise of Grabby Aliens; whether following SSA or SIA, the whole pattern of reasoning seemed wrong to me.

That it’s cheating. That there is something obviously unlawful going on. That it doesn’t look at all like the way cognition engines produce map-territory correspondence.

It was hard to point out what exactly was wrong, though. The discourse seemed to be focused on either accepting SIA or SSA, both of which are obviously absurd and wrong in some cases, or discarding all anthropic reasoning altogether—a position towards which I’m quite sympathetic, but which also seemed to be an overcorrection, as it tends to discard some sound reasoning.

It took me some time to formalize this feeling of wrongness into an actual rule that can be applied to anthropical problems, separating correct reasoning from wrong reasoning. In this post I want to explore this principle and its edge cases, and show how using it can make anthropics add up to normality.

The metaphysics of existence vs having a blue jacket

Let’s consider the Blue Jacket Experiment (BJE):

A fair coin is tossed. On Tails, two people will be created, both wearing blue jackets. On Heads, two people will also be created, but only one of them, randomly chosen, will have a blue jacket. You are created and notice that you are wearing a blue jacket. What’s the probability that the coin landed Heads?

Here the situation is quite obvious. As having a blue jacket is twice as likely when the coin is Tails, we can do a simple Bayesian update:
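$$P(\text{Heads} \mid \text{Blue Jacket}) = \frac{P(\text{Blue Jacket} \mid \text{Heads})\,P(\text{Heads})}{P(\text{Blue Jacket})} = \frac{\frac{1}{2} \cdot \frac{1}{2}}{\frac{3}{4}} = \frac{1}{3}$$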

Notice that this is not an Anthropical Motte where we count Tails outcomes twice for the same coin toss. In a repeated experiment, guessing Tails whenever you have a blue jacket gives about 2/3 accuracy. You can actually guess Tails better than chance per experiment this way.
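To make this concrete, here is a minimal simulation sketch (the function name and iteration count are arbitrary choices of mine):

```python
import random

def run_bje(iterations=100_000):
    """Simulate the Blue Jacket Experiment from your perspective."""
    guesses = correct = 0
    for _ in range(iterations):
        tails = random.random() < 0.5
        # On Tails both created people wear blue jackets; on Heads the jacket
        # goes to one of the two at random, so you get it half the time.
        you_have_jacket = tails or random.random() < 0.5
        if you_have_jacket:
            guesses += 1      # strategy: seeing a blue jacket, guess Tails
            correct += tails  # the guess is correct whenever the coin was Tails
    return correct / guesses

print(run_bje())  # ~0.667: about 2/3 accuracy within the blue-jacket subset
```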

So why doesn’t the same principle apply to the Incubator Sleeping Beauty (ISB) problem, as one would naively think? Why can’t I notice that I exist, update on it, and guess the result of a coin toss with 2/3 accuracy per experiment?

There are two failure modes here. The first is to keep confusing the Motte with the Bailey and bite the bullet, saying that ISB indeed works exactly like BJE. I hope that my previous post, and all the emphasis on per-experiment accuracy, made that mistake as hard to make as it can get.

The second is to decide that there is something fundamentally special about consciousness or first-person perspective. That there is a metaphysical difference between your existence and having a blue jacket. This line of reasoning leads people to the magical thinking that the universe cares more about people with certain specific properties. I hope this post will push back against that failure mode and show that there is no weird metaphysics going on.

But what’s the answer then?

Well, the short answer is that in BJE you receive new evidence. You couldn’t be confident that you would have a blue jacket, and now you know that you have one. On the other hand, there is no unaccounted-for information in the fact of your existence. But this is confusing for some people. How do I know whether I have already accounted for my existence or not? Maybe I couldn’t be confident, and should be surprised that I exist? So, let’s take a step back.

Let’s notice that it’s logically impossible to correctly guess Tails in Incubator Sleeping Beauty with 2/3 accuracy, as is possible in BJE. About 50% of coin tosses are Heads in both experiments. So, guessing Tails every time in a repeated experiment can’t possibly give you 2/3 accuracy among all the iterations. However, you can get 2/3 accuracy in some subset of all iterations.

In BJE it’s the subset of iterations in which you have a blue jacket. We can get a subset of iterations among which you can predict Tails with 2/3 accuracy because the number of all iterations is greater than the number of iterations in which you have a blue jacket.

But in ISB there are no iterations in which you do not exist. The number of outcomes in which you are created equals the total number of iterations.

Thus, there is no way to get 2/3 accuracy in the subset of iterations where you exist.
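Mirroring the simulation above (again just a sketch with arbitrary naming), the “subset where you exist” is simply every iteration, so the strategy collapses to chance:

```python
import random

def run_isb(iterations=100_000):
    """Incubator Sleeping Beauty: conditioning on your own existence
    selects every iteration, so it filters out nothing."""
    guesses = correct = 0
    for _ in range(iterations):
        tails = random.random() < 0.5
        you_exist = True      # you are created on Heads and on Tails alike
        if you_exist:
            guesses += 1      # strategy: noticing that you exist, guess Tails
            correct += tails
    return correct / guesses

print(run_isb())  # ~0.5: no better than chance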

This is the underlying principle of the Conservation of Expected Evidence. If you couldn’t possibly have expected to observe the outcome not-A, you do not get any new information by observing outcome A, and there is nothing to update on. You can’t expect to observe your own non-existence, but you can expect to observe yourself not having a blue jacket. That’s why you update in the latter case and not the former.
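Stated formally: for any hypothesis H and possible observation A, the prior is the expectation of the posterior,

$$P(H) = P(H \mid A)\,P(A) + P(H \mid \neg A)\,P(\neg A)$$

so if $P(A) = 1$, the equation forces $P(H \mid A) = P(H)$: observing A cannot move your credence at all.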

I think it’s generally a good heuristic. But there are still confusing edge cases, related to the way natural language works. For instance, death. You can’t expect to observe yourself being dead. But you can expect yourself to die. Is there some metaphysical difference between death and non-existence?

The metaphysics of non-existence vs death

Let’s consider the Assassination Experiment (AE):

A fair coin is tossed. On Tails two people will be created. On Heads two people will also be created but then one, randomly chosen, dies half an hour later. You are created and notice that half an hour has passed and you are still alive. What’s the probability that the coin landed Heads?

On one hand, the situation is completely analogous to BJE; survival is twice as likely when the coin is Tails, so the same update applies:
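$$P(\text{Heads} \mid \text{Alive}) = \frac{P(\text{Alive} \mid \text{Heads})\,P(\text{Heads})}{P(\text{Alive})} = \frac{\frac{1}{2} \cdot \frac{1}{2}}{\frac{3}{4}} = \frac{1}{3}$$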

But doesn’t this contradict the Conservation of Expected Evidence?

We can say that while you are still alive you can expect yourself to die, but you can’t expect yourself not to exist when you’ve never existed in the first place, because there is no one to do the expecting. But this is still not exactly the actual rule.

What if you were created unconscious and then killed? Then you couldn’t possibly expect anything, could you? What about unconscious states in general, as in Classic Sleeping Beauty? Sometimes we need to include them in our mathematical model, as when we are talking about Beauty’s chance to be asleep on a random day, and sometimes not, as when we are specifically talking about her awake states and her attempts to guess the result of the coin toss.

Adding all these caveats makes the rule appear complex and artificial. What we want is to talk about the possibility of expectation in principle, based on the simple fact that for a person created in this experiment it’s possible not to observe their survival, because the number of people who survive is less than the number of people created.

The true rule has little to do with the specialness of first-person experience. That’s why anthropic theories that focus on it always lead to bizarre conclusions. The true rule is about the causal process that goes on inside reality, or, in our case, inside a specific thought experiment.

No metaphysics, just random sampling

Thankfully there is a simple way to capture this idea: whether or not there is random sampling going on.

For example, wearing a blue jacket is the outcome of a random sample. The Heads outcome in BJE leads to a random choice of which of the two people gets the blue jacket. Likewise, survival in AE is the outcome of a random sample, regardless of whether people are created conscious or not.

But the Heads outcome in Incubator Sleeping Beauty is not. You are not randomly selected from two immaterial souls to be instantiated. You are a sample of one. And as there is no random choice happening, you are not twice as likely to exist when the coin is Tails, and there is no new information you get when you are created.

Whether or not there is random selection going on determines whether or not you gain new information, which in turn determines whether updating on that information follows or contradicts the Conservation of Expected Evidence. It doesn’t matter what kind of evidence we are talking about. Both existence and having a blue jacket follow the same rule.

To demonstrate this, let’s change the conditions of BJE a bit to get the Fixed Blue Jacket Experiment (FBJE):

A fair coin is tossed. You will be created wearing a blue jacket regardless of outcome. On Tails another person will also be created, wearing a blue jacket. On Heads a person without a blue jacket will be created. You are created and notice that you are wearing a blue jacket. What’s the probability that the coin landed Heads?

Here, updating on wearing a blue jacket would violate the Conservation of Expected Evidence. The causal process that gave you the jacket didn’t use random sampling; there is no possible outcome in which you do not have a blue jacket, so there is no new information in having one. And thus:
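$$P(\text{Heads} \mid \text{Blue Jacket}) = P(\text{Heads}) = \frac{1}{2}$$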

You may notice that in this regard wearing a blue jacket in BJE is similar to finding yourself in Room 1 in Incubator Sleeping Beauty, and in FBJE it’s similar to learning that it’s Monday in Classical Sleeping Beauty.

Now let’s modify Sleeping Beauty so that updating on existence/awakening is similar to wearing a blue jacket in BJE. Here is Bargain Sleeping Beauty:

You and another person participate in the Sleeping Beauty experiment. Sadly, the funding is limited, so no amnesia drug is provided. Instead, a coin is tossed. On Heads one of you, randomly picked, will be put to sleep and then awakened; the other person, meanwhile, is free to go. On Tails both of you will be put to sleep and then awakened in different rooms. You were put to sleep and now are awakened. What is the probability that the coin landed Heads?

Now there is a random selection process: it’s possible for you not to be picked, and thus awakening in the room is relevant evidence.
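The update works exactly as in BJE and AE:

$$P(\text{Heads} \mid \text{Awakened}) = \frac{P(\text{Awakened} \mid \text{Heads})\,P(\text{Heads})}{P(\text{Awakened})} = \frac{\frac{1}{2} \cdot \frac{1}{2}}{\frac{3}{4}} = \frac{1}{3}$$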

As a corollary, we can notice that wrongly assuming that random sampling is going on when it’s actually not the case leads to wrong conclusions, as it makes us update on irrelevant information and contradicts the Conservation of Expected Evidence.

And this is what is going on with every bad example of anthropic reasoning.

The Doomsday Argument falsely assumes that we are randomly sampled from all the humans who have ever lived or will ever live. Grabby Aliens—that we are randomly sampled from all possible sentient civilizations. Thirdism in Sleeping Beauty—that you are randomly sampled from all possible awakened states. Most of the time, causality is completely ignored.

Sometimes the sampling assumption of SSA or SIA is not satisfied by the conditions of the experiment, and then they unsurprisingly output crazy results. It’s no use arguing which of them is true, or even which is just better than the other, because they are not universal laws. They are literally just assumptions, which occasionally fail to correspond to reality. And that’s totally fine: our mathematical models are supposed to fail in circumstances they are not meant to work in.

Do not blindly follow anthropic theories off the cliff, biting all the ridiculous bullets on the way. Check the causal structure, see if there is random sampling going on, and base your conclusions on that. Follow the Law of Conservation of Expected Evidence and you won’t be led astray.

The next post in the series is Anthropical probabilities are fully explained by difference in possible outcomes.