Anthropics over-simplified: it’s about priors, not updates

I’ve argued that anthropic reasoning isn’t magic, applied anthropic reasoning to the Fermi question, claimed that different anthropic probabilities answer different questions, and concluded that anthropics is pretty normal.

But all those posts were long and somewhat technical, and required some familiarity with anthropic reasoning to apply. So here I’ll list what people unfamiliar with anthropic reasoning can do to add it simply[1] and easily to their papers/blog posts/discussions:

  1. Anthropics is about priors, not updates; updates function the same way for all anthropic probabilities.

  2. If two theories predict the same population, there is no anthropic effect between them.

Updating on safety

Suppose you go into hiding in a bunker in 1956. You’re not sure if the cold war is intrinsically stable or unstable. Stable predicts a low chance of nuclear war; unstable predicts a high chance.

You emerge much older in 2020, and notice there has not been a nuclear war. Then, whatever anthropic probability theory you use, you update the ratio of the two theories’ probabilities by the same likelihood ratio:

$$\frac{P(\text{stable}\mid\text{no war})}{P(\text{unstable}\mid\text{no war})} \;=\; \frac{P(\text{stable})}{P(\text{unstable})} \times \frac{P(\text{no war}\mid\text{stable})}{P(\text{no war}\mid\text{unstable})}.$$
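To make this concrete, here is a minimal Python sketch of the update. The specific numbers (the chance of war under each theory, and the priors) are illustrative placeholders, not values from the post; the point is that the update is ordinary Bayes, identical under SIA, SSA, or any other anthropic probability theory.

```python
# Bunker example: update on "no nuclear war by 2020".
# All numbers below are made up for illustration.

p_war_given_stable = 0.2    # hypothetical: a stable cold war makes war unlikely
p_war_given_unstable = 0.8  # hypothetical: an unstable cold war makes war likely

prior_stable = 0.5          # anthropics lives in the choice of priors, not here
prior_unstable = 0.5

# Ordinary Bayesian update on the observation "no war":
posterior_stable = prior_stable * (1 - p_war_given_stable)
posterior_unstable = prior_unstable * (1 - p_war_given_unstable)

total = posterior_stable + posterior_unstable
posterior_stable /= total
posterior_unstable /= total

# The ratio shifts by (1 - 0.2)/(1 - 0.8) = 4, whatever anthropic theory you use.
print(posterior_stable, posterior_unstable)  # 0.8 0.2
```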

Population balancing

Suppose you have two theories to explain the Fermi paradox:

  • Theory 1 is that life can only evolve in very rare conditions, so Earth has the only life in the reachable universe.

  • Theory 2 is that some disaster regularly obliterates the conditions needed for life to evolve, so Earth has the only life in the reachable universe.

Since the total population predicted by these two theories is the same, there is no anthropic update between them[2].
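As a toy illustration (not from the post), here is a sketch of an SIA-style update, which weights each theory by the number of observers it predicts. With equal predicted populations the weights cancel and the priors pass through unchanged; only unequal populations would produce an anthropic shift. The populations and priors are illustrative placeholders.

```python
# SIA-style anthropic update: weight each theory's prior by the
# observer population it predicts, then normalise.
# All numbers are illustrative placeholders.

def sia_posterior(priors, populations):
    weighted = [p * n for p, n in zip(priors, populations)]
    total = sum(weighted)
    return [w / total for w in weighted]

priors = [0.5, 0.5]          # prior over Theory 1 (rare life) and Theory 2 (regular disasters)
populations = [8e9, 8e9]     # both theories predict the same population: just Earth

print(sia_posterior(priors, populations))   # [0.5, 0.5] -- no anthropic shift
print(sia_posterior(priors, [8e9, 8e12]))   # unequal populations would shift the odds
```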


  1. ↩︎

    These points are a bit over-simplified, but they hold in most realistic scenarios.

  2. ↩︎

    If you use a reference class that doesn’t include certain entities—maybe you don’t include pre-mammals or beings without central nervous systems—then you only need to compare the population that is in your reference class.
