Anthropic reasoning isn’t magic

The user Optimization Process presented a very interesting collection of five anthropic situations, whose seemingly contradictory conclusions suggest that we can’t conclude anything about anything.

It’s an old post (which I just discovered), but it’s worth addressing, because it’s wrong—but very convincingly wrong (it had me fooled for a bit), and clearing up that error should make the situation a lot more understandable. And you don’t need to talk about SSA, SIA, or other advanced anthropic issues.

The first example is arguably legit; it’s true that:

And so, a universe created by some kind of deity, and tuned for life, is indistinguishable from a universe that happens to have the right parameters by coincidence. “We exist” can’t be evidence for our living in one or the other, because that fact doesn’t correlate with design-or-lack-of-design

But what’s really making the argument work is the claim that:

unless you think that, from inside a single universe, you can derive sensible priors for the frequency with which all universes, both designed and undesigned, can support life?

But the main argument fails at the very next example, where we can start assigning reasonable priors. It compares worlds where the cold war was incredibly dangerous with worlds where it was relatively safe. Call these “dangerous” and “safe”. The main outcome is “survival”, i.e. human survival. The characters are currently talking about surviving the cold war; designate this by “talking”. Then one of the characters says:

Avery: Not so! Anthropic principle, remember? If the world had ended, we wouldn’t be standing here to talk about it.

This encodes the true statement that P(survival | talking) is approximately 1, as are P(survival | talking, safe) and P(survival | talking, dangerous). In these conditional probabilities, the fact that they are talking has screened off any effect of the cold war on survival.
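
To make the screening-off concrete, here is a minimal sketch with a toy joint distribution over the cold war’s type, whether humanity survives, and whether the conversation happens, where talking is only possible given survival. All the numbers (the prior on “dangerous”, the survival chances, the chance of having the conversation) are assumptions chosen purely for illustration.

```python
# Toy joint distribution (all numbers made up) over
# (cold-war type, survival, talking), where talking requires survival.
p_dangerous = 0.5
p_surv = {"dangerous": 0.5, "safe": 0.9}   # assumed survival chances
p_talk_given_surv = 0.1                    # chance the survivors have this conversation

joint = {}
for war in ("dangerous", "safe"):
    p_war = p_dangerous if war == "dangerous" else 1 - p_dangerous
    for surv in (True, False):
        p_s = p_surv[war] if surv else 1 - p_surv[war]
        for talk in (True, False):
            if surv:
                p_t = p_talk_given_surv if talk else 1 - p_talk_given_surv
            else:
                p_t = 0.0 if talk else 1.0   # nobody talks about surviving if nobody survived
            joint[(war, surv, talk)] = p_war * p_s * p_t

def prob(pred):
    """Probability of the event defined by pred(war, surv, talk)."""
    return sum(p for outcome, p in joint.items() if pred(*outcome))

# All three print 1.0: conditioning on "talking" screens off
# the cold war's effect on survival.
print(prob(lambda w, s, t: s and t) / prob(lambda w, s, t: t))
print(prob(lambda w, s, t: s and t and w == "safe") / prob(lambda w, s, t: t and w == "safe"))
print(prob(lambda w, s, t: s and t and w == "dangerous") / prob(lambda w, s, t: t and w == "dangerous"))
```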

But Bayes’ law still applies, and

  • P(dangerous | survival) = P(dangerous) * (P(survival | dangerous) / P(survival)).

Since P(survival | dangerous) < P(survival) (by definition, dangerous cold wars are those where the chance of surviving is lower than usual), we get that

  • P(dangerous | survival) < P(dangerous).

Thus the fact of our survival has indeed caused us to believe that the cold war was safer than initially thought (there are more subtle arguments, involving near-misses, which might cause us to think the cold war was more dangerous than we thought, but those don’t detract from the result above).
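
As a sanity check, here is the same update with explicit toy numbers (the 50/50 prior and the survival chances are assumptions, reused from the sketch above):

```python
# Toy numbers (purely illustrative) for the cold-war update.
p_dangerous = 0.5                # prior on a dangerous cold war
p_surv_given_dangerous = 0.5     # assumed survival chance if dangerous
p_surv_given_safe = 0.9          # assumed survival chance if safe

# Total probability of survival, then Bayes' law.
p_surv = (p_dangerous * p_surv_given_dangerous
          + (1 - p_dangerous) * p_surv_given_safe)                        # 0.7
p_dangerous_given_surv = p_dangerous * p_surv_given_dangerous / p_surv    # ~0.357

# P(dangerous | survival) < P(dangerous): survival is evidence that the cold war was safe.
print(p_dangerous_given_surv, "<", p_dangerous)
```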

The subsequent examples can be addressed in the same way.

Even the first example follows the same pattern: we might not have sensible priors to start with, but if we did, the update process would proceed as above. But beware: “a deity constructed the universe specifically for humans” is strongly updated towards, but it is only one part of the more general hypothesis “a deity constructed the universe specifically for some thinking entities”, which gets a much weaker update.
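
To illustrate that last point with a sketch (every number below is made up): split the design prior evenly over many possible target species. Observing that thinking humans exist then multiplies the probability of “designed for humans” by a large factor, while “designed for some thinking entities” moves by a much smaller one.

```python
# Toy illustration; every number here is made up purely to show the shape of the update.
N = 1000                          # possible target species a designer might have tuned for
p_design = 0.01                   # prior that the universe was designed at all
p_no_design = 1 - p_design
prior_for_humans = p_design / N   # design prior split evenly over possible targets
prior_for_others = p_design - prior_for_humans

# Assumed likelihoods of the observation "thinking humans exist":
like_for_humans = 1.0             # designed for humans: we exist for sure
like_for_others = 1e-6            # designed for some other species: we exist only by luck
like_no_design = 1e-4             # undesigned universe: we exist only by luck

evidence = (prior_for_humans * like_for_humans
            + prior_for_others * like_for_others
            + p_no_design * like_no_design)

post_for_humans = prior_for_humans * like_for_humans / evidence
post_design = (prior_for_humans * like_for_humans
               + prior_for_others * like_for_others) / evidence

print(post_for_humans / prior_for_humans)  # "designed for humans": update factor of ~9,000
print(post_design / p_design)              # "designed for some thinking entities": factor of ~9
```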

What is anthropic reasoning for, then?

Given the above, what is anthropic reasoning for? Well, there are subtle issues with SIA, SSA, and the like. But even setting those aside, we can still use anthropic reasoning about the habitability of our planet, or about the intelligence of creatures related to us (that’s incidentally why the intelligence of dolphins and octopuses tells us a lot more about the evolution of intelligence than our own does, as theirs isn’t already “priced in” by anthropic reasoning).

Basically, the fact that we exist does make some features of the universe more likely and some less likely, and taking that into account is perfectly legitimate.