The Meta-Anthropic Argument

Epistemic status: I just thought this up

There is a well-known style of reasoning called the anthropic argument (which has nothing to do with the AI frontier lab of the same name). It goes something like this:

Scientist 1: “X seems really unlikely! How come I’m observing it?”
(where X is usually something like the Hoyle resonance and the resulting triple-alpha process)

Scientist 2: “Your mistake is that you’re estimating P(X) — you should instead be estimating P(X | someone exists to observe it). If the triple-alpha process didn’t work, there wouldn’t be enough carbon for you to exist. Consider all the places in the String Theory Landscape that don’t have intelligent life in them asking a question like that right now. The few that do probably almost all have some sort of fluke like this that makes intelligent life likely there. It’s just a sampling effect.”

This is all well and good, and I agree with the second scientist — questions only get asked if there’s someone around to ask them, and you should include everything you already know about the situation in your priors. That’s kind of obvious, once you stop and think about it. And once the String Theory Landscape comes into the discussion, the number of candidate universes to sample from is a very large number.
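The sampling effect Scientist 2 is describing can be sketched as a toy simulation. Every number here (the fluke probability, the life probabilities, the universe count) is an illustrative assumption, not physics; the point is only that conditioning on the existence of an observer can turn a rare fluke into a near-certainty:

```python
import random

random.seed(0)

P_FLUKE = 0.001           # unconditional chance a universe has the fluke (illustrative)
P_LIFE_GIVEN_FLUKE = 0.5  # chance observers arise if the fluke holds (illustrative)
P_LIFE_NO_FLUKE = 1e-6    # observers are vanishingly rare without it (illustrative)

N_UNIVERSES = 1_000_000
observed = 0              # universes containing someone to ask the question
fluke_and_observed = 0    # of those, how many also have the fluke

for _ in range(N_UNIVERSES):
    fluke = random.random() < P_FLUKE
    p_life = P_LIFE_GIVEN_FLUKE if fluke else P_LIFE_NO_FLUKE
    if random.random() < p_life:
        observed += 1
        fluke_and_observed += fluke

print(f"P(fluke)            = {P_FLUKE}")
print(f"P(fluke | observer) ~ {fluke_and_observed / observed:.3f}")
```

With these made-up numbers the unconditional fluke rate is 0.1%, but among universes that contain anyone to ask the question, the analytic answer is roughly 99.8% (0.0005 / (0.0005 + 0.999×10⁻⁶)), and the simulation lands close to that.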

However, if you apply this sort of thinking too much, you find yourself starting to be surprised if your viewpoint is sufficiently atypical in any way, compared to a random sample drawn from all sapient beings, or at least all humans, ever. For example: on the fairly plausible assumption that the human race will eventually colonize the stars as long as we manage not to go extinct first, it seems rather likely that there will be many orders of magnitude more people in our current forward lightcone than in our backward lightcone. That’s a rather large coincidence — why is our current viewpoint that atypical? Now, obviously, somebody did get to be the very first member of Homo sapiens born, right after we speciated, but that’s just a fluke: flukes do happen, just very rarely — so I continue to ask, why me? Why am I one of the lucky (or at least early) ones, by a factor that enormous? That simply doesn’t seem very plausible…

I have seen this argument cited as evidence that there must be a Great Filter, or that we should all have a high P(doom), because it’s just so implausible that our generation could be missing out on going to the stars. That’s a pretty serious case of sour grapes, and interestingly, this argument appears to defy causality. However, I think we also need to consider the Meta-Anthropic Principle:
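For concreteness, here is the update this Doomsday-style argument performs, under exactly the “uniform random draw from all humans ever” assumption that the footnote below takes apart. All the population figures are round illustrative numbers (the ~100 billion birth rank is the commonly quoted rough estimate of humans born so far); the code shows the mechanics of the argument, not an endorsement of it:

```python
# Two toy hypotheses about the total number of humans who will ever live.
# Round illustrative numbers; only the shape of the update matters here.
N_DOOM = 2e11    # "doom soon": ~200 billion humans, ever
N_STARS = 2e14   # "to the stars": a thousand times more humans, ever

prior_doom = 0.5   # start indifferent between the two hypotheses
prior_stars = 0.5

# Your birth rank: roughly 100 billion humans have been born so far.
# Under the (per the text, invalid) assumption that you are a uniform
# random draw from all humans ever, any rank r <= N has likelihood 1/N.
rank = 1e11
assert rank <= N_DOOM <= N_STARS   # both hypotheses allow this rank

lik_doom = 1.0 / N_DOOM     # rank fits easily under N_DOOM
lik_stars = 1.0 / N_STARS   # rank also fits under N_STARS, but is 1000x less likely

evidence = prior_doom * lik_doom + prior_stars * lik_stars
post_doom = prior_doom * lik_doom / evidence

print(f"P(doom soon | your rank) = {post_doom:.4f}")  # 0.9990
```

A 50/50 prior balloons to ~99.9% for “doom soon” purely because the sampling assumption makes an early rank look like strong evidence — which is the suspicious behavior the rest of the post is complaining about.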

Scientist 3: “No, no, no, you should actually be estimating P(X | someone is asking this question right now, with no better tool than the Anthropic Principle to answer it). Obviously asking the question in the first place, and especially not having any better way to answer it than the Anthropic Principle, is going to be a lot rarer later on, once we know a lot more. First wondering about this sort of stuff is strongly correlated with you living not that long after the development of the Scientific Method. It’s just a sampling effect.”

So the clue they forgot to include in their priors was right there all along, on Scientist 1’s name-tag.[1]

  1. ^

    My actual point (for anyone still wondering whether I have one) is that the correct way for a Bayesian to look at a counterfactual is P(X | everything else you already know), which is generally very near 1 — certainly it is for the Hoyle resonance. “There sure does seem to be a lot of carbon around: I’m literally made out of the stuff! I wonder where it all came from?” Once you start doing counterfactuals more rarefied than that, arbitrarily choosing to leave out more information from your conditioning, you’re on increasingly thin epistemic ice, and probably shouldn’t be surprised if you start getting odd-looking results once you leave out almost everything else you know, such as when you are living, or even everything other than the fact that you’re sapient. If one of the enormous number of things that you left out of your conditioning happens to be a relevant apparent fluke (specifically, a relevant way in which you arguably may be atypical across all of time), then you get weird results like the Doomsday Argument. Which one might call the meta-meta-anthropic principle.

    From a statistical point of view, any time an argument mixes Bayesian and Frequentist reasoning, as the Doomsday Argument does, you should be deeply suspicious, and attempt to translate it into purely Bayesian reasoning (or I suppose purely Frequentist, if you’re secretly a Frequentist — good luck with that). From a Bayesian point of view, drawing a random sample from all humans who have ever or actually will ever exist is just not a well-defined operation until after humanity is extinct. Trying it before then violates causality: performing it requires reliable access to information about hard-to-predict events that have not yet happened, i.e. precognition. So the concept “I’m likely to be typical, out of all humans who have existed or ever will exist”, intuitive as it may sound to a Frequentist, is simply an invalid choice of prior — that’s not a uniform prior, it violates causality (unlike “I’m likely to be typical, out of all humans who currently exist”, or “I’m likely to be typical, out of all humans who have existed up to this point”, which are both reasonable priors). If you use this invalid prior, it’s not entirely surprising that the resulting invalid argument then predicts the only outcome under which the Frequentist random sampling assumption that it smuggles in, in place of a valid prior, becomes likely to be valid.

    In case you missed it, part of the joke is that Scientists 1, 2, and 3 above are all very evidently Frequentists, and that the Bayesian equivalent of Scientist 1’s observation is: “That’s neat, my previously-uniform prior about the values of these two nuclear energy levels has updated very strongly into being convinced that they’re in close resonance, as soon as I remembered that we’re all literally made out of carbon, and we’re really very sure of that fact. I realized we already had a vast amount of relevant evidence on the subject, which I’d foolishly mistaken for being only of interest to biochemists!” To which Bayesian Scientists 2 and 3 nod in quiet agreement. That’s what the Anthropic Argument looks like in a Bayesian framework.