I'm providing a link to Doomsday Argument and the False Dilemma of Anthropic Reasoning, where I solve this anthropic issue. We can collapse all the meta-levels of anthropic reasoning into a simple principle: make sure that the map you use actually corresponds to the territory.
My actual point (for anyone still wondering whether I have one) is that the correct way for a Bayesian to look at a counterfactual is P(X | everything else I know), which is generally very near 1 — certainly it is for the Hoyle resonance.
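To spell out why that conditional sits near 1, here is a minimal sketch (the entailment framing is my gloss, not a quote from the thread): if the evidence E you condition on includes an observation that is essentially impossible without X, Bayes' theorem drives the posterior toward 1 no matter how small the prior was:

$$P(X \mid E) = \frac{P(E \mid X)\,P(X)}{P(E \mid X)\,P(X) + P(E \mid \neg X)\,P(\neg X)} \approx 1 \quad \text{when } P(E \mid \neg X) \approx 0.$$

For the Hoyle resonance, E includes the fact that carbon-based observers exist, which is (to a good approximation) impossible without the resonance, so the conditional probability is pushed to nearly 1 regardless of the prior.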
You should be careful here, as it's very easy to overestimate how much you actually know. Anthropic problems like Sleeping Beauty and the Absent-Minded Driver are confusing for exactly that reason.
There is also nothing illegal about noticing that P(X) is very low even though you already know that X is realized. If your model claims that some event X is improbable, but you've just observed it being realized, it's very probable that your model is wrong.
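To make the "your model is probably wrong" step explicit, here is a hedged illustration in Bayes-factor form (the specific numbers are invented for the example, not taken from the thread):

$$\frac{P(M_1 \mid X)}{P(M_2 \mid X)} = \frac{P(X \mid M_1)}{P(X \mid M_2)} \cdot \frac{P(M_1)}{P(M_2)}$$

If your current model $M_1$ assigns $P(X \mid M_1) = 10^{-9}$ while some not-too-contrived alternative $M_2$ assigns $P(X \mid M_2) = 0.1$, then observing X multiplies the odds in favor of $M_2$ by a factor of $10^8$. Unless you started with an overwhelming prior for $M_1$, the model that called X improbable should lose most of its credence.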
Great post!
Anyone still confused after this should go read that post.