Hm, you’re right, I guess there is something weird here (I’m not talking about FNC—I think that part is weird too—I mean “ordinary” anthropic probabilities).
If I had to try to put my finger on what’s different, there is an apparent deviation from normal Bayesian updating. Normally, when you add some sense-data to your big history-o’-sense-data, you update by setting all incompatible hypotheses to zero and renormalizing what’s left. But this “anthropic update” seems to add hypotheses rather than only removing them—when you’re duplicated there are now more possible explanations for your sense-data, rather than the normal case of fewer and fewer possible explanations.
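To make the contrast concrete, here’s a toy sketch (my own framing, not anyone’s canonical formalization) of a duplication scenario: hypothesis A leaves one copy of you, hypothesis B creates two, and both are compatible with your current sense-data. Ordinary conditioning just zeroes out incompatible hypotheses and renormalizes; an SIA-style “anthropic update” additionally weights each hypothesis by how many copies of you it contains.

```python
priors = {"A": 0.5, "B": 0.5}          # prior over world-hypotheses
copies = {"A": 1, "B": 2}              # copies of you under each hypothesis (assumed toy numbers)
compatible = {"A": True, "B": True}    # both hypotheses predict your sense-data

def normal_update(priors, compatible):
    """Ordinary conditioning: zero out incompatible hypotheses, renormalize."""
    unnorm = {h: (p if compatible[h] else 0.0) for h, p in priors.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

def anthropic_update(priors, compatible, copies):
    """SIA-style update: also weight each hypothesis by the number of copies it contains."""
    unnorm = {h: (p * copies[h] if compatible[h] else 0.0) for h, p in priors.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

print(normal_update(priors, compatible))             # {'A': 0.5, 'B': 0.5}
print(anthropic_update(priors, compatible, copies))  # {'A': 0.333..., 'B': 0.666...}
```

The point is just that the second update isn’t expressible as “delete incompatible hypotheses and renormalize”; the extra copies act like extra evidence for B even though no hypothesis was ruled out.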
I think a Cartesian agent wouldn’t do anthropic reasoning, though it might learn to simulate it if put through a series of Sleeping Beauty type games.