The probability is only different when you think the world is in a different state. This no more violates conservation of expected evidence than putting the kettle on does by predictably changing the probability of hot water. The weird part is which part of the world is different.
All the odds here are about the outcome of a coin flip known to already be in the past. Those odds should not change in the ways described here.
Hm, you’re right, I guess there is something weird here (I’m not talking about FNC, Full Non-indexical Conditioning, though I think that part is weird too; I mean “ordinary” anthropic probabilities).
If I had to try to put my finger on what’s different, there is an apparent deviation from normal Bayesian updating. Normally, when you add some sense-data to your big history-o’-sense-data, you update by setting all incompatible hypotheses to zero and renormalizing what’s left. But this “anthropic update” seems to add hypotheses rather than only removing them: when you’re duplicated, there are suddenly more possible explanations for your sense-data, rather than the normal case of fewer and fewer possible explanations.
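To make that contrast concrete, here’s a minimal Python sketch (my own illustration, not from the discussion above: the hypothesis names are made up, and splitting each world’s probability evenly across its copies is an SSA-style modelling choice, where SIA would instead weight worlds by their copy counts):

```python
def bayes_update(prior, consistent):
    """Ordinary conditioning: drop hypotheses incompatible with the
    new sense-data, then renormalize what's left."""
    kept = {h: p for h, p in prior.items() if consistent(h)}
    total = sum(kept.values())
    return {h: p / total for h, p in kept.items()}

def anthropic_update(world_prior, copies_in):
    """The 'adding hypotheses' move: each world-hypothesis splits into
    one centered hypothesis per copy of you it contains.  Splitting a
    world's mass evenly across its copies is an SSA-style assumption;
    SIA would instead weight worlds by how many copies they contain."""
    centered = {}
    for world, p in world_prior.items():
        n = copies_in(world)
        for i in range(1, n + 1):
            centered[f"{world}, copy {i}"] = p / n
    return centered

# Ordinary update: hypotheses only ever get eliminated.
print(bayes_update({"rain": 0.3, "no rain": 0.7}, lambda h: h == "rain"))
# {'rain': 1.0}

# A coin flip that duplicates you on tails: the hypothesis space grows
# from two world-hypotheses to three centered ones.
print(anthropic_update({"heads": 0.5, "tails": 0.5},
                       lambda w: 2 if w == "tails" else 1))
# {'heads, copy 1': 0.5, 'tails, copy 1': 0.25, 'tails, copy 2': 0.25}
```

The point is just the shape of the two operations: ordinary conditioning can only shrink the hypothesis set, while the duplication step manufactures new centered hypotheses.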
I think a Cartesian agent wouldn’t do anthropic reasoning, though it might learn to simulate it if put through a series of Sleeping-Beauty-type games.
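As a toy illustration of the “learn to simulate it” part (assuming, which the games themselves don’t specify, that the agent is scored once per awakening), the feedback from repeated games converges to thirder odds:

```python
import random

def per_awakening_heads_frequency(trials=100_000, seed=0):
    """Score 'is the coin heads?' once per awakening across many games
    (tails means two awakenings).  The per-awakening frequency of heads
    converges to 1/3, so an agent tuned on this feedback would end up
    betting like a thirder even if its underlying updating never does
    anything specifically anthropic."""
    rng = random.Random(seed)
    heads_awakenings = total_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # heads: one awakening
            heads_awakenings += 1
            total_awakenings += 1
        else:                    # tails: two awakenings
            total_awakenings += 2
    return heads_awakenings / total_awakenings

print(per_awakening_heads_frequency())  # ~0.333
```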