Suppose the next thing you experience is waking up in a room. There is writing on the wall: “You had either a 1⁄100 or a 99⁄100 chance of being killed in your sleep before waking up, corresponding to the door being painted green or red from the outside.” Before opening the door and walking out, what color do you anticipate it will be from the outside?

You probably should think you are in a 1⁄100 room, right?
Setting aside the (rather plausible-sounding) hypothesis that the writing might not be entirely truthful…

The scenario you describe is a perfectly valid use of Bayesianism: using evidence from the past (“I woke up again in this room” + “the writing on the wall says…”) to make an informed prediction about the future (what color the outside of the door will be when I go look). Nothing in it involves using Frequentist thinking to construct an invalid, causality-violating Bayesian prior and then acting impressed when that starts emitting acausal predictions.
Well, you can imagine yourself updating on all the evidence as it came in, in series, like when you were a child and learned for the first time what year it is. You get a similar situation overall.
Yep (assuming I don’t have a prior that heavily favours the red door case for some reason), but in this case I think I’m just applying ordinary Bayesian reasoning to ordinary, non-identity-related evidence. The information I’m learning is not “I am this person”, but “this person is still alive”. That evidence is 99 times more likely in the green door case than in the red door case, so I update strongly in favour of the green door case.
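That survival update can be made concrete. A minimal sketch, assuming an even 50/50 prior over the two rooms (the prior and the function name are my own illustration, not from the thread):

```python
# Sketch of the survival update described above, assuming a 50/50 prior
# over which room you were put in. Names and numbers are illustrative.
def posterior_green(prior_green=0.5):
    p_alive_given_green = 99 / 100  # survived the 1/100-kill-chance room
    p_alive_given_red = 1 / 100     # survived the 99/100-kill-chance room
    joint_green = prior_green * p_alive_given_green
    joint_red = (1 - prior_green) * p_alive_given_red
    return joint_green / (joint_green + joint_red)

print(posterior_green())  # 0.99
```

The 99:1 likelihood ratio turns an even prior into 99% confidence in the green door.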
Okay. Do you notice that the streets you see tend to be more crowded, airplanes have more seats taken, and restaurants have more people in them, on average across your observations, than they actually are on average? It’s not at all esoteric; you have to make such corrections in ordinary modelling. Anthropic reasoning is a straightforward extension of this, onto rather uncertain base territory (and with attempts to do it in a principled way).
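The crowding effect described here is sometimes called the inspection paradox: a randomly chosen diner is more likely to be in a crowded restaurant than a randomly chosen restaurant is to be crowded. A quick simulation sketch (the occupancy distribution is invented for illustration):

```python
import random

random.seed(0)

# Invented occupancy distribution: most restaurants are nearly empty,
# a few are packed.
occupancies = [random.choice([1, 2, 3, 20]) for _ in range(10_000)]

# Average occupancy from the restaurants' point of view.
per_restaurant_avg = sum(occupancies) / len(occupancies)

# Average occupancy experienced by a random diner: a diner is n times
# more likely to be found in a restaurant holding n people.
per_diner_avg = sum(n * n for n in occupancies) / sum(occupancies)

print(per_restaurant_avg, per_diner_avg)  # the diner's average is higher
```

Dividing out that occupancy weighting is exactly the kind of correction ordinary modelling requires.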
Do you notice that the streets you see tend to be more crowded, airplanes have more seats taken, and restaurants have more people in them, on average across your observations, than they actually are on average?
You are mixing together situations where a person can be correctly approximated as a random sample from some population with situations where that’s not the case.

What we need to do is look at every situation and try to come up with an appropriate probabilistic model that describes it to the best of our knowledge. A map that fits the territory.

What mainstream anthropic reasoning does is assume that this model has to always be the same in every situation, and then try to bite ridiculous bullets when that predictably leads to bizarre conclusions.

I strongly suspect this planet currently has more than the median number of sapients per planet on it.
Oh yeah, I should have made this clear in my reply to you (I’d written it in a different comment just a moment before):
I do find anthropic problems puzzling. What I find nonsensical are framings of those problems that treat indexical information as evidence: e.g. in a scenario where person X (i.e. me) exists under both hypothesis A and hypothesis B, but hypothesis A implies that many more other people exist, I’m supposed to favour hypothesis B because I happen to be person X, which would be very unlikely given hypothesis A.
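For concreteness, the move being objected to runs a calculation like this (the population sizes, even prior, and function name are made-up illustration):

```python
# The objected-to update: treat "I am person X" as a uniform random draw
# from everyone who exists under each hypothesis. Numbers are invented.
def posterior_A(prior_A=0.5, people_in_A=1_000_000, people_in_B=100):
    p_being_x_given_A = 1 / people_in_A
    p_being_x_given_B = 1 / people_in_B
    joint_A = prior_A * p_being_x_given_A
    joint_B = (1 - prior_A) * p_being_x_given_B
    return joint_A / (joint_A + joint_B)

print(posterior_A())  # hypothesis A is all but ruled out
```

Whether modelling yourself as a uniform draw from all observers is legitimate is precisely the point in dispute above.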
If I roll a million-sided die, then no individual number rolled on it is more surprising than any other, not even a roll of 1 or 1,000,000 — UNLESS I’m playing an adversarial game where me rolling a 1 is uniquely good for my opponent. Then if I roll a 1 I should wonder if the die was fixed.
However, no matter your paranoia level, “a malicious opponent broke causality to send me back through time to be born before humanity got to go to the stars” is not a plausible physical theory. (No, not even under Hindu mythology: they’d send you forward to incarnate at the corresponding point in the next Kalpa cycle, instead.)
If I roll a million-sided die, then no individual number rolled on it is more surprising than any other, not even a roll of 1 or 1,000,000 — UNLESS I’m playing an adversarial game where me rolling a 1 is uniquely good for my opponent. Then if I roll a 1 I should wonder if the die was fixed.
Yes, but if you haven’t looked at the die yet, and the question of whether it’s showing a number lower than 100 is relevant for some reason, you’re going to strongly favour ‘no’.
(That’s not quite how I think about anthropic problems, though, because I don’t think there’s anything analogous to the dice roll—hence my original complaint about smuggled dualism.)
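The odds in the unlooked-at-die case are easy to pin down (assuming a fair, literal million-sided die, as in the example):

```python
# Fair million-sided die, not yet looked at: chance it shows < 100.
sides = 1_000_000
p_below_100 = 99 / sides          # outcomes 1..99
odds_against = (sides - 99) / 99  # roughly 10,000-to-1 in favour of 'no'
print(p_below_100, round(odds_against))
```

So before looking, ‘no’ is favoured at about ten-thousand-to-one.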
Agreed, on all counts