Fair point, and I do find anthropic problems puzzling. What I find nonsensical are framings of those problems that treat indexical information as evidence—e.g. in a scenario where person X (i.e. me) exists on both hypothesis A and hypothesis B, but hypothesis A implies that many more other people exist, I’m supposed to favour hypothesis B because I happen to be person X and that would be very unlikely given hypothesis A.
The Doomsday Argument, translated into the Bayesian framework, is actually:
1. Suppose everyone who has lived since the invention of Science Fiction said “I don’t know if we’re all going to die soon, so I’m one of the last humans ever (let’s call that probability X%), or if we’re going to spread to the stars and I’m one of the very first humans ever by some extremely large factor Y (with a 100-X% chance).”
2. (100-X)% divided by an extremely large factor Y is obviously an extremely small number, approximately zero, therefore X is almost certainly equal to 100.
Notice that step 2 here is completely specious, and no one thinking in a Bayesian framework would entertain it for a moment. It’s confused thinking that sounds plausible if you think like a Frequentist and get careless with causality.
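As a sketch, the update in step 2 can be written out numerically. The 50% prior below is a placeholder assumption for illustration, and the factor of 10^10 is borrowed from the “1-in-10^10 minority” figure used later in this thread; neither is a real estimate:

```python
def doomsday_posterior(prior_doom, y):
    """Posterior on 'doom soon' after the (specious, per the discussion)
    step-2 update that penalises the 'stars' branch by the factor y."""
    # Likelihood of being an early human: 1 under doom-soon (everyone is
    # early), 1/y under the stars hypothesis (only 1-in-y humans are this
    # early). This is exactly the division the argument relies on.
    p_doom = prior_doom * 1.0
    p_stars = (1.0 - prior_doom) * (1.0 / y)
    return p_doom / (p_doom + p_stars)

# With any large y, 'doom soon' swamps 'stars' regardless of the prior:
print(doomsday_posterior(0.5, 1e10))
```

The point of the sketch is only to show the mechanics: for large Y the stars branch is divided to nearly zero, which is why step 2 forces X toward 100 once the update is accepted.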
I don’t think you need completely specious reasoning to get to a kind of puzzling position, though. For us to be in the first <relatively small n>% of people, we don’t need humanity to spread to the stars—just to survive for a while longer without a population crash. And I think we do need some principled reason to be able to say “yes, ‘I am in the first <relatively small n>% of people’ is going to be false for the majority of people, but that’s irrelevant to whether it’s true or false for me”.
Humans have a very understandable tendency, when they see what appears to be a low-probability event occurring, to get suspicious and wonder if some opponent has maneuvered things somehow to finagle a high probability of an apparently-low-probability event. We pay attention to what look like flukes, and are dubious about them. But if you can safely dismiss the possibility that before you were incarnated your soul was carried back to this time by an evil time-traveling mastermind, then the only remaining possibility is just “viewed from an achronous perspective, a low probability event has occurred — just like they always do right at the start of anything”. Sometimes they do. Especially if there were a vast number of other equally low probability events that could have occurred. Rolling a 1 on a million-sided die is kind of neat, though not actually any more improbable than rolling 314,159, and it’s not actually suspicious unless it advantages someone else who had their hands on the die. But if you watch a clock for long enough, sooner or later it will read 00:00:00.
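The die point can be checked numerically; this simulation is purely illustrative (the seed, roll count, and the two faces compared are arbitrary choices, not anything from the discussion):

```python
import random

random.seed(0)
SIDES = 1_000_000
ROLLS = 2_000_000  # expect each specific face about twice

hits_1 = hits_pi = 0
for _ in range(ROLLS):
    face = random.randrange(1, SIDES + 1)
    hits_1 += (face == 1)
    hits_pi += (face == 314_159)

# Neither face is special: both counts hover around ROLLS / SIDES = 2.
print(hits_1, hits_pi)
```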
“viewed from an achronous perspective, a low probability event has occurred — just like they always do right at the start of anything”
What’s the ‘low probability event’? I think this is the kind of framing I was disagreeing with in my original reply; there seems to be an implicit dualism here. So your reply isn’t, from my perspective, addressing my reasons for finding anthropic reasoning difficult to completely dismiss.
“viewed from an achronous perspective, a low probability event has occurred” means: an event such that, if I were in a position to do a random sampling over all humans who ever live – something which can only be done once we’re extinct – it would then have a low probability of occurring in that random sample: such as (temporarily assuming that humans do get to go to the stars before becoming extinct) randomly selecting, out of all humans ever, one of the tiny 1-in-10^10 minority of humans who lived before humans went to the stars.
So, if an alien archeologist from after humans go extinct wanted to write “a day in the life of a typical human” and selected a ‘typical’ human randomly, and then got one from before humans got to go to the stars, like you or me, that would be very atypical (and they might reconsider their definition of typical, or at least reroll).
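The archeologist’s draw can be sketched as a simulation. The population figures below (10^11 humans before star-faring, 10^21 after) are illustrative assumptions chosen only to reproduce the 1-in-10^10 ratio mentioned above:

```python
import random

random.seed(1)
# Hypothetical counts, purely for illustration: humans who lived before
# star-faring vs after. Their ratio gives the 1-in-10^10 minority.
pre_stars, post_stars = 10**11, 10**21
p_pre = pre_stars / (pre_stars + post_stars)

# The alien archeologist's draw: how often does a uniform sample over
# all humans ever land on a pre-stars human? With p_pre ~ 1e-10, a
# pre-stars human essentially never comes up in a modest number of draws.
draws = 100_000
hits = sum(random.random() < p_pre for _ in range(draws))
print(p_pre, hits)
```

Which is the sense in which getting “one from before humans got to go to the stars” would prompt a reroll.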
So yes, there really is a dualism element here, as you put it: we’re positing some sort of extraneous random selection process, specifically one that inherently can only occur in the (hopefully far) future, and then assuming that it has some relationship to our viewpoint. It simply doesn’t: it having such a relationship would necessarily break causality. The concept of “a typical human randomly selected out of all humans who have ever or will ever live” currently just isn’t well defined, no matter how intuitive it might sound to a Frequentist. Later, once we’re extinct, the concept of “a typical human out of all humans that did ever live” will become well defined, but assuming that you know anything about it now, or that it could have any causal relationship to your viewpoint, is false, because that would require precognition. If we get to go to the stars, your having previously existed will then be exactly as surprising as the existence of the point 0.1 nanometers to the right of the 0 on a meter ruler. Yes, it’s in some sense an atypical point; such points exist. Currently we don’t know if we’re going to get to go to the stars, and your existence isn’t surprising now either.[1]
However, we are not participating in a “roll a lucky winner” competition held at the end of time (that we know of). Where you happen to find yourself standing has nothing to do with events in the far future. Happening to find yourself standing at a time before humans may, or may not, go to the stars tells you absolutely nothing about the future, including about whether they will or not. Causality doesn’t work that way. Bayesianism is about the process of acquiring knowledge over time, so it is carefully set up to account for causality: we have observations about the past, and we can only attempt to make predictions about the future. Frequentism isn’t, and stuff that actually makes no causal sense often seems quite intuitive if you use Frequentism.
But that’s not how I’m thinking of it in the first place—I’m not positing any random selection process. I just don’t see an immediately obvious flaw here:
1. by definition, “I am in the first 10% of people” is false for most people
2. so I should expect it to be false for me, absent sufficient evidence against
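The first premise above is just counting and is not in dispute; the contested move is whether “me” behaves like a uniform draw from that population. A minimal check of the counting (the population size n is arbitrary):

```python
# For any finite, ordered population of size n, "I am in the first 10%
# of people" is true for exactly n/10 of them, so a uniformly drawn
# member satisfies it with probability 0.1 and fails it with 0.9.
n = 1_000
first_tenth = [i for i in range(n) if i < n / 10]
p_false = (n - len(first_tenth)) / n
print(p_false)  # 0.9
```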
And I still don’t quite understand your response to this formulation of the argument. I think you’re saying ‘people who have ever lived and will ever live’ is obviously the wrong reference class, but your arguments mostly target beliefs that I don’t hold (and that I don’t think I am implicitly assuming).
[1] At least, not on the basis of what little information I’m aware of about you!