In this case, all those customers were already alive when the shop opened (I assume), so the observation does suggest that, if this process is in fact going to continue for several more hours with the store getting more and more crowded, then there might well be some mechanism that applies to most customers and causes them to choose to arrive late, but somehow doesn’t apply to you. For example, maybe they all know when the store closes, know that the store’s chili gets stronger the longer it’s cooked, and like very strong chili.
The causality here is different, because you can reasonably assume that the other customers got up in the morning, asked themselves “When should I go to Fred’s Chili Shop?”, and it seems a lot of them picked “not long before it closes”. But you are implicitly assuming that you already know this process is in fact going to continue. So it’s rather as if you asked Fred, and he told you: yeah, there’s always a big rush at the end of the day; few people get here as early as you. At that point the causal paradox has simply gone away: you actually do have solid grounds for making a prediction about what’s going to happen later in the day, because Fred told you, and he should know.
But if you know for a fact that all the customers are only 10 minutes old (including you), and so decided to come here less than 10 minutes ago, then the only reasonable assumption is that there’s a very fast population explosion going on, and you have absolutely no idea how much longer it will last, or how soon Fred will run out of chili and close the shop. In that situation, your predictive horizon is just short: you don’t know what’s going to happen after that, and clearly neither does Fred, so you can’t just ask him.
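As a toy way of putting numbers on “absolutely no idea”: Gott’s delta-t estimate (which is basically the Doomsday Argument’s engine, and not something the thread explicitly invokes) says that if your arrival time is a uniform random draw over the process’s total lifetime, ten minutes of past buys you only absurdly wide bounds on the future. A minimal sketch, with that uniform-sampling assumption doing all the work:

```python
# Toy sketch of Gott's delta-t estimate. Assumption: your arrival is a
# uniform random sample over the process's total lifetime. Then with
# confidence c, the remaining duration t_future lies between
# t_past * (1 - c) / (1 + c) and t_past * (1 + c) / (1 - c).

def delta_t_interval(t_past: float, confidence: float = 0.95) -> tuple[float, float]:
    """Bounds on remaining duration, given only the elapsed duration."""
    lo = t_past * (1 - confidence) / (1 + confidence)
    hi = t_past * (1 + confidence) / (1 - confidence)
    return lo, hi

# Ten minutes into the mayfly rush:
lo, hi = delta_t_interval(10.0)  # minutes elapsed
print(f"95% interval: {lo * 60:.0f} seconds to {hi / 60:.1f} hours more")
# -> 95% interval: 15 seconds to 6.5 hours more
```

The interval spans more than three orders of magnitude, which is just “I don’t know” with error bars attached.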
But you are implicitly assuming that you already know this process is in fact going to continue. So it’s rather as if you asked Fred, and he told you: yeah, there’s always a big rush at the end of the day; few people get here as early as you.
I didn’t mean to imply certainty, just an uncertain expectation based on observation. Maybe I asked Fred, or the other customers, but I didn’t receive any information about ‘the end of the day’, only confirmation of the trend so far.
(I’m not trying to be difficult for the sake of it, by the way! I just want to think these things through carefully and genuinely understand what you’re saying, which requires pedantry sometimes.)
edit in response to your edit:
But if you know for a fact that all the customers are only 10 minutes old (including you), and so decided to come here less than 10 minutes ago, then the only reasonable assumption is that there’s a very fast population explosion going on, and you have absolutely no idea how much longer it will last, or how soon Fred will run out of chili and close the shop. In that situation, your predictive horizon is just short: you don’t know what’s going to happen after that, and clearly neither does Fred, so you can’t just ask him.
I think I’m not quite understanding the distinction here. Why is there an important difference between “this trend is based on mechanisms of which I’m ignorant, such as the other customers’ work hours or their expectations about chili quality over time” and “this trend is based on different mechanisms of which I’m also ignorant, i.e. birth rates and chili inventory”?
Hmmm… Good question. Let’s do the Bayesian thing.
I think it’s because of our priors. In the normal-city case, we already know a lot about human behavior, and we have built up very strong priors that constrain the hypothesis space pretty hard. The hotter-chili hypothesis I came up with seems plausible; there are others, but the space of them is rather tightly constrained, so we can do forward modelling fairly well.

Whereas in the Doomsday Argument case, or my artificial analogy to it involving 10-minute lifespans and something very weird happening, our current sample size for “How many sapient species survive their technological adolescence?” or “What happens later in the day in cities of sapient mayflies?” is zero. In dynamical-systems terms, the rest of the day is many more Lyapunov times away in this case. From our point of view, a technological adolescence looks like a dangerous process, but making predictions is hard, especially about the future of a very complex, very non-linear system with 8.3 billion humans and an exponentially rising amount of AI in it. The computational load of doing accurate modelling is simply impractical, so our future even 5–10 years out looks like a Singularity to our current computational abilities.

So the constraints on our hypothesis distribution are weak, and we end up relying mostly on our arbitrary choice of initial priors. We’re still at the “I really just don’t know” point in the Bayesian process on this one. That’s why people’s P(doom)s vary so much: nobody actually knows, they just have different initial default priors, basically depending on temperament. Our future is still a Rorschach inkblot. Which is not a comfortable time to be living in.
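To make the sample-size-zero point concrete, here’s a toy Beta–Bernoulli sketch (the priors and labels are invented for illustration, not anything from the argument itself): with zero relevant observations, the posterior is exactly the prior, so a stated P(doom) mostly reads back whatever temperament chose the prior.

```python
# Toy illustration: model "does a species survive technological
# adolescence?" as Bernoulli with unknown rate p, and put a Beta(a, b)
# prior on p. With n = 0 observations, the posterior mean is just
# a / (a + b): the data contribute nothing, the prior contributes all.

from dataclasses import dataclass

@dataclass
class BetaBernoulli:
    a: float  # prior pseudo-count of "survived"
    b: float  # prior pseudo-count of "did not survive"

    def update(self, survived: int, perished: int) -> "BetaBernoulli":
        # Conjugate update: observed counts add to the pseudo-counts.
        return BetaBernoulli(self.a + survived, self.b + perished)

    @property
    def mean(self) -> float:
        return self.a / (self.a + self.b)

# Three temperaments, same (empty) evidence:
priors = {
    "optimist":  BetaBernoulli(9.0, 1.0),
    "uniform":   BetaBernoulli(1.0, 1.0),
    "pessimist": BetaBernoulli(1.0, 9.0),
}

for name, prior in priors.items():
    posterior = prior.update(survived=0, perished=0)  # sample size zero
    print(f"{name:9s}: P(doom) = {1 - posterior.mean:.2f}")
# optimist : P(doom) = 0.10
# uniform  : P(doom) = 0.50
# pessimist: P(doom) = 0.90
```

Same model, same (absent) data, and P(doom) lands anywhere from 0.10 to 0.90 purely on the choice of prior: the Rorschach inkblot, in code.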