I have a couple of problems with anthropic reasoning, specifically the kind that says it’s likely we are near the middle of the distribution of humans.
First, this relies on the idea that a conscious person is a random sample drawn from all of history. Okay, maybe; but it’s a sample size of 1. If I use anthropic reasoning, I get to count only myself. All you zombies were selected as a side-effect of me being conscious. A sample size of 1 has limited statistical power.
ADDED: Although, if the future human population of the universe were over 1 trillion, a sample size of 1 would still give 99% confidence.
Second, the reasoning requires changing my observation. My observation is, “I am the Xth human born.” The odds of being the 10th human and the 10,000,000th human born are the same, as long as at least 10,000,000 humans are born. To get the doomsday conclusion, you have to instead ask, “What is the probability that I was human number N, where N is some number from 1 to X?” What justifies doing that?
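This uniform-rank claim is easy to sanity-check. A minimal sketch, using a toy total of 10,000,000 humans (my own assumed number, not from the thread): every exact birth rank is equally likely, while the cumulative question the doomsday argument substitutes scales with X.

```python
# Illustrative sketch: with N total humans and your birth rank drawn
# uniformly, every particular rank k <= N has the same probability,
# but the cumulative event "rank <= X" depends on X.
N = 10_000_000  # assumed total number of humans ever born (toy number)

def p_exact_rank(k, n_total=N):
    """P(I am exactly the k-th human born), uniform over n_total ranks."""
    return 1 / n_total if 1 <= k <= n_total else 0.0

def p_rank_at_most(x, n_total=N):
    """P(I am among the first x humans born) -- the doomsday-style question."""
    return min(x, n_total) / n_total

print(p_exact_rank(10) == p_exact_rank(10_000_000))  # True: same odds
print(p_rank_at_most(1_000_000))                     # 0.1: depends on X
```

The switch from the first function to the second is exactly the move being questioned here.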
Because we don’t care about the probability of being a particular individual; we care about the probability of being in a certain class (namely the class of people born late enough in history, whose probability is exactly one minus “the probability that I was human number N, where N is some number from 1 to X”).
But if you turn it around and say “where N is some number from X to the total number of humans ever born”, you get different results. And if you say “where N is within one decile (1/10th of all humans ever) of X”, you also get different results.
This is a different class, so yes, you get a different probability for belonging to it. But you likewise get a different probability that you’ll see a doomsday conditional on belonging to that class.
Consider class A, the last 10% of all people to live, and class B, the last 20%. Clearly there’s a greater chance I belong to class B. But class B has a lower expectation for observing doomsday. There’s a lower chance of being in a class with a higher chance of seeing doomsday, and a higher chance of being in a class with a lower chance of seeing doomsday.
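The trade-off described here can be put in toy numbers (mine, purely illustrative): take 1,000,000 total humans and stipulate that “seeing doomsday” means being among the final 50,000. Class B is twice as likely to contain you as class A, but its conditional probability of doom is half as large.

```python
# Hedged numeric sketch of the trade-off above. All numbers are assumed:
# N total humans, uniform birth rank, and "seeing doomsday" defined as
# being among the final D humans (a stand-in definition, not from the thread).
N = 1_000_000
D = 50_000   # the final 50,000 humans "see doomsday" (assumed)

def p_in_class(class_size):
    # Uniform birth rank: P(falling in a class of this many people).
    return class_size / N

def p_doom_given_class(class_size):
    # The class is the final `class_size` humans; doom-seers are the final D.
    return min(1.0, D / class_size)

for name, size in (("A: last 10%", 100_000), ("B: last 20%", 200_000)):
    print(name, p_in_class(size), p_doom_given_class(size))
```

Class membership probability goes up exactly as the conditional doom probability goes down, which is the tension the next few comments argue over.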
What’s wrong with this? I don’t see any problem with the freedom of choice for our class.
Both your examples still run from your current position to the end of all humans. What I said was that you get different results if you take one decile from your position, not all the way to the end. There’s no reason to do one rather than the other.
P(Observing doomsday) = P(Being in some class of people) * P(Observing doomsday | you belong to the class of people)
You get a different probability for belonging to those classes, but the conditional probabilities of observing doomsday given that you belong to those classes are different. I’m not convinced that these differences don’t balance out when you multiply the two probabilities together. Can you show me a calculation where you actually get two different values for your likelihood of seeing doomsday?
Maybe I’m misreading this, but it looks like you’re missing a term...
You said: P(O) = P(B) * P(O|B)
Bayes’s theorem: P(O) P(B|O) = P(B) P(O|B)
ne?
I agree that Jordan’s equation needs to be adjusted (corrected), but I humbly suggest that in this context, it is better to adjust it to the product rule:
P(O and B) = P(B) * P(O|B).
ADDED. Yeah, minor point.
Yes, correct. I missed that. For the standard Doomsday Argument P(B|O) is probably 1, so it can be excluded, but for alternative classes of people this isn’t so.
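A quick numeric check of the corrected product rule, using the same kind of toy setup (assumed numbers, not from the thread): when the doomsday event O (“being among the final D humans”) is a subset of the class B, P(B|O) = 1, so P(O and B) = P(B) · P(O|B) collapses to P(O) for every such class.

```python
# Toy check of P(O and B) = P(B) * P(O|B), with assumed totals:
# N humans, uniform birth rank, O = "among the final D humans",
# B = "among the final B_size humans" (D <= B_size, so O is a subset of B).
N, D = 1_000_000, 50_000

for B_size in (100_000, 200_000):     # class A = last 10%, class B = last 20%
    p_B = B_size / N
    p_O_given_B = D / B_size          # O sits entirely inside B
    p_O_and_B = p_B * p_O_given_B
    # Since O is a subset of B, P(B|O) = 1 and P(O and B) should equal P(O):
    print(B_size, p_O_and_B, abs(p_O_and_B - D / N) < 1e-12)
```

The product comes out to D/N for both classes, which is consistent with the point that P(B|O) can only be dropped when the class fully contains the doomsday event.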
The real problem with anthropic reasoning is that it’s just a default starting point. We are tricked because it seems very powerful in contrived thought experiments in which no other evidence is available.
In the real world, in which there is a wealth of evidence available, it’s just a reality check saying “most things don’t last forever.”
In real world situations, it’s also very easy to get into a game of reference class tennis.
I read the linked-to comment, but still don’t know what reference class tennis is.