Well, it has to do more with the original discussion. If you’re going to discount doomsday scenarios by putting them in appropriate reference classes and so forth, then either you automatically discount all predictions of collapse (which seems dangerous and foolish); or you have to explain very well indeed why you’re treating one scenario a bit seriously after dismissing ten others out of hand.
Or, if the reference class is “science-y Doomsday predictors”, then they’re almost certainly completely wrong. See Paul Ehrlich (overpopulation) and Matt Simmons (peak oil) for examples, both treated extremely seriously by the mainstream media at the time. So far, despite countless cases of science predicting doom and gloom, not a single prediction has turned out to be true, and they were usually wrong not just barely (little enough to be discounted by the anthropic principle) but spectacularly so. Cornucopians have been right virtually every time.
taw was saying that you should discount existential risk as such because it (the entire class of scenarios) is historically wrong. So it is the existential risk across all scenarios that was relevant.
We’d see the exact same type of evidence today if a doomsday (of any kind) were coming, so this kind of evidence is not sufficient.
I thought I addressed this with the “usually not just barely enough to be discounted by the anthropic principle, but spectacularly so” part. Anthropic-principle-style reasoning can only be applied to disasters with binary distributions (either they wipe out every observer in the universe, or at least on Earth, or they don’t happen at all), or at least with extremely skewed power-law distributions.
I don’t see any evidence that most disasters would follow such a distribution. I would expect any non-negligible chance of nuclear warfare destroying humanity to imply a near-certainty of limited-scale nuclear warfare, with millions dying every couple of years.
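The distinction can be made concrete with a toy Monte Carlo sketch (my own illustration, not anything from the thread; the 2% rate and severity values are arbitrary assumptions). For all-or-nothing risks, surviving observers always look back on a spotless record no matter how high the true rate, so the record is uninformative (the anthropic shadow). For graded risks, survivors see roughly the true frequency, so a clean historical record would be real evidence against the risk:

```python
import random

random.seed(0)

def surviving_histories(n_worlds, n_periods, disaster):
    """Simulate n_worlds histories; return the disaster counts observed
    from worlds where observers survive to look back at the record."""
    records = []
    for _ in range(n_worlds):
        count, alive = 0, True
        for _ in range(n_periods):
            severity = disaster()
            if severity >= 1.0:      # extinction: no observers remain
                alive = False
                break
            if severity > 0:
                count += 1           # a survivable disaster occurred
        if alive:
            records.append(count)
    return records

# Case A: binary disasters -- either nothing happens, or total extinction.
binary = lambda: 1.0 if random.random() < 0.02 else 0.0
# Case B: graded disasters -- same 2% per-period rate, but survivable.
graded = lambda: 0.1 if random.random() < 0.02 else 0.0

a = surviving_histories(10_000, 100, binary)
b = surviving_histories(10_000, 100, graded)

# Binary-risk survivors see a clean record regardless of the true rate:
print(sum(a) / len(a))   # 0.0 exactly
# Graded-risk survivors see roughly the true rate (~2 per 100 periods):
print(sum(b) / len(b))
```

The point of the sketch is that only in Case A does conditioning on survival distort the observed record; in Case B the record of past (survivable) disasters tracks the true rate, which is why a history of spectacularly failed doom predictions is evidence against graded risks.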
I think anthropic principle reasoning is so overused here, and so sloppily, that we’d be better off throwing it away completely.