What exactly do you propose that a Bayesian should do, upon receiving the observation that a bounded search for examples within a space did not find any such example?
(I agree that it is better if you can instead construct a tight logical argument, but usually that is not an option.)
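To make the question concrete, here is the kind of update I have in mind, as a minimal sketch with made-up numbers (the prior and the search-power figure are illustrative assumptions, not claims about any particular case): a failed bounded search is evidence whose strength depends on how likely the search was to find an example if one existed.

```python
# Illustrative Bayesian update on "a bounded search found no example".
# All numbers below are assumptions for the sake of the sketch.
prior = 0.5              # prior probability that at least one example exists
p_find_if_exists = 0.7   # chance the bounded search would have found one, if it exists
p_find_if_absent = 0.0   # a nonexistent example cannot be found

# Likelihood of observing "nothing found" under each hypothesis
p_none_if_exists = 1 - p_find_if_exists   # 0.3
p_none_if_absent = 1 - p_find_if_absent   # 1.0

posterior = (prior * p_none_if_exists) / (
    prior * p_none_if_exists + (1 - prior) * p_none_if_absent
)
print(f"Posterior that an example exists: {posterior:.2f}")  # ~0.23
```

With a search that would have caught an example 70% of the time, the posterior drops from 0.5 to about 0.23: a real update, but evidence of moderate rather than overwhelming strength.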
I also don’t find the examples very compelling:
Security mindset—Afaict the examples here are fictional
Superforecasters—In my experience, superforecasters have all kinds of diverse reasons for low p(doom), some good, many bad. The one you describe doesn’t seem particularly common.
Rethink—Idk the details here, will pass
Fatima Sun Miracle: I’ll just quote Scott Alexander’s own words in the post you link:
I will admit my bias: I hope the visions of Fatima were untrue, and therefore I must also hope the Miracle of the Sun was a fake. But I’ll also admit this: at times when doing this research, I was genuinely scared and confused. If at this point you’re also scared and confused, then I’ve done my job as a writer and successfully presented the key insight of Rationalism: “It ain’t a true crisis of faith unless it could go either way”.
[...]
I don’t think we have devastated the miracle believers. We have, at best, mildly irritated them. If we are lucky, we have posited a very tenuous, skeletal draft of a materialist explanation of Fatima that does not immediately collapse upon the slightest exposure to the data. It will be for the next century’s worth of scholars to flesh it out more fully.
Overall, I’m pleasantly surprised by how bad these examples are. I would have expected much stronger examples, since on priors I expected that many people would in fact follow EFAs off a cliff, rather than treating them as evidence of moderate but not overwhelming strength. To put it another way, I expected that your FA on examples of bad EFAs would find more and/or stronger hits than it actually did, and in my attempt to better approximate Bayesianism I am noticing this observation and updating on it.
It depends on the properties of the bounded search itself.
I.e., if you are a properly calibrated domain expert who can make 200 statements on a topic, each assigned probability 0.5%, and be wrong on average once, then when you arrive at a probability of 0.5% as the result of your search for examples, we can expect that your search space was adequate and not oversimplified, so the result is not meaningless.
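A minimal simulation of that calibration claim (the function name and trial count are mine, chosen for illustration): 200 independent statements, each assigned probability 0.5%, should come true about once per batch on average.

```python
import random

def average_hits(n_statements=200, p=0.005, n_trials=10_000):
    """Average number of 0.5%-probability events that occur per batch of statements."""
    total = 0
    for _ in range(n_trials):
        total += sum(random.random() < p for _ in range(n_statements))
    return total / n_trials

print(f"Average 0.5% calls that come true per 200 statements: {average_hits():.2f}")
# Expected value is 200 * 0.005 = 1.0, which is what "wrong on average once" cashes out to.
```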
If you operate in a confusing, novel, adversarial domain, especially when the domain is “the future”, then whenever you find yourself assigning a probability of 0.5% for any reason that is not literally a theorem or a physical law, your default move should be to say “wait, this probability is ridiculous”.
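One way to make that reflex quantitative, as a sketch with assumed numbers (the 5% and 20% figures are mine, purely illustrative): the estimate cannot be more reliable than the model that produced it, and even a modest chance that the search space was oversimplified swamps a 0.5% figure.

```python
# Illustrative mixture: the 0.5% estimate only holds if the bounded search / model was adequate.
p_model_ok = 0.95        # assumed chance the search space was actually adequate
p_event_if_ok = 0.005    # the 0.5% figure, valid only in that case
p_model_bad = 0.05       # assumed chance the search space was oversimplified
p_event_if_bad = 0.20    # assumed probability of the event if the model missed something

p_event = p_model_ok * p_event_if_ok + p_model_bad * p_event_if_bad
print(f"Overall probability: {p_event:.3f}")  # ~0.015, roughly 3x the naive 0.5%
```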
As an aside, the formalisms that deal with this properly are not Bayesian; they are built for nonrealizable settings. See Diffractor and Vanessa’s work, e.g. this paper: https://arxiv.org/abs/2504.06820v2
Also, my experience with actual superforecasters, as opposed to people who forecast in EA spaces, has been that this failure mode is quite common and problematic, even outside of existential risk. For example, it showed up on questions during COVID, especially early on.