Only skimmed, but I think you need to include the COST of early action times the probability of a false alarm in the calculation.
The high number of false positives silently trains us to wait
For me, the high number of false positives loudly and correctly trains me to wait. Bayes for the win—every false alarm is evidence that my signal is noisy. As a lot of economists say, “the optimal error rate is not 0”.
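To make the "every false alarm is evidence my signal is noisy" point concrete, here is a toy Beta-Bernoulli sketch (the prior and the counts are my own invented numbers, not anything from the post): each alarm that turns out to be nothing drags down your estimate of how often an alarm is real.

```python
# Toy sketch: treat each alarm outcome as a Bernoulli draw and keep a Beta
# posterior over the signal's precision, P(alarm is real). Numbers are invented.

def updated_precision(prior_alpha, prior_beta, true_alarms, false_alarms):
    """Posterior mean of P(alarm is real) after the observed outcomes."""
    alpha = prior_alpha + true_alarms
    beta = prior_beta + false_alarms
    return alpha / (alpha + beta)

# Start agnostic: Beta(1, 1), posterior mean 0.5.
print(updated_precision(1, 1, true_alarms=0, false_alarms=0))   # 0.5
# Ten alarms in a row that all turned out fine: posterior mean falls to ~0.08.
print(updated_precision(1, 1, true_alarms=0, false_alarms=10))  # ~0.083
```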
You’re absolutely right that, in principle, you want to think about both: how costly early action is and how often it turns out to be a false alarm. In a fully explicit model, you’d compare “how much harm do I avert if this really is bad news?” to “how often am I going to spend those costs for nothing?”
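A minimal sketch of that explicit comparison, with variable names and numbers that are purely illustrative (none of them come from the post): the false-alarm cost the first comment asks about shows up as the part of the spend that is wasted whenever the alarm turns out to be nothing.

```python
# Illustrative decision rule: act early iff the expected harm averted exceeds
# the cost of acting, which you pay whether or not the alarm was real.
# The wasted spend on false alarms is (1 - p_real) * cost_of_action.

def should_act_early(p_real, harm_averted, cost_of_action):
    """Return True if acting early has higher expected value than waiting."""
    return p_real * harm_averted > cost_of_action

# A 5% chance of a catastrophic loss can justify a moderate cost...
print(should_act_early(p_real=0.05, harm_averted=1_000_000, cost_of_action=10_000))  # True
# ...while the same cost is not worth paying against a mild downside.
print(should_act_early(p_real=0.05, harm_averted=100_000, cost_of_action=10_000))    # False
```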
This note is deliberately staying one level up from that, and just looking at the training data people’s guts get. In everyday life, most of us accumulate a lot of “big scary thing that turned out fine” and “I waited and it was fine” stories, and very few vivid “I waited and that was obviously a huge mistake” stories.
In a world where some rare events can permanently uproot you or kill you, it can actually be fine – even optimal – to tolerate a lot of false alarms. My worry is that our intuitions don’t just learn “signals are noisy”; they slide into “waiting is usually safe”, which can push people’s personal thresholds higher than they’d endorse if they were doing the full cost–benefit tradeoff explicitly.
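To put a number on "it can be optimal to tolerate a lot of false alarms" (again with invented figures, continuing the sketch above): a signal that cries wolf 19 times out of 20 can still be worth acting on every single time if the one real event is bad enough.

```python
# Toy continuation: acting on every alarm beats waiting if the harm averted on
# the one real event exceeds the cost paid across all alarms, real and false.

def worth_acting_every_time(false_alarms_per_real_event, harm_averted, cost_of_action):
    total_cost = (false_alarms_per_real_event + 1) * cost_of_action
    return harm_averted > total_cost

# 19 false alarms per real event, 10,000 per response, 1,000,000 at stake:
print(worth_acting_every_time(19, harm_averted=1_000_000, cost_of_action=10_000))  # True: 1,000,000 > 200,000
```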