You’re absolutely right that, in principle, you want to think about both: how costly early action is and how often it turns out to be a false alarm. In a fully explicit model, you’d compare “how much harm do I avert if this really is bad news?” to “how often am I going to spend those costs for nothing?”
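For concreteness, here is a minimal sketch of what that explicit comparison looks like as a toy calculation. Every number in it (the probability the threat is real, the harm averted, the cost of acting early) is an assumption picked purely for illustration, not an estimate:

```python
# Toy expected-value comparison: act early iff the expected harm averted
# exceeds the cost of acting, which you pay whether or not the threat is real.
# All numbers are illustrative assumptions.

p_threat = 0.05          # assumed probability the bad news is real
harm_averted = 1_000.0   # assumed harm avoided by acting early (arbitrary units)
cost_of_acting = 20.0    # assumed cost of early action, paid even on false alarms

expected_benefit = p_threat * harm_averted   # 0.05 * 1000 = 50.0
act_early = expected_benefit > cost_of_acting

print(f"expected benefit {expected_benefit:.1f} vs cost {cost_of_acting:.1f} "
      f"-> act early: {act_early}")
```

With these made-up numbers you act even though the threat is only 5% likely to be real, because the downside dwarfs the cost of a wasted response.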
This note is deliberately staying one level up from that and just looking at the “training data” people’s guts get. In everyday life, most of us accumulate a lot of “big scary thing that turned out fine” and “I waited and it was fine” stories, and very few vivid “I waited and that was obviously a huge mistake” stories.
In a world where some rare events can permanently uproot you or kill you, it can actually be fine – even optimal – to tolerate a lot of false alarms. My worry is that our intuitions don’t just learn “signals are noisy”; they slide into “waiting is usually safe”, which can push people’s personal thresholds higher than they’d endorse if they were doing the full cost–benefit tradeoff explicitly.
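To put a hypothetical number on “a lot of false alarms”, using the same toy quantities as above: the break-even probability is just cost divided by harm averted, so when the potential harm is large, the rational action threshold sits very low, and most of the alarms you act on will turn out to be nothing. All values here are assumptions for illustration:

```python
# Break-even threshold for the toy model above: act whenever
# p_threat > cost_of_acting / harm_averted. With a large enough harm,
# that threshold is tiny, so most acted-on alarms end up being false alarms,
# and acting on them is still worth it. All numbers are illustrative assumptions.

harm_averted = 1_000.0
cost_of_acting = 20.0

p_break_even = cost_of_acting / harm_averted        # 0.02
false_alarm_share_at_threshold = 1 - p_break_even   # 0.98

print(f"act on any alarm with p > {p_break_even:.0%}; even at that threshold, "
      f"{false_alarm_share_at_threshold:.0%} of the alarms you act on are false")
```

Under these assumptions, acting on a 2% alarm means being “wrong” 98% of the time and still coming out ahead on expectation, which is exactly the tradeoff the gut-level “waiting is usually safe” lesson papers over.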