The Hidden Asymmetry in Personal Preparedness: Early Costs, Late Losses
Prologue
Note: The Ukraine example is not making any claim about what someone ought to do in wartime (stay, fight, flee, help others, and so on). Those questions are outside the scope of this note.
Instead, I use the Ukraine case only to illustrate a simple structural point: when danger increases rapidly, the timing of action can sharply change an individual’s risk profile, whatever their values or duties happen to be.
In the days before the Russian full-scale invasion of Ukraine in 2022, a healthy Ukrainian man of draft age already faced some chance1 of death or permanent injury (car accident, cancer, and so on) — but that risk was diffuse and long-term. Once martial law was declared, young men were barred from leaving the country by their own government and de facto forced to fight, exposing them to significant risk on the battlefield. This sharply increased their risk of dying.
Very rough back-of-the-envelope calculations2 using public casualty estimates suggest that, for a draft-eligible man who stayed in Ukraine, the chance of being killed or permanently disabled over the next few years may have ended up in the low single-digit percentage range. The point of this example is that early action mattered: once the state closed the borders — acting to maximise national defence at the expense of individual welfare — the risk of dying for a young, healthy man jumped significantly, perhaps doubling or more, and the easiest way to avoid that jump was to have left early, before the borders closed.
Claims:
Claim 1: Structural timing tradeoff
In most crises, people face a timing decision under uncertainty. You choose whether to act early or to wait, and only later does the world reveal whether the threat was real. These two dimensions form four simple categories — early/late × disaster/no disaster — a conceptual tool for understanding the act-early/act-late tradeoff.
Claim 2: Preparedness design concern
This note doesn’t try to estimate how often people end up in each quadrant. Its goal is instead to use the 2×2 to illustrate an asymmetry in the timing tradeoff: everyday experience and social feedback make “wait and see” more salient than “act early and be glad you did.” That structural skew makes it easy, without deliberate effort, to end up unable or unwilling to act early enough, including for future risks that are likely to come with noisy, disputed signals.
Key take-aways:
Many crises can be thought of as a timing choice: act early under uncertainty, or wait for more information but risk the often extreme costs of delayed action.
Our lived experience includes far more “I waited and it was fine” episodes, which seem to bias us toward late action by default.
More work should be done to assess how often this bias negatively influences individuals’ preparedness, and to identify strategies to tip the odds in their favor.
Scope, confidence and intended audience
What this text does
Presents the timing tradeoff framework (early vs late × disaster vs no disaster).
Uses a few historical cases to make the structure vivid at the individual level.
Aims to help readers notice whether their own “move early vs wait” intuitions might be systematically miscalibrated.
What this text does not do3
Does not formalise the tradeoff (no thresholds, equations, or probabilities).
It does not claim these examples are clean or uniquely best—historical cases are messy.
It does not argue that early action is usually right; only that the tradeoff exists and is easy to mislearn.
Confidence statement/strength of claims
I am not making any claim about how universal this tradeoff is, nor how often early action is justified – that’s future work.
I am highly confident (~80–90%) that the basic early/late asymmetry exists to a meaningful degree for many people: our lived experience structurally overweights “I waited and it was fine” relative to “I acted early and was glad” or “I waited and deeply regretted it.” This is based on the conceptual structure plus a mix of historical cases and some relevant psychology on normalcy bias and habituation.
I am moderately confident (~50–60%) that the specific examples I use here illustrate the structure clearly enough for intuition-building. There are many plausible cases; I did a limited search rather than an exhaustive one, and real-world episodes are messy and cut across several of the four quadrants presented here. That makes me unsure whether these are close to the clearest possible illustrations, not whether the underlying pattern exists. The examples are there to build intuition for the structure, not to show that real-world events fall cleanly into four distinct quadrants.
This note offers only intuition, and thus only weak evidence about specific “act-early” thresholds or concrete preparedness actions; investigating those more directly is work for a follow-on piece.
I’m not taking a strong position on the size of ‘warning fatigue’ effects. The empirical literature there is mixed and contested; where I mention false alarms, it’s as an intuition pump rather than a claim that we already know their net effect on behaviour. Nor is it a claim about the prevalence of this bias in the general population.
You should treat this note as an intuition-shaping lens, not as a decision procedure. The goal is only to help you notice: “I may be more biased toward waiting than I thought.” How to recalibrate that comes in a later piece.
Example selection / research process
For this note I did a shallow search (considered 50-100 historical cases) for historically vivid, reasonably well-documented cases that roughly map onto each quadrant (late/no disaster, early/no disaster, late/disaster, early/disaster). I relied on a mix of primary reporting, historical reconstructions, and a small number of academic papers. I did not try to identify the globally “cleanest” possible examples or to fully adjudicate causal debates about each case. That’s why I treat the examples as intuition-building rather than strong evidence about specific thresholds.
Who this text is for
People who are already interested in going beyond baseline government preparedness.
The tradeoff and the 4 possible outcomes
In many crises, people face a choice between acting early and acting late, and each option has different costs. Early action is often socially awkward or materially costly, and it frequently turns out to be unnecessary if the threat never materialises. Late action feels normal until suddenly it isn’t — and when a disaster does unfold, late movers face the steepest costs, sometimes losing their health, homes, or lives. This creates a timing tradeoff that shows up across many types of risks.
The four quadrants in this early/late framework are:
Act late + no disaster: you wait for more information, nothing bad happens locally, and you suffer no obvious costs.
Act early + no disaster (Y2K): you take costly action based on a threat that doesn’t materialise.
Act late + disaster (Joplin tornado): you wait for clearer evidence, the disaster hits, and you’re in harm’s way.
Act early + disaster (Gunnison): you move or re-organise your life before local danger arrives, and avoid the worst harm when it does.
Over a lifetime, most of our salient experiences are of the first type – ‘I waited and it was fine’ – and relatively few of the others. That skewed training signal is a big part of why our guts bias us toward waiting.
Real disasters rarely fall neatly into one of the above four boxes. For example, the disaster/no-disaster threshold is muddy: a small wildfire can still ruin vegetation and scenery, and there are degrees of destruction where it is hard to draw the line between disaster and no disaster. The point of the framework isn’t to perfectly classify every individual outcome, but to highlight a structural pattern in how timing, uncertainty, and losses interact.
If you combine a lifetime of “I waited and it was fine” with vivid stories of early actors who look foolish in hindsight, you get a gut-level bias toward acting late — even when the signals are screaming. Joplin is what that looks like. The rest of this note walks through the four quadrants in turn, ending with a hopeful example of early action that actually mattered.
The examples I use are “regular” catastrophes with reasonably well-understood dynamics and data. They’re probably not the main contributors to overall risk to an individual going forward; my preliminary analysis is that rare tail events — large nuclear exchanges, globally catastrophic pandemics, and interactions with advanced AI — dominate that picture. My best guess is that the same timing structure appears, and often more sharply, in those tail scenarios: rare, high-impact threats where early warning is noisy, expert views diverge, and institutional mitigation (if it happens) is largely invisible to individuals. I use more mundane cases here because they’re tractable and emotionally legible, and because they can still give a decent first-pass intuition for that timing problem.
The importance of individual timing decisions may also grow if institutional early-warning capacity erodes: for example, if democratic institutions, public-health agencies, and international early-warning systems weaken.
How “nothing happened” experiences can skew us toward waiting
Key take-away: A high number of ‘nothing happened’ experiences silently trains us to wait: we experience ‘I waited and it was fine’ thousands of times, and almost never viscerally experience the opposite.
The signals of a catastrophe are there, but people mostly wait — and nothing happens to them. This seems to be the most common outcome following early signals of a potential catastrophe. It is business as usual, but it sets us up to fail in a real emergency. This is the core asymmetry Claim 2 is about: our everyday experience overwhelmingly reinforces “wait and it’ll probably be fine,” while the cases where early action mattered are rarer and less vivid.
In 2009, some officials explicitly compared early H1N1 numbers to 1918. For most people in rich countries, that translated to a few alarming headlines, no major change in behaviour, and a pandemic that felt mild enough to file under “overblown scare.” Similar patterns have repeated with SARS, MERS, and Ebola for people outside the affected regions: serious experts were worried; the median person read about it, did nothing, and watched the story fade from the news.
Similarly, there have been repeated moments when nuclear war looked — at least from some expert perspectives — like a live possibility: the Cuban Missile Crisis, and later periods of elevated nuclear risk such as the Kargil War or the invasion of Ukraine. Similar things could be said about overdue major earthquakes. Again, each time, most people didn’t move, didn’t build a shelter, didn’t overhaul their lives. So far, for almost all of them, that “do nothing” choice has worked out.
At a smaller scale, we get the same reinforcement loop. We ignore that nagging “should I back up my data, move some savings, or see a doctor about this?” feeling, and most of the time nothing obviously bad happens. The world rarely labels these as “near misses”; it just stamps them “nothing” and moves on.
Over a lifetime, this creates a very lopsided training signal: thousands of “I waited and it was fine” experiences, and far fewer vivid “I acted early and was glad” or “I waited and deeply regretted it” examples. The issue is that if you design your preparedness thresholds using only your gut, your gut has been learning from a heavily biased sample. This would be further exacerbated if, indeed, the threats of tomorrow look different from those of the past.
Side note: a high false positive rate is probably inevitable if you want early action in rare, fast-moving crises. I say more about that in a footnote4.
Takeaways from “act late + no disaster” experiences
Many apparently high-stakes threats (pandemics, nuclear crises, “overdue” earthquakes) have produced real expert concern but, for most individuals, no obvious personal harm.
Each of these “I waited and it was fine” cases very slightly rewards inaction; over time they vastly outnumber personal experiences of “early action + disaster” or “late action + disaster.”
As a result, unaided gut intuitions about when to move are systematically biased toward waiting, not because we’ve carefully analysed the tradeoff, but because our lived data is skewed.
Interesting to explore in future follow-on pieces: can historical near-misses like the ones described in this section be used to set reasonable early-action thresholds, based on acceptable false positive rates?
The embarrassment of preparing for Y2K makes the bias against early action worse
Key take-away: When early action precedes a non-event (regardless of whether it was competently mitigated or we just got lucky), the people who acted pay real costs and often feel foolish. That experience biases everyone further against early action next time.
In the late 1990s, governments and companies scrambled to fix the “Year 2000 problem” (Y2K) — two-digit year fields that might make systems misread 2000 as 1900 and fail. Contemporary estimates put worldwide remediation spending in the hundreds of billions of dollars, and the issue was widely discussed as a potential threat to power grids, banking, telecoms, and other critical systems.
When the clocks rolled over to 1 January 2000, those fears did not show up as obvious, widespread collapse. There were documented glitches — misdated receipts, some ticketing and monitoring failures, issues in a few nuclear plant and satellite systems — but major infrastructure continued to operate, and retrospective evaluations describe “few major errors” and no systemic breakdown. From the outside, it looked to many people as if “nothing happened.”
Even before that, however, a noticeable minority of individuals had treated Y2K as a personal disaster signal and acted well ahead of any visible local failure. A national survey reported by Wired in early 1999 found that although nearly everyone had heard about Y2K, about one in five Americans (21%) said they had considered stockpiling food and water, and 16% planned to buy a generator or wood stove. Coverage at the time, as well as later summaries, notes that some people also bought backup generators, firearms, and extra cash in case of disruptions.
Long-form reporting makes the costs to early actors very concrete. One Wired feature follows Scott Olmsted, a software developer who established a desert retreat with a mobile home and freshwater well, and began building up long-life food stores. He planned to add solar panels and security measures. Taken together, this implied substantial out-of-pocket costs on top of his normal living expenses. Socially, he also paid a price: the reporter notes that “most of the non-geeks closest to Scott think he’s a little nuts,” while more hardcore survivalists criticised his setup as naïvely insufficient and too close to Los Angeles. He describes talking to friends and relatives and “getting nowhere” — too alarmed for his normal social circle, not alarmed enough for the even more extreme fringe.
Not all early actors moved to the desert. The same feature describes Paloma O’Riley, a Y2K project manager who turned down a contract extension in London, returned to the United States, and founded “The Cassandra Project,” a grassroots Y2K preparedness group. She spent much of her time organising local meetings, lobbying state officials, and building a network of community preparedness groups, while her family stockpiled roughly a six-month food supply. For her, in addition to food storage, the main costs were time, foregone income, and political capital invested in a catastrophe that, from the outside, never visibly arrived.
When Y2K finally passed with only minor disruptions, official narratives tended to emphasise successful institutional remediation, and in public memory, Y2K came to be seen as an overblown scare — a big build-up to ‘nothing.’5 For individuals like Olmsted, O’Riley, and the fraction of the public who had stocked supplies, bought generators, or shifted cash and investments, the visible outcome was simpler: they had paid real material and social costs in a world where, to everyone around them, “nothing serious” seemed to happen.
One complication is that Y2K may actually be a case where early action by companies fixing software glitches prevented a disaster. Technologists argue that the underlying software issue was real and was fixed by large-scale remediation, which is why the rollover was uneventful. From a system-level point of view, that looks like it could have been successful prevention. From the perspective I care about here, though, what tends to lodge in memory is simpler: people prepared, the clocks rolled over, and nothing obviously bad happened. That phenomenology—“someone acted early and it later looked unnecessary”—feeds into future intuitions regardless.
Takeaways from Y2K early individual action
Real costs: It seems a significant minority of the population diverted savings and time into food, fuel, generators, rural property, off-grid systems, and community organising — all on top of normal living expenses.
Social penalties: Early actors were widely seen as irrational or extreme; friends, family, and the broader public mostly viewed strong Y2K preparations as overreacting.
Two ways this biases people towards late action: First, because there was no visible crisis, early action simply seemed unnecessary. Second, one might believe that the mitigation efforts prevented the crisis — in which case one is also likely to update further in the direction of “I don’t need to act early; I can trust experts to prevent disaster”. Episodes like Y2K don’t just feel like one-off embarrassments; they become part of the background story that makes future early moves feel riskier and less justifiable.
No visible payoff: When the feared collapse didn’t materialise, any benefits of early action were invisible, and the social dismissal amplified the sense that preparation had been unnecessary.
This is the clearest modern example of the “act early + no apparent disaster” quadrant: real costs, reputational hit, and no visible crisis.
When desensitisation meets a real disaster (Joplin tornado – 2011, USA)
Key take-away: The biases described in the two sections above can have tragic consequences when they push people to act late in an actual disaster.
The sections above described how people become desensitized by the flood of false positives and by the updates from “failed preppers”. This section looks at how such desensitization can lead to death when, in a minority of cases, the warning signs turn into an actual disaster:
At 1:30pm on May 22nd, 2011, a tornado watch was issued for southwestern Missouri, including the city of Joplin. The watch was a routine, opt-in alert that many residents either didn’t receive or didn’t treat as significant. Tornado watches were common in the region, and most people continued their normal Sunday activities.
About four hours later, tornado sirens sounded across the city. Some residents moved to interior rooms, but many waited for clearer confirmation. Nationally, roughly three out of four tornado warnings don’t result in a tornado striking the warned area, and Joplin residents were used to frequent false alarms. Moreover, many people didn’t distinguish between a “watch” and a “warning,” and the most dangerous part of the storm was hidden behind a curtain of rain. From that vantage point the situation might not have felt obviously threatening, so many people hesitated.
Seventeen minutes after the sirens, the tornado touched down. It intensified rapidly, becoming one of the deadliest in U.S. history. By the time it dissipated, it had killed around 160 people and injured more than 1,000. For anyone who delayed even briefly, the window for safe action closed almost immediately.
Takeaways from Joplin tornado
Surveys in tornado-prone regions show many people don’t clearly distinguish between a ‘watch’ (higher false positive rate) and a ‘warning’ (significantly lower false positive rate), which can contribute to hesitation.
Frequent tornado alerts in the region may have contributed to habituation — a common pattern where repeated false alarms make hesitation feel natural.
The storm’s most dangerous features were hidden behind a curtain of rain, giving a misleading sense of safety.
Only 17 minutes separated the siren from the tornado’s touchdown, leaving very little time for those who waited.
This is a clear example of the “act late + actual disaster” quadrant: early signals existed, but waiting for certainty carried steep costs.
Acting early when a disaster unfolds can dramatically reduce harm (Gunnison influenza response – 1918, USA)
Key take-away: While the three sections above showed why people become desensitized, and how tragic that desensitization can be in an actual disaster, this section paints a picture of hope. It shows that acting early is possible, and that it can avert large costs when disaster actually unfolds.
A note on the role of authorities in this Gunnison example: I have tried to choose scenarios showing the dynamics for an individual. However, individual action is a fuzzy concept—a family is not an individual, nor is a group of friends. With Gunnison County having ~8,000 residents, we might assume the town itself had ~2,000 inhabitants. Compared to the United States as a whole, this is perhaps more akin to a neighborhood taking action than to a government. As such, and because the main point is the structural features rather than the number of people, I believe this example is relevant.
By early October 1918, major U.S. cities were being overwhelmed by the influenza pandemic. In Philadelphia, hospitals ran out of beds, emergency facilities filled within a day, and the city recorded 759 influenza deaths in a single day — more than its average weekly death toll from all causes. Reports from Philadelphia and other cities illustrated how quickly local healthcare systems could be overwhelmed once the virus gained a foothold, especially in places with far fewer resources than large coastal cities.
While influenza was already spreading rapidly across Colorado, Gunnison itself still had almost no influenza cases. Local newspapers ran headlines like “Spanish Flu Close By” and “Flu Epidemic Rages Everywhere But Here,” noting thousands of cases and hundreds of deaths elsewhere in the state while Gunnison remained mostly untouched.
Gunnison was a small, relatively isolated mountain town, plausibly similar to many of the other Colorado communities with very limited medical resources and few doctors. Contemporary overviews note that the 1918 flu “hit small towns hard, many with few doctors and medical resources,” and that Gunnison was unusual in avoiding this fate by imposing an extended quarantine. Under the direction of the county physician and local officials, the town took advantage of its small population, low density, and limited transport links (source, p.72) — and, despite some tension among city, county, and state officials, it seems to have had enough cooperation among local public agencies to implement and maintain the measures.
Historical reconstructions of so-called “escape communities” (including Gunnison) describe them as monitoring the spread of influenza elsewhere and implementing “protective sequestration” while they still had little or no local transmission. Several measures were implemented: schools and churches were closed, parties and public gatherings were banned, and barricades were erected on the main highways. Train passengers who stepped off in Gunnison were quarantined for several days, and violators were fined or jailed.
Takeaways from Gunnison’s early response
Authorities in Gunnison acted on non-local information: leaders responded to reports of severe outbreaks elsewhere rather than waiting for local cases.
It is plausible that officials expected that once the first cases appeared in Gunnison, it would likely be too late — the virus spread faster than local observation could detect.
Early action was possible because the town was structured to make protective measures cheap: a small population, limited transportation links, and cohesive local leadership made isolation feasible.
Gunnison avoided the first and most lethal wave almost entirely, demonstrating how early action can dramatically change outcomes even in a severe, unfolding disaster.
This is a clear example of the “act early + actual disaster” quadrant: the pandemic did unfold, but because Gunnison acted before local danger appeared, it avoided the worst consequences.
Putting the 4 quadrants together
Taken together, these four cases show why our intuitions about acting early might not be neutral. Most of what we personally live through, and most of what we hear about, looks like “I waited and it was fine,” occasionally punctuated by stories of people who acted early and later looked foolish. Direct, vivid experiences of “I waited and deeply regretted it” or “I acted early and was glad I did” are much rarer. Over time, that asymmetry quietly trains us to treat “wait and see” as the safe, reasonable default. My aim in this note is only to make that skew visible. The more speculative follow-on question — how to design preparedness setups and early-action thresholds that counteract this skew — is work for later pieces.
Endnotes
1. Very rough baseline mortality anchor (not Ukraine-specific): To give a concrete scale for “ordinary” mortality, suppose we have a stylised population where about 30% of men die between ages 15 and 60, and the rest survive to at least 60. That corresponds to a survival probability over 45 years of 0.70. If we (unrealistically) assume a constant annual mortality rate r over that period, we have (1 - r)^45 = 0.70, so r = 1 - 0.70^(1/45) ≈ 0.008, i.e. roughly 0.8% per year as a baseline annual chance of death before any crisis.
2. For illustration, take mid-range public estimates of Ukrainian military casualties, e.g. on the order of 60,000–100,000 killed and perhaps a similar magnitude of permanently disabling injuries as of late 2024. If we (very crudely) divide ~150,000–200,000 “death or life-altering injury” outcomes by a denominator of a few million draft-eligible men (say 4–8 million, depending on where you draw age and fitness boundaries), we get something like a 2–5% risk for a randomly selected draft-eligible man over the relevant period. This ignores civilian casualties, regional variation, selective mobilisation practices, and many other complications; it’s meant only as an order-of-magnitude illustration that the personal risk conditional on staying was not tiny. A more careful analysis could easily move this number around by a factor of ~2× in either direction.
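For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python that reproduces the figures in endnotes 1 and 2. All inputs are the illustrative numbers quoted above, not independent estimates.

    # Endnote 1: constant annual mortality rate implied by 30% mortality between ages 15 and 60.
    survival_45y = 0.70                                    # probability of surviving the 45-year span
    annual_rate = 1 - survival_45y ** (1 / 45)
    print(f"Implied annual mortality rate: {annual_rate:.2%}")        # ~0.79% per year

    # Endnote 2: crude wartime risk for a draft-eligible man who stayed.
    bad_outcomes_low, bad_outcomes_high = 150_000, 200_000            # deaths + life-altering injuries
    pool_low, pool_high = 4_000_000, 8_000_000                        # draft-eligible men
    print(f"Crude per-person risk over the period: {bad_outcomes_low / pool_high:.1%} "
          f"to {bad_outcomes_high / pool_low:.1%}")                   # ~1.9% to 5.0%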
3. Each of the topics I am not covering here is an area I have worked on and already explored to some extent, and I hope several of them will become their own follow-on pieces. So despite having gathered evidence and performed analysis, I’m deliberately not covering them here, because this first text is narrowly focused on making the timing tradeoff intuitive before adding more complexity and exploring solutions in later pieces.
4. It might be worth pointing out that a high false positive rate is likely reasonable. One main point of this text is that in the lead-up to a disaster, the signals are weak. This means that in order to act early, one has to make decisions under uncertainty. If one pushes the threshold for action until one is certain, as the Joplin example illustrates, it is often too late. The tradeoff between desensitization and sufficiently early action is extensively discussed in academic and government circles. It is an unfortunate fact of the world and of human psychology. Governments even set alarm thresholds high enough that they expect some deaths from warnings arriving too late — from a utilitarian view, they are minimizing total deaths across two failure modes: people desensitized by frequent alarms who then act too late, and people who do not get information early enough because the threshold was set high. These are dark calculations with real lives on the line.
5. Some technologists argue that Y2K was a genuine near-miss, prevented by large-scale remediation. The cultural memory, however, tends to frame it as an overreaction rather than a narrowly avoided catastrophe.
Your analysis of early action + no disaster overlooks the fact that early action can prevent the disaster. But you never see the things that were prevented… because they were prevented. Early action only seems useful when it merely mitigates a disaster — that is, when it is not all that successful. Valiant failure is valorised over competent success.
Y2K is a case in point. There actually was a problem, and it was fixed.
Thanks — I agree that early action can genuinely prevent disasters, and Y2K may well be a case where large-scale remediation averted serious failures. That’s an important distinction, and I’m not trying to deny it.
Instead, I am deliberately setting prevention mostly aside (though I can make that clearer), because the level I’m focusing on in this note is one step down from that system view: what things look like to a reasonably informed non-expert in advance, under uncertainty, before the outcome is known. The reason I can set prevention aside is that, for the purposes of this text, it would not affect my conclusion. In 1998–1999, it wasn’t obvious to most people outside the remediation teams whether Y2K fixes were sufficient or even well coordinated. Expert assessments diverged, public information was mixed, and there was no way for a layperson to “test” the fix ahead of time. Some people responded to that murky situation by preparing early.
Afterwards, when the rollover produced no visible breakdowns, it became easy to reframe Y2K as a non-event or a clean mitigation success. But foresight and hindsight operate on different information. From the point of view of a typical person in 1999, you couldn’t know whether early preparation would turn out to be prudent or would later look unnecessary — that only becomes clear after the fact. A similar pattern shows up in nuclear brinkmanship: diplomats may succeed in preventing escalation, but families deciding whether to leave Washington or New York during a crisis have to act under incomplete information. They cannot rely on knowing in advance that prevention efforts will succeed.
In that sense, I actually think your point strengthens the mechanism I’m interested in. If someone now looks back at Y2K and sees it as a mitigation success — “the system handled it” — then their lived lesson is still “I waited and it was fine; professionals took care of it.” For many others who barely tracked the details and just remember that nothing bad seemed to happen where they lived, the felt lesson is similar: “I waited and it was fine.” In both cases, doing nothing personally seemed to have worked, regardless of what beliefs, if any, they had about why there was no disaster. Either way, that is exactly the kind of training signal I’m worried about for future timing decisions.
So I fully agree there can be real, competent prevention at the system level. My claim is about what these episodes teach individuals making timing choices under uncertainty. I’ll make that foresight–hindsight and system–individual distinction clearer in the Y2K section so readers don’t bounce off in the way you describe. Thanks for flagging it — this comment helps me see where the draft was under-explained. And none of my examples are completely clear-cut: the Gunnison example is actual system-level prevention, though at a near-individual level. I think this is generally the case when trying to split the actual, messy, and complex world into cleanly delineated classes.
Side note: As I discuss in the note, one complication for future decisions is that institutional early-warning capacity may be weakening in some areas, while emerging technologies (especially in bio and AI) could create faster, harder-to-mitigate risks. So even if Y2K was ultimately a case where system-level remediation succeeded, that doesn’t guarantee the same dynamic will hold for future threats. But that’s a separate point from the hindsight/foresight issue you raised here.
Only skimmed, but I think you need to include COST of early action times the probability of false-alarm in the calculation.
For me, the high number of false positives loudly and correctly trains me to wait. Bayes for the win—every false alarm is evidence that my signal is noisy. As a lot of economists say, “the optimal error rate is not 0”.
You’re absolutely right that, in principle, you want to think about both: how costly early action is and how often it turns out to be a false alarm. In a fully explicit model, you’d compare “how much harm do I avert if this really is bad news?” to “how often am I going to spend those costs for nothing?”
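To make that comparison concrete, here is a minimal toy sketch with made-up placeholder numbers (it also assumes early action fully averts the loss, which real cases obviously don’t guarantee):

    # Toy expected-cost comparison of "act early" vs "wait", with placeholder numbers only.
    p_threat = 0.02           # probability the warning reflects a real, severe threat (98% false alarms)
    cost_early = 5_000        # cost of acting early (money, time, social friction), paid either way
    loss_if_late = 1_000_000  # extra loss suffered if the threat is real and you waited

    expected_cost_early = cost_early                 # assumes acting early fully averts the loss
    expected_cost_wait = p_threat * loss_if_late     # you only pay the big loss if the threat is real

    print(expected_cost_early, expected_cost_wait)   # 5000 vs 20000.0: early action is cheaper here
    # Even with a 98% false-alarm rate, early action can be the cheaper policy when the downside
    # of waiting is severe enough. That is the explicit version of the tradeoff described above.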
This note is deliberately staying one level up from that, and just looking at the training data people’s guts get. In everyday life, most of us accumulate a lot of “big scary thing that turned out fine” and “I waited and it was fine” stories, and very few vivid “I waited and that was obviously a huge mistake” stories.
In a world where some rare events can permanently uproot you or kill you, it can actually be fine – even optimal – to tolerate a lot of false alarms. My worry is that our intuitions don’t just learn “signals are noisy”; they slide into “waiting is usually safe”, which can push people’s personal thresholds higher than they’d endorse if they were doing the full cost–benefit tradeoff explicitly.