Biases seem like they could be understood in terms of logical validity. Even if you reason solely from true premises, you can still adopt an invalid argument (i.e. a fallacy: a conclusion that does not actually follow from the premises, however true they are). I suggest the definition that biases are whatever causes people to adopt invalid arguments.
False or incomplete data can likewise cause the adoption of invalid arguments.
Contrast this with:
The control group was told only the background information known to the city when it decided not to hire a bridge watcher. The experimental group was given this information, plus the fact that a flood had actually occurred. Instructions stated the city was negligent if the foreseeable probability of flooding was greater than 10%. 76% of the control group concluded the flood was so unlikely that no precautions were necessary; 57% of the experimental group concluded the flood was so likely that failure to take precautions was legally negligent. A third experimental group was told the outcome and also explicitly instructed to avoid hindsight bias, which made no difference: 56% concluded the city was legally negligent.
In other words, on average it does not matter whether people try to avoid hindsight bias: prior knowledge of the outcome effectively forces the conclusion that the outcome should have been deemed very likely.
To avoid it, you have to insist on not knowing what actually happened, if you aim to accurately represent the decision-making process as it actually occurred.
Or, if you do have the knowledge, you may have to force yourself to assign an extra 1:10 odds factor (or worse) against the actual outcome in order to compensate.
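As a minimal sketch of what that compensation looks like arithmetically (the function names and the example probability are illustrative assumptions, not prescriptions from the experiment):

```python
def to_odds(p):
    """Convert a probability to odds in favor of the event."""
    return p / (1 - p)

def to_prob(odds):
    """Convert odds in favor back to a probability."""
    return odds / (1 + odds)

def debias(p_judged, penalty=10):
    """Compensate a hindsight-inflated judgment by dividing the odds
    by `penalty`, i.e. applying an extra 1:penalty odds factor against
    the outcome you already know occurred."""
    return to_prob(to_odds(p_judged) / penalty)

# If knowing the flood happened leads you to judge it was 50% foreseeable,
# the compensated estimate drops sharply:
print(round(debias(0.50), 3))  # odds go 1:1 -> 1:10, p ≈ 0.091
```

Note that the adjustment is multiplicative in odds, not subtractive in probability, so it compresses confident hindsight judgments much more than tentative ones.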