I like the general thought here: some of the decision tools that are supposed to help us make better decisions are themselves potentially subject to the very problems we’re trying to avoid.
I wonder if the pathway might not be about the priors on the event itself (polar bears in Berkeley) but about an update on the reliability of the evidence presented: your friend reporting the polar bear downgrades your prior on her reliability/honesty, which in turn helps confirm your original prior against polar bears in Berkeley.
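That pathway can be made concrete with a joint Bayesian update over two hypotheses: the event ("a polar bear is loose in Berkeley") and the reporter's reliability. A minimal sketch, with all priors and likelihoods being illustrative numbers I chose for the example rather than anything from the discussion:

```python
# Hypothetical numbers for illustration: a joint Bayesian update over
# E = "a polar bear is loose in Berkeley" and H = "my friend is reliable",
# after she reports seeing a polar bear.

p_event = 1e-6      # prior: a loose polar bear in Berkeley is wildly unlikely
p_reliable = 0.95   # prior: my friend is usually reliable

# Likelihood of her making this report under each hypothesis pair
# (all values are assumptions chosen for the sketch):
like = {
    (True,  True):  0.9,    # bear present, reliable friend  -> reports it
    (True,  False): 0.9,    # bear present, unreliable friend -> still reports it
    (False, True):  0.001,  # no bear, reliable friend  -> rare honest error
    (False, False): 0.2,    # no bear, unreliable friend -> jokes / exaggerates
}

# Independent priors over the four (event, reliable) combinations.
prior = {
    (e, h): (p_event if e else 1 - p_event)
          * (p_reliable if h else 1 - p_reliable)
    for e in (True, False) for h in (True, False)
}

joint = {k: prior[k] * like[k] for k in prior}
total = sum(joint.values())

post_event    = (joint[(True, True)] + joint[(True, False)]) / total
post_reliable = (joint[(True, True)] + joint[(False, True)]) / total

print(f"P(bear | report)     = {post_event:.2e}")    # still negligible
print(f"P(reliable | report) = {post_reliable:.3f}") # drops sharply from 0.95
```

With these numbers the posterior on the bear rises somewhat but stays negligible, while the posterior on the friend's reliability collapses from 0.95 to under 0.1; most of the evidential weight of the report lands on her, not on the bear.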
While “isolated demands for rigor” may be suspect, an outlier could be the result of high measurement error* or model failure. (Though people may be systematically overconfident in their models.)
*Which itself has implications for the model: the data previously thought correct may also contain smaller amounts of error.