I think an important gear here is that things can be obvious-in-hindsight, but not in advance, in a way which isn’t really a Bayesian update on new evidence and therefore doesn’t strictly follow prediction rules.
That’s my model here as well. Pseudo-formalizing it: We’re not idealized agents, we’re bounded agents, which means we can’t actually do full Bayesian updates. We have to pick and choose which computations we run and which classes of evidence we look for and update on. In hindsight, we may discover that an incorrect prediction was caused by our choosing not to spend the resources to update on some specific piece of information, such that, had we known to do so, we would have reliably avoided the error even with all the same object-level information.
In other words, it’s a Bayesian update to the distribution over Bayesian updates we should run. We discover a thing about (human) reasoning: that there’s a specific reasoning error/oversight we’re prone to, and that we have to run an update on the output of “am I making this reasoning error?” in specific situations.
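A toy sketch of this picture (nothing here is from the discussion itself; the two coins, the attention weights, and the `meta_update` step are all invented for illustration, using only the Python standard library): a bounded agent can only afford to run one belief-update per observation, an attention policy decides which one, and the "hindsight" move is a meta-level update to that policy rather than to any object-level belief.

```python
import random

random.seed(0)

# Two "evidence channels": coin A is fair, coin B is heavily biased toward heads.
# The object-level question the agent keeps getting wrong: is coin B biased?

TRUE_BIAS_B = 0.9  # hypothetical ground truth the agent *could* have learned


def flip(p_heads):
    return random.random() < p_heads


class BoundedAgent:
    def __init__(self):
        # Beta(1, 1) prior over each coin's heads-probability (stored as pseudo-counts).
        self.beliefs = {"A": [1, 1], "B": [1, 1]}
        # Meta-level policy: which update do we actually spend compute on?
        # Initially the agent never bothers updating on channel B.
        self.attention = {"A": 1.0, "B": 0.0}

    def observe(self, outcomes):
        # Bounded: run exactly ONE update per step, chosen by the attention policy.
        channel = random.choices(list(self.attention),
                                 weights=list(self.attention.values()))[0]
        if outcomes[channel]:
            self.beliefs[channel][0] += 1
        else:
            self.beliefs[channel][1] += 1

    def estimate_b(self):
        heads, tails = self.beliefs["B"]
        return heads / (heads + tails)

    def meta_update(self, neglected_channel):
        # The "update to the distribution over updates": having traced an error
        # to a neglected computation, shift attention onto it.
        self.attention = {c: float(c == neglected_channel) for c in self.attention}


agent = BoundedAgent()

# Phase 1: the B-update never gets run, so the agent's estimate of coin B
# stays stuck at the prior (0.5) no matter how much data streams past.
for _ in range(200):
    agent.observe({"A": flip(0.5), "B": flip(TRUE_BIAS_B)})
print("P(B = heads) before meta-update:", round(agent.estimate_b(), 2))  # ~0.5

# Hindsight: the mispredictions about B weren't for lack of evidence; the agent
# simply never allocated compute to that update.
agent.meta_update("B")

# Phase 2: identical object-level information, but now the right update runs.
for _ in range(200):
    agent.observe({"A": flip(0.5), "B": flip(TRUE_BIAS_B)})
print("P(B = heads) after meta-update:", round(agent.estimate_b(), 2))  # ~0.9
```

In both phases the agent sees exactly the same object-level information; the only thing that changes is which update it chooses to run, which is the sense in which the error was avoidable-in-hindsight without any new evidence.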
This doesn’t necessarily mean that the meta-level error would have been obvious to anyone at all at the time it was made. Right now, we may all be committing fallacies whose very definitions require agent-foundations theory decades ahead of ours; fallacies we wouldn’t even be able to understand without reading a future textbook. But it does mean that specific object-level conclusions we’re reaching today would be obviously incorrect to someone reasoning in a more correct way.