Why did it take so long to do the Fermi calculation right?

This is a meta-level follow-up to an object-level post about Dissolving the Fermi Paradox.

The basic observation of the paper is that when the statistics are done correctly, representing realistic distributions of uncertainty, the paradox largely dissolves.

The correct statistics are not that technically difficult: instead of point estimates, just use distributions reflecting the uncertainty (which is already implied in the literature!).
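To make this concrete, here is a minimal Monte Carlo sketch of the difference between the two approaches. The log-uniform ranges below are illustrative stand-ins for the spread in the literature, not the paper's actual parameter choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_uniform(low, high, size):
    """Sample uniformly on a log scale: equal weight per order of magnitude."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size)

n = 1_000_000

# Illustrative uncertainty ranges for the Drake equation factors --
# stand-ins for the literature's spread, not the paper's exact choices.
R_star = log_uniform(1, 100, n)     # star formation rate (stars/year)
f_p    = log_uniform(0.1, 1, n)     # fraction of stars with planets
n_e    = log_uniform(0.1, 10, n)    # habitable planets per system with planets
f_l    = log_uniform(1e-30, 1, n)   # fraction where life arises (hugely uncertain)
f_i    = log_uniform(1e-3, 1, n)    # fraction developing intelligence
f_c    = log_uniform(1e-2, 1, n)    # fraction becoming detectable
L      = log_uniform(1e2, 1e8, n)   # detectable lifetime (years)

N = R_star * f_p * n_e * f_l * f_i * f_c * L  # detectable civilizations in the galaxy

# A point estimate plugs one "reasonable" value into each factor and
# concludes N >> 1 -- hence the apparent paradox ("where is everybody?").
point_N = 10 * 0.5 * 2 * 0.1 * 0.01 * 0.1 * 1e6

print(f"point-estimate N: {point_N:.3g}")
print(f"P(N < 1) under the distributions: {np.mean(N < 1):.0%}")
```

With uncertainty ranges this wide, a point estimate of N ≫ 1 coexists with a substantial probability that the galaxy is empty, and that is the dissolution.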

There is a sizeable literature on the paradox, stretching back several decades. Wikipedia alone lists 22 hypothetical explanations, and it seems realistic that at least several hundred researchers have spent serious effort thinking about the problem.

It seems really important to me to reflect on this.

What’s going on? Why this inadequacy (in research in general)?

And more locally: why didn’t this particular subset of the broader community, which prides itself on its use of Bayesian statistics, notice earlier?

(I have some hypotheses, but it seems better to just post this as an open-ended question.)