Some of this might be conflation between within-model predictions and overall predictions that account for model uncertainty and unknown unknowns. Within-model predictions are in any case very useful as exercises for developing/understanding models, and as anchors for overall predictions. So it’s good actually (rather than a problem) when within-model predictions are being made (based on whatever legible considerations come to mind), including when they are prepared as part of the context before making an overall prediction, even for claims/predictions that are poorly understood and not properly captured by such models.
The issue is that when you run out of models and need to incorporate unknown unknowns, the last step that transitions from your collection of within-model prediction anchors to an overall prediction isn’t going to be legible (otherwise it would just be following another model, and you’d still need to take that last step eventually). It’s an error to give too much weight to within-model anchors (rather than some illegible prior) when the claim/prediction is overall poorly understood, but sometimes the illegible overall assessment just happens to remain close to those anchors anyway. And even base rates (reference classes) are just another model; they shouldn’t claim to be the illegible prior at the end of this process, not when the claim/prediction remains poorly understood (and especially not when the understanding you do have explicitly disagrees with the assumptions behind the base rate models).
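(As a rough illustration of the bookkeeping around that last step, and not of the step itself, here’s a minimal sketch in Python. Everything in it is hypothetical: the model names, the numbers, and especially the idea that “understanding” is a single number you could dial. The illegible part is the number fed in as the prior, not something the code computes.)

```python
# Minimal sketch of the weighting idea, not a real procedure. All names and
# numbers are made up for the example.

# Within-model anchors: each legible model's prediction for the claim,
# including base rates / reference classes, which are just another model here.
anchors = {
    "mechanistic_model": 0.30,
    "trend_extrapolation": 0.45,
    "base_rate": 0.10,
}

# How well-understood the claim is overall, in [0, 1]. For a poorly understood
# claim this is low, so the anchors should get little of the total weight.
understanding = 0.25

# The illegible prior: whatever overall assessment remains after the models
# run out. It can't be computed from the anchors (that would just be another
# model), so here it is simply a given number.
illegible_prior = 0.6

# Combine: the anchors share the "understood" portion of the weight equally,
# and the illegible prior gets the rest.
anchor_mean = sum(anchors.values()) / len(anchors)
overall = understanding * anchor_mean + (1 - understanding) * illegible_prior

print(f"anchor mean: {anchor_mean:.3f}, overall: {overall:.3f}")
# The error in question corresponds to setting `understanding` near 1 for a
# claim that doesn't warrant it; note also that `overall` can still land near
# the anchors even when most of the weight sits on the prior.
```

If this sketch were taken as a real procedure, it would itself be just another model, which is exactly the regress the paragraph above describes.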
So when you happen to disagree about the overall prediction, or about the extent to which the claim/prediction is well-understood, a prediction that happens to remain close to the legible anchors would look like it’s committing the error described in the post, but that’s not necessarily (or even often) the case. The only way to resolve such disagreements would be to figure out how the last step was taken, but anything illegible takes a book to properly communicate. There’s not going to be a good argument for any issue that’s genuinely poorly understood. The trick is usually to find related but different claims that can be understood better.