This is an excellent post, thank you.
An earlier comment of yours pointed out that one compensates for overconfidence not by adjusting one's probability towards 50%, but by adjusting it towards the probability that a broader reference class would give. In this instance, the game of reference class tennis seems harder to avoid.
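The adjustment rule described above can be sketched numerically. This is only an illustration under my own assumptions: the function name, the linear-pooling form, and the choice of weight are all hypothetical, not anything specified in the thread.

```python
def adjust_for_overconfidence(inside_view_p, reference_class_p, weight=0.5):
    """Shrink an inside-view probability toward a reference-class base rate.

    A simple linear pool (weighted average); the weight reflects how much
    credence you give the outside view. Both the pooling rule and the
    0.5 default are illustrative assumptions, not a prescribed method.
    """
    return (1 - weight) * inside_view_p + weight * reference_class_p

# Inside view says 0.9; the broader reference class suggests 0.3.
corrected = adjust_for_overconfidence(0.9, 0.3)          # lands at 0.6
naive = adjust_for_overconfidence(0.9, 0.5)              # shrinking toward 50% gives 0.7
```

Note the contrast the comment draws: shrinking toward the reference class (0.6 here) and shrinking toward 50% (0.7 here) give different answers, and the disagreement over *which* reference class to use is exactly what makes the correction contentious.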
Speaking of that post: It didn’t occur to me when I was replying to your comment there, but if you’re arguing about reference classes, you’re arguing about the term in your equation representing ignorance.
I think that is very nearly the canonical case for dropping the argument until better data comes in.
My introduction to that idea was RobinZ’s “The Prediction Hierarchy”.
Agreed.
So what do we do about it?