That doesn’t mean you were underconfident; if you’re well calibrated, estimates made at 65% confidence come out correct about 65% of the time.
Yeah, but the fact that my estimate was pretty close to the correct date suggests that some underconfidence may have been at work. If someone had stated the exactly correct year, and had estimated only a 51% chance that they were in the correct zone, we’d probably look at them funny.
Maybe, but getting very close with low confidence is entirely possible in these estimation-calibration tasks: a uniformly chosen year between 1600 and 1800 could happen to be the exact year, yet the rational confidence in such a guess would still be only 15%.
That’s a good point. So a single data point like this doesn’t really say much useful for my own calibration.
Yup. You might already know about it, but PredictionBook seems to get touted around here as a good method to calibrate oneself (although I haven’t used it myself).
Yes, I’ve used it quite a bit. So far the main thing I’ve been convinced of from it is that my calibration is all over the place.