When forecasting, you can be well-calibrated or badly calibrated (well-calibrated if, e.g., 90% of your 90% forecasts come true). This can also hold on smaller ranges: you are well-calibrated from 50% to 60% if your 50%/51%/52%/…/60% forecasts are each well-calibrated.
But for most forecasters, there must be some resolution at which their forecasts are essentially randomly calibrated. If this is, e.g., at the 10% level, then they are in effect taking random guesses from the 10% interval around their stated probability: they forecast 20%, but they could just as well have forecast 25% or 15%, because they are simply not calibrated any more finely.
I assume there is a name for this concept, and that there’s a way to compute it from a set of forecasts and resolutions, but I haven’t stumbled on it yet. So, what is it?
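To make the setup concrete, here is a minimal sketch of the binned calibration check I have in mind (the `forecasts` and `outcomes` arrays are made-up illustration data, and the bin width is an arbitrary choice):

```python
import numpy as np

# Made-up example data: predicted probabilities and binary outcomes.
forecasts = np.array([0.15, 0.22, 0.25, 0.48, 0.52, 0.55, 0.88, 0.91])
outcomes  = np.array([0,    0,    1,    0,    1,    1,    1,    1])

def calibration_by_bin(forecasts, outcomes, bin_width=0.1):
    """Compare the mean forecast with the observed frequency in each bin."""
    edges = np.linspace(0.0, 1.0, int(round(1 / bin_width)) + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (forecasts >= lo) & (forecasts < hi)
        if in_bin.any():
            print(f"[{lo:.1f}, {hi:.1f}): mean forecast = "
                  f"{forecasts[in_bin].mean():.2f}, "
                  f"observed frequency = {outcomes[in_bin].mean():.2f}, "
                  f"n = {in_bin.sum()}")

calibration_by_bin(forecasts, outcomes)
```

With enough data, the bin where the mean forecast and the observed frequency stop tracking each other would indicate the resolution I'm asking about.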
You could say that the Brier score of a particular forecaster is bad for those cases when they forecast 20%, or is bad over a particular interval. It's more than a single phrase, but it covers the meaning.
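As a hedged sketch of that idea (using the same made-up data as above; the interval bounds are arbitrary), one could compute the Brier score restricted to forecasts that fall inside a given interval:

```python
import numpy as np

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probability and 0/1 outcome."""
    return np.mean((forecasts - outcomes) ** 2)

def brier_score_in_interval(forecasts, outcomes, lo, hi):
    """Brier score over only those forecasts falling in [lo, hi)."""
    mask = (forecasts >= lo) & (forecasts < hi)
    return brier_score(forecasts[mask], outcomes[mask]) if mask.any() else None

# Same made-up data as in the question's sketch.
forecasts = np.array([0.15, 0.22, 0.25, 0.48, 0.52, 0.55, 0.88, 0.91])
outcomes  = np.array([0,    0,    1,    0,    1,    1,    1,    1])

# E.g., the Brier score for forecasts near 20%.
print(brier_score_in_interval(forecasts, outcomes, 0.15, 0.25))
```

Comparing this per-interval score against the best achievable score for that interval would show where the forecaster's calibration breaks down.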