I’m not sure how to think about the proposed calibration rule—it’s a heuristic to approximate something, but I’m not sure what. Probably the right thing to approximate is “how much money could someone make by betting against me,” which is proportional to something like the KL divergence or the earth mover’s distance, depending on your model of what bets are available.
Anyway, if you’re quite confident about the wrong value, somebody can take a bunch of money from you. You can think of yourself as having overestimated the amount of knowledge you had, and as you suggested, you can think of the correction as uniformly decreasing your knowledge estimate going forward.
Talking about the stdev being “correct” is perfectly sensible if the ground truth is actually normally distributed, but makes less sense as the distribution becomes less normal.
I agree, most things are not normally distributed, and my calibration rule answers how to rescale to a normal. Metaculus uses the CDF of the predicted distribution, which is better if you have lots of predictions; my scheme gives an actionable number faster by making assumptions that are wrong. But if, like me, you have intervals that seem off by almost a factor of 2, then your problem is not the tails but the entire region :), so the trade-off seems worth it.
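To make the rescaling idea concrete, here is a minimal sketch of one way such a rule could work. The exact rule isn't spelled out above, so the RMS-of-z-scores estimator and the `calibration_factor` helper below are my assumptions, not a definitive statement of the scheme:

```python
import math

def calibration_factor(predictions):
    """Estimate one rescaling factor for stated standard deviations.

    predictions: list of (mu, sigma, outcome) triples, each treating the
    forecast as a normal N(mu, sigma) even when the true distribution isn't.
    Returns the factor by which future sigmas should be multiplied so that
    standardized errors have unit variance.
    """
    zs = [(x - mu) / sigma for mu, sigma, x in predictions]
    # RMS of the z-scores: 1.0 means the interval widths were calibrated;
    # 2.0 means the intervals were about half as wide as they should be.
    return math.sqrt(sum(z * z for z in zs) / len(zs))

# Toy example: predicted sigma = 1, but outcomes scatter with spread ~2,
# so the estimated factor comes out to 2 (intervals were 2x too narrow).
data = [(0.0, 1.0, x) for x in (-2.0, 2.0, -2.0, 2.0)]
factor = calibration_factor(data)
```

This is the "assumptions that are wrong" trade-off in miniature: a single multiplicative correction ignores tail shape entirely, but a handful of resolved predictions already gives you a usable number.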
You keep claiming this, but I don’t understand why you think it’s true.