I’ve occasionally been guilty of that, but I see it less as a magical talisman of +1 to prediction skills and more as a means of improving my calibration after the fact: if I discover I’ve been systematically overweighting (or, much less likely, underweighting) the probability of some category of events, that’s a good clue that my thinking about them is flawed in some way. Can’t do that if I don’t quantify my uncertainty in the first place, although of course the flaw might come in at the quantification step.
This only works if you actually go back and collect that data, though, and I haven’t generally been very good about using PredictionBook or any similar tools.
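The going-back-and-checking step can be sketched in a few lines. This is just an illustrative example, not any tool's actual API: it assumes you've exported your predictions (from PredictionBook or even a spreadsheet) as (stated probability, did-it-happen) pairs, then bins them by stated confidence and compares each bin's claimed probability with the observed frequency.

```python
# Minimal calibration check over recorded predictions.
# Assumes data is a list of (stated_probability, outcome) pairs,
# e.g. exported from PredictionBook or a spreadsheet (hypothetical format).
from collections import defaultdict

def calibration_table(predictions):
    """Bin predictions by stated probability (nearest 0.1) and report,
    per bin, how many predictions fell there and what fraction came true.
    A bin where the observed frequency is well below the stated
    probability is evidence of systematic overweighting."""
    bins = defaultdict(list)
    for prob, came_true in predictions:
        bins[round(prob, 1)].append(came_true)  # 0.73 and 0.81 -> 0.7, 0.8
    return {
        stated: (len(outcomes), sum(outcomes) / len(outcomes))
        for stated, outcomes in sorted(bins.items())
    }

# Hypothetical data: four events predicted at 80%, of which only two
# happened -- the 0.8 bin's observed frequency of 0.5 flags overconfidence.
sample = [(0.8, True), (0.8, False), (0.8, False), (0.8, True),
          (0.3, False), (0.3, False), (0.3, True)]
print(calibration_table(sample))
```

The round-to-nearest-0.1 binning is arbitrary; with only a handful of predictions per bin the observed frequencies are noisy, which is exactly why the data collection has to be sustained for this to tell you anything.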