I suspect I should also be writing down calibrated probability estimates for my project completion dates. This calibration test is easy to do oneself, without infrastructure, but I'd still be interested in a website tabulating my and others' early predictions against our actual performance (perhaps a page within LW?). It might be especially useful within a group of coworkers, who could then know how much to adjust each other's timeline estimates when planning or dividing complex projects.
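The no-infrastructure version of this test is simple enough to sketch: log a probability with each completion-date prediction, then bucket the predictions by stated confidence and compare against the observed hit rate. A minimal sketch (the function name and the sample predictions are made up for illustration):

```python
# Self-scored calibration check: record a probability with each prediction,
# then compare stated confidence to observed frequency in each bucket.

def calibration_table(predictions, n_bins=5):
    """predictions: list of (stated_probability, came_true) pairs."""
    bins = [[] for _ in range(n_bins)]
    for p, hit in predictions:
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the last bin
        bins[idx].append(hit)
    table = []
    for i, outcomes in enumerate(bins):
        if outcomes:  # skip empty buckets
            table.append((i / n_bins, (i + 1) / n_bins,
                          sum(outcomes) / len(outcomes), len(outcomes)))
    return table

# Hypothetical log: "90% sure I'll finish by Friday" -> (0.9, finished_by_friday)
preds = [(0.9, True), (0.9, True), (0.9, False), (0.6, True), (0.6, False)]
for lo, hi, freq, n in calibration_table(preds):
    print(f"{lo:.1f}-{hi:.1f}: {freq:.2f} observed over {n} predictions")
```

Well-calibrated predictions land near the diagonal: things you call 90% likely should come true about 90% of the time. A shared page would just aggregate these tables across people.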
Wouldn't writing down a probability estimate for your project completion date itself influence when you complete it? So successfully predicting your completion times won't prove your rationality.
This is a good point. Still, it would provide evidence of rationality, especially in the likely majority of cases where people didn't game the system by, e.g., deliberately picking dates far later than their actual completions and then performing the last steps right at that date. My calibration scores on trivia have been fine for a while now, but my calibration at predicting my own project completions is terrible.
I wonder to what degree this is a problem of poor calibration versus poor motivation. Maybe commitment mechanisms like stickK (stickk.com) would have a greater marginal benefit than better calibration. I don't know about you, but that seems to be the case with similar issues on my end.