Is there any information on how well-calibrated the community predictions are on Metaculus?
Great question! Yes. There was a post on the official Metaculus blog that addressed this, though this was back in Oct 2016. In the past, they’ve also sent to subscribed users a few emails that looked at community calibration.
I actually did my own analysis on this around two months ago, in private communication. Let me just copy two of the plots I created and what I said there. You might want to ignore the plots and details, and just skip to the “brief summary” at the end.
(Questions on Metaculus go through an ‘open’ phase then a ‘closed’ phase; predictions can only be made and updated while the question is open. After a question closes, it gets resolved either positive or negative once the outcome is known. I based my analysis on the 71 questions that had been resolved as of two months ago; there are around 100 resolved questions now.)
First, here’s a plot for the 71 final median predictions. The elements of this plot:
Of all monotonic functions, the black line is the one that, when applied to this set of median predictions, performs the best (in mean score) under every proper scoring rule given the realized outcomes. This can be interpreted as a histogram with adaptive bin widths. So for instance, the figure shows that, binned together, predictions from 14% to 45% resolved positive around 0.11 of the time. This is also the maximum-likelihood monotonic function.
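(In case anyone wants to reproduce this: the monotonic function that's optimal under every proper scoring rule is what you get from pool-adjacent-violators, i.e. isotonic regression of the 0/1 outcomes on the predictions. Here's a minimal sketch in Python; the predictions and outcomes below are toy values, not my actual dataset.)

```python
def pav_fit(predictions, outcomes):
    """Pool-adjacent-violators: the nondecreasing function of the
    predictions that best fits the 0/1 outcomes -- the maximum-likelihood
    monotonic fit, optimal in mean score under every proper scoring rule."""
    order = sorted(range(len(predictions)), key=lambda i: predictions[i])
    # each block holds [sum of outcomes, count]
    blocks = []
    for i in order:
        blocks.append([outcomes[i], 1])
        # merge adjacent blocks while their means violate monotonicity
        # (compare means by cross-multiplying to avoid division)
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    fitted = []
    for s, n in blocks:
        fitted.extend([s / n] * n)
    return fitted  # calibrated values, in order of increasing prediction

# toy example: six predictions and their resolutions
preds = [0.10, 0.20, 0.30, 0.40, 0.80, 0.90]
outs  = [0, 1, 0, 0, 1, 1]
print(pav_fit(preds, outs))  # → [0.0, 0.333..., 0.333..., 0.333..., 1.0, 1.0]
```

The middle three predictions get pooled into one block with value 1/3, which is the "adaptive bin width" behaviour I mentioned: a run of predictions whose outcomes violate monotonicity collapses into a single bin at its empirical positive rate.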
The confidence bands are for the null hypothesis that the 71 predictions are all perfectly calibrated and independent, so that we can sample the distribution of counterfactual outcomes simply by treating the outcome of each prediction with credence p as an independent coin flip with probability p of positive resolution. I sampled 80,000 sets of these 71 outcomes, and built the confidence bands by computing the corresponding maximum-likelihood monotonic function for each set. The inner band is pointwise 1 sigma, whereas the outer is familywise 2 sigma. So the corner of the black line that exceeds the outer band around predictions of 45% is a p < 0.05 event under perfect calibration, and it looks to me that predictions around 30% to 40% are miscalibrated (underconfident).
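(The sampling step looks roughly like the sketch below. It's simplified: the predictions are random stand-ins rather than the real 71 medians, and it builds a pointwise band from positive fractions in fixed bins, whereas the actual analysis re-fits the maximum-likelihood monotonic function to every simulated set of outcomes.)

```python
import numpy as np

rng = np.random.default_rng(0)
preds = np.sort(rng.uniform(0.05, 0.95, 71))  # stand-ins for the 71 medians

# Null hypothesis: perfect calibration and independence, so each simulated
# outcome is an independent coin flip with probability equal to the prediction.
n_sims = 80_000
sims = rng.random((n_sims, preds.size)) < preds  # (n_sims, 71) booleans

# Simplified band: positive fraction within fixed bins, per simulated set.
edges = np.linspace(0.0, 1.0, 6)
bin_of = np.digitize(preds, edges) - 1
fracs = np.stack([sims[:, bin_of == b].mean(axis=1)
                  for b in range(5) if (bin_of == b).any()])

# Pointwise ±1-sigma band: the central ~68% of simulated fractions per bin.
lo, hi = np.percentile(fracs, [15.9, 84.1], axis=1)
```

A realized positive fraction outside `[lo, hi]` in some bin is then surprising at the 1-sigma level under perfect calibration (the familywise 2-sigma band additionally corrects for looking at the whole curve at once).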
The two rows of tick marks below the x-axis show the 71 predictions, with the upper green row comprising positive resolutions, and the lower red row comprising negatives.
The dotted blue line is a rough estimate of the proportion of questions resolving positive along the range of predictions, based on kernel density estimates of the distributions of predictions giving positive and negative resolutions.
Now, a plot of all 3723 final predictions on the 71 questions.
The black line is again the monotonic function that minimizes mean proper score, but with the 1% and 99% predictions removed because—as I expected—they were especially miscalibrated (overconfident) compared to nearby predictions.
The two black dots indicate the proportion of questions resolving positive for 1% and 99% predictions (around 0.4 and 0.8).
I don’t have any bands indicating dispersion here because these predictions are a correlated mess that I can’t deal with. But for predictions below 20%, the deviation from the diagonal looks large enough that I think it shows miscalibration (overconfidence).
Along the x-axis I’ve plotted kernel density estimates of the predictions resolving positive (green, solid line) and negative (red, dotted line). Kernel densities were computed under log-odds with Gaussian kernels, then converted back to probabilities in [0, 1].
The blue dotted line is again a rough estimate of the proportion resolving positive, using these two density estimates.
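(For the curious, the log-odds KDE trick and the resulting proportion estimate look roughly like this. Toy predictions, and the bandwidth of 0.5 is an arbitrary choice for illustration, not the one I actually used.)

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def kde_1d(samples, bandwidth):
    """Gaussian kernel density estimate on the real (log-odds) line."""
    def f(x):
        z = (np.atleast_1d(x)[:, None] - samples[None, :]) / bandwidth
        return np.exp(-0.5 * z**2).sum(axis=1) / (
            samples.size * bandwidth * np.sqrt(2.0 * np.pi))
    return f

# toy predictions (not the real data), split by resolution
p_pos = np.array([0.55, 0.60, 0.70, 0.80, 0.90, 0.95])
p_neg = np.array([0.05, 0.10, 0.15, 0.30, 0.40])

f_pos = kde_1d(logit(p_pos), bandwidth=0.5)  # density in log-odds space
f_neg = kde_1d(logit(p_neg), bandwidth=0.5)

def density_in_p(f, p):
    # change of variables back to [0, 1]:
    # f_P(p) = f_L(logit(p)) * |d logit/dp| = f_L(logit(p)) / (p (1 - p))
    p = np.atleast_1d(p)
    return f(logit(p)) / (p * (1.0 - p))

def prop_positive(p):
    """Rough estimate of the proportion resolving positive at prediction p,
    weighting each density by its number of predictions."""
    w_pos = p_pos.size * density_in_p(f_pos, p)
    w_neg = p_neg.size * density_in_p(f_neg, p)
    return w_pos / (w_pos + w_neg)
```

The Jacobian factor 1/(p(1-p)) is what converts the smooth log-odds density back into a proper density on [0, 1], and the `prop_positive` ratio is the blue dotted line's estimate: at any prediction level, the chance of a positive resolution is the positive density's share of the total.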
Brief summary:
Median predictions around 30% to 40% resolve positive less often than claimed (underconfident).
User predictions below around 20% resolve positive more often than claimed (overconfident).
User predictions at 1% and 99% are obviously overconfident.
Other than these, calibration seems okay everywhere else; at least, they aren’t obviously off.
I’m very surprised that user predictions look fairly accurate around 90% and 95% (resolving positive around 0.85 and 0.90 of the time). I expected strong overconfidence like that shown by the predictions below 20%.
Also, if one wanted to get into it, could you describe what your process is?
Is there anything in particular that you want to hear about? Or would you rather have a general description of 1) how I’d suggest starting out on Metaculus, and/or 2) how I approach making and updating predictions on the site, and/or 3) something else?
(The FAQ is handy for questions about the site. It’s linked to by the ‘help’ button at the bottom of every page.)