Thanks for the great post. As someone who builds these kinds of bots, I find this really interesting.
One thought: I think the way we prompt and guide these AI models makes a huge difference in their forecasting accuracy. We’re still very new to figuring out the best techniques, so there’s a lot of room for improvement there.
Because of that, the performance on benchmarks like ForecastBench might not show the full picture. Better scaffolds could unlock big gains quickly, so I lean toward an earlier date for AI reaching the level of top human forecasters.
That’s why I’m paying closer attention to the Metaculus tournaments. They feel like a better test of what a well-guided AI can actually do.
But the tournaments only provide head-to-head scores, not direct comparisons with top human forecasting performance, whereas ForecastBench has clear human baselines.
It would be helpful if the Metaculus tournament leaderboards also reported Brier scores, even if they would not be directly comparable to human scores since the humans make predictions on fewer questions.
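For anyone unfamiliar, the Brier score is just the mean squared error between predicted probabilities and realized outcomes. Here is a minimal sketch using the standard definition (not Metaculus's actual scoring code, and the question sets are made up), which also illustrates why scores over different question sets are not directly comparable:

```python
# Minimal sketch of a Brier score: mean squared difference between
# predicted probabilities and binary outcomes (standard definition,
# not Metaculus's exact scoring implementation).

def brier_score(forecasts, outcomes):
    """Mean of (p - o)^2 over questions, with p in [0, 1] and o in {0, 1}."""
    assert len(forecasts) == len(outcomes) and len(forecasts) > 0
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: a bot answering every tournament question
# vs. a human answering only a subset of them.
bot_score = brier_score([0.9, 0.2, 0.6, 0.7], [1, 0, 0, 1])
human_score = brier_score([0.8, 0.3], [1, 0])

# The two numbers are averages over different question sets, which is why
# they are not directly comparable without a shared baseline.
print(bot_score, human_score)
```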
Indeed!