I think you’re measuring the right thing (decisions changed) but blaming the wrong cause. The field underperformed because:
The questions are at the wrong altitude. “P(AGI by 2027)” is fun to trade but hard to act on. The decision-relevant questions (e.g., will this research direction work, will this eval saturate first, will this intervention move its metric) rarely get asked because they’re too narrow and too poorly funded to attract pro-forecasters. Moreover, such narrow questions usually depend on internal information that is difficult for outside forecasters to obtain.
Good forecasts aren’t reaching decision-makers. There’s no apparent pipeline from forecasting platforms to decisions at any noticeable scale. I’d argue forecasting is still “niche” even among the general population.
AI forecasters fix the first problem directly. Calibrated answers to arbitrarily narrow questions mean you can finally ask the questions that bind to actual decisions, with the internal information and predictive power needed to forecast them correctly.
If you had an expert AI forecasting on your daily decisions, would you not listen?
I don’t think the right update from the last decade is “stop funding”. I think it’s “stop funding platforms and tournaments, start funding question design, decision integration, and automated forecasting.”