I’ve launched Forecast Labs, an organization focused on using AI forecasting to help reduce AI risk.
Our initial results are promising: our AI model outperforms superforecasters on Manifold Markets questions, as evaluated by ForecastBench. A summary of the results is available on our website: https://www.forecastlabs.org/results.
This is just preliminary scaffolding, and there’s significant room for improvement. The long-term vision is to develop these AI forecasting capabilities to the point where we can construct large-scale causal models. These models would help identify the key decisions and interventions needed to navigate the challenges of advanced AI and minimize existential risk.
I’m sharing this here to get feedback from the community and connect with others interested in this approach. The goal is to build powerful tools for foresight, and I believe that’s a critical component of the AI safety toolkit.