[Question] Resources on quantifiably forecasting future progress or reviewing past progress in AI safety?

I’m seeking resources or prior work on forecasting overall progress in AI safety (especially on mitigating the most catastrophic risks) — for example, attempts to estimate how much existential risk from AI can be expected to be reduced over the medium-term future (i.e., the time range people generally expect to have before such risks become legitimately feasible). Ideally, I’d like resources that try to quantify the reduction in risk, and/or that examine technical and governance work separately (or, even better, both).

Failing that, the next best alternative would be resources estimating the reduction in AI risk from work done so far (again, ideally quantified, even if only as an overview of progress on alignment benchmarks). And failing that, any pointers for someone trying to do this kind of work themselves. This feels like something that should already exist, but I couldn’t find much.

(I do suspect that any such estimates would naturally be extremely uncertain, and perhaps essentially worthless to make at the current stage of AI development.)