The ideas in this post greatly influence how I think about AI timelines, and I believe they constitute the current single best way to forecast timelines.
A +12-OOMs-style forecast, like a bioanchors-style forecast, has two components:
an estimate of (effective) compute over time (including factors like compute getting cheaper and algorithms/ideas getting better in addition to spending increasing), and
a probability distribution on the (effective) training compute requirements for TAI (or equivalently the probability that TAI is achievable as a function of training compute).
Unlike bioanchors, a +12-OOMs-style forecast answers the second component by considering various kinds of possible transformative AI systems and using some combination of existing-system performance, scaling laws, principles, miscellaneous arguments, and inside-view intuition to estimate how much compute each would require. Considering the “fun things” that could be built with more compute lets us use more inside-view knowledge than bioanchors-style analysis does, while not committing to a particular path to TAI the way roadmap-style analysis would.
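The two-component structure above can be sketched numerically: multiply out an effective-compute trajectory with a probability distribution over TAI compute requirements to get a probability of TAI by a given year. This is a minimal illustrative sketch, not the post's actual model; every number in it (the growth rate, the requirements distribution) is a hypothetical placeholder.

```python
import math

def effective_compute_ooms(year, base_year=2023, ooms_per_year=0.7):
    # Component 1: effective training compute available over time, in
    # orders of magnitude (OOMs) above the largest base-year training run.
    # Bundles spending growth, hardware price-performance, and algorithmic
    # progress into a single assumed (placeholder) growth rate.
    return ooms_per_year * (year - base_year)

def p_tai_achievable(ooms, median_ooms=6.0, sigma_ooms=3.0):
    # Component 2: probability that TAI is achievable given this many extra
    # OOMs of effective compute -- here, the CDF of a hypothetical normal
    # distribution over log-compute requirements.
    z = (ooms - median_ooms) / sigma_ooms
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for year in (2030, 2040, 2050):
    p = p_tai_achievable(effective_compute_ooms(year))
    print(f"P(TAI achievable by {year}) ~ {p:.2f}")
```

The point of the decomposition is that disagreements can be localized: a bioanchors-style and a +12-OOMs-style forecast can share the first function while deriving the second from entirely different evidence.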
In addition to introducing this forecasting method, this post has excellent analysis of some possible paths to TAI.