Tricky hypothesis 1: ASI will in fact be developed in a world that looks very similar to today’s (e.g. because sub-ASI AIs will have a negligible effect on the world, or because ASI will be developed very soon).
Tricky hypothesis 2: Even if the world does change, the differences between today’s world and the world in which ASI is developed won’t matter for the prognosis.
Both of these hypotheses look relatively more plausible than they did four years ago, don’t they? Looking back at this section from the 2021 takeoff speeds conversation gives a sense of how people were thinking about this kind of thing at the time.
AI-related investment and market caps are exploding, but not because actual revenue is “in the trillions”; the boom is mostly speculation and investment in compute and research.
Deployed AI systems can already provide a noticeable speed-up to software engineering and white-collar work broadly, but it’s not clear this is having much impact on AI research specifically (and especially not a differential impact on alignment research).
Maybe we will still get widely deployed, transformative AI-enabled robotics, biotech, research tools, etc. that make a difference in some way prior to ASI, but today’s SoTA AIs are routinely blowing through tougher and tougher benchmarks before their actual deployment produces widespread economic effects.
I think most people in 2021 would have been pretty surprised to learn that in 2025 we have widely available LLMs with gold-medal-level performance on the IMO which nonetheless aren’t yet having much larger economic effects. But in relative terms it seems like you and Christiano should be more surprised than Yudkowsky and Soares.