I think it’s worth forecasting AI risk timelines instead of GDP timelines, because the former is what we really care about while the latter raises a bunch of economics concerns that don’t necessarily change the odds of x-risk.
I agree that’s probably the more important variable to forecast. On the other hand, if your model of AI is more continuous, you might expect a slow-rolling catastrophe, like a slow takeover of humanity’s institutions, making it harder to determine the exact “date” that we lost control. Predicting GDP growth is the easy way out of this problem, though I admit it’s not ideal.
On a separate note, you might be interested in Erik Brynjolfsson’s work on the economic impact of AI and other technologies. For example, this paper argues that general-purpose technologies have an implementation lag: many people can see the transformative potential of a technology decades before its economic impact is realized.
In fact, I cited this strand of research in my original post on long timelines. It was one of the main reasons why I had long timelines, and can help explain why it seems I still have somewhat long timelines (a median of 2047) despite having made, in my opinion, a strong update.