From one of the linked articles, Christiano talking about takeoff speeds:
I believe that before we have incredibly powerful AI, we will have AI which is merely very powerful. This won’t be enough to create 100% GDP growth, but it will be enough to lead to (say) 50% GDP growth. I think the likely gap between these events is years rather than months or decades.
In particular, this means that incredibly powerful AI will emerge in a world where crazy stuff is already happening (and probably everyone is already freaking out). If true, I think it’s an important fact about the strategic situation.
and in your post:
Still, the general strategy of “dealing with things as they come up” is much more viable under continuous takeoff. Therefore, if a continuous takeoff is more likely, we should focus our attention on questions which fundamentally can’t be solved as they come up.
I agree that a continuous/slow takeoff is more likely than a fast takeoff, though I hold that belief (like most of my beliefs about AGI timelines) with low confidence. But a badly managed continuous/slow takeoff still seems like an extreme danger, and a case where it would be too late to deal with many problems in, e.g., the same year that they arise. Is that the kind of scenario you're imagining?