Planned summary for the Alignment Newsletter:

This post argues that the debate over takeoff speeds is narrower than you might think: people are arguing for either discontinuous progress or continuous but fast progress. Both camps agree that once AI reaches human-level intelligence, progress will be extremely rapid; the disagreement is primarily about whether there is already quite a lot of progress _before_ that point. As a result, these differences don’t constitute a “shift in arguments on AI safety”, as some have claimed.
The post also goes through some of the arguments and claims that people have made in the past, which I’m not going to summarize here.
Planned opinion:
While I agree that the debate about takeoff speeds is primarily about the path by which we get to powerful AI systems, that path seems like a pretty important question to me, with <@many ramifications@>(@Clarifying some key hypotheses in AI alignment@).