The upshot is that I find it difficult to accept the AI 2027 model as strong evidence for short timelines.
Here you’re using “short timelines” to refer to our takeoff model I think, which is what you spend most of the post discussing? Seems a bit confusing if so, and you also do this in a few other places.
Correct. Am I wrong in thinking that it’s usual to use the word “timelines” to refer to the entire arc of AI progress, including both the periods covered in the “Timelines Forecast” and the “Takeoff Forecast”? But since this is all in the context of AI 2027, I should have clarified.
I think that in AI safety lingo people usually use “timelines” to mean time to AGI, and “takeoff” to mean something like the speed of progression after AGI.