So, again: what could we observe at the start of 2028 that would give us pause in this way?
Very little. I’ve been seriously thinking about ASI since the early 2000s. Around 2004-2007, I put my timeline at 2035-2045, depending on the rate of GPU advancement. Given how hardware and LLM progress actually played out, my timeline is currently around 2035.
I do expect LLMs (as we know them now) to stall before 2028, if they haven’t already. Something is missing. I have very concrete guesses as to what is missing, and it’s an area of active research. But I also expect the missing piece to add less than a single power of 10 to existing training and inference costs. So once someone publishes it in any kind of convincing way, I’d estimate better than an 80% chance of uncontrolled ASI within 10 years.
Now, there are lots of things I could see by 2035 that would cause me to update away from this scenario. I did, in fact, update away from my 2004-2007 predictions by 2018 or so, largely because nothing like ChatGPT existed by that point. GPT-3 made me nervous again, and GPT-3.5 Instruct caused me to update all the way back to my original timeline. And if we’re still stalled in 2035, then sure, I’ll update heavily away from ASI again. But I’m already predicting that the LLM S-curve will flatten out around now, resulting in less investment in Chinchilla scaling and more investment in algorithmic improvement. And since algorithmic improvement is (1) hard to predict and (2) where I think the actual danger lies, I don’t intend to make any near-term updates away from ASI.