Basically agree with this in the near term, though I do think that in the longer term, especially in the 2030s, continual learning will bring the dangers of AGI, and will probably lead to faster takeoffs than in purely LLM-based takeoff worlds.
But yes, for at least the next 5 years, continual learning will differentially wake the world up to AGI without bringing the dangers of AGI. Unlike many on here, though, I don’t expect that to lead to policy that lets us reduce x-risk from AI much, for the reasons Anton Leicht states here. In short: even if accelerationist power declines, that doesn’t necessarily mean AI existential safety can take advantage of it. AI safety money will decline as a share of the total compared to money for various job-protection lobbies, and while accelerationists won’t be able to defeat entire anti-AI bills, it will still remain easy for them to neuter AI safety bills enough to make the EV of politics for reducing existential risk either much lower than that of technical AI safety, or even outright worthless/negative, depending on the politics of AI.
There’s a Dwarkesh quote on continual learning that I really want to emphasize here:
“Solving” continual learning won’t be a singular one-and-done achievement. Instead, it will feel like solving in context learning. GPT-3 demonstrated that in context learning could be very powerful (its ICL capabilities were so remarkable that the title of the GPT-3 paper is ‘Language Models are Few-Shot Learners’). But of course, we didn’t “solve” in-context learning when GPT-3 came out—and indeed there’s plenty of progress still to be made, from comprehension to context length. I expect a similar progression with continual learning. Labs will probably release something next year which they call continual learning, and which will in fact count as progress towards continual learning. But human level continual learning may take another 5 to 10 years of further progress.