None of this helps with automatically acquiring deep skills like playing good chess or fluency in a novel topic of math, and so these aren’t the straight lines on graphs directly relevant to crossing the AGI threshold of full automation of civilization.
Humans don’t know how to automate an AI’s learning of arbitrary deep skills that only come up post-deployment, but they can manually add such skills with RLVR at training time, by developing RL environments, graders, and tasks. AI might automate this process not by doing what humans couldn’t and inventing algorithmic advancements for low-level acquisition of deep skills, but instead by merely being smart and skilled enough to do all the same things that humans currently do to make it work “manually”. So in principle AI might become able to automatically acquire deep skills if it’s capable enough at routine AI R&D, even if it doesn’t have the capability to acquire deep skills at a low level, the way humans do, and doesn’t have the capability to invent substantial algorithmic innovations that humans haven’t invented yet. Some of the straight lines on graphs are relevant to when this might happen, and so indirectly they are relevant to crossing the AGI threshold.
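To make the “ingredients” concrete, here is a minimal sketch of what setting up task-specific RLVR involves: a task generator, a verifiable grader, and an environment loop that turns policy outputs into reward signals. All names are hypothetical illustrations, not from any real RL library, and the toy arithmetic task stands in for whatever deep skill is being trained.

```python
# Hypothetical sketch of the human-supplied ingredients for task-specific
# RLVR: task generation, a verifiable grader, and an environment loop.
import random
from dataclasses import dataclass


@dataclass
class Task:
    prompt: str
    answer: int  # ground truth, visible only to the grader


def make_task(rng: random.Random) -> Task:
    """Task generation: humans author a distribution of problems."""
    a, b = rng.randint(2, 99), rng.randint(2, 99)
    return Task(prompt=f"Compute {a} * {b}.", answer=a * b)


def grade(task: Task, completion: str) -> float:
    """Verifiable grader: reward 1.0 iff the final token parses to the answer."""
    try:
        return 1.0 if int(completion.strip().split()[-1]) == task.answer else 0.0
    except ValueError:
        return 0.0


def rollout_rewards(policy, n: int, seed: int = 0) -> list[float]:
    """Environment loop: sample tasks, query the policy, score completions.
    A real RLVR setup would feed these rewards into a policy-gradient update."""
    rng = random.Random(seed)
    rewards = []
    for _ in range(n):
        task = make_task(rng)
        rewards.append(grade(task, policy(task.prompt)))
    return rewards
```

The point of the sketch is that nothing here is algorithmically deep: each piece is routine engineering judgment about what tasks to generate and what counts as verifiable success, which is exactly the kind of work a sufficiently capable AI could do for itself.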
I don’t think in-context learning, or even true continual learning with anything like the current methods, can automate acquisition of deep skills at a low level, because only RLVR currently works for that purpose; context persistence is essentially unrelated. But these things might get AIs to the level of capability where they can do the same things as the humans who set up the ingredients for task-specific RLVR.