I think the ability to autonomously find novel problems to solve will emerge as reasoning models scale up. It will emerge because it is instrumental to solving difficult problems.
This, of course, is not a sufficient reason on its own. (Counterexample: by the same logic, telepathy would emerge as evolution improves organisms, because it would be instrumental to navigating social situations.) Being instrumental means there is an incentive, or more precisely a downward slope in the loss function toward regions of model space with that property, which is one required piece. But the ability also has to be feasible: if the parameter space contains no elements that are good at it, it doesn't matter whether there's a downward slope.
Fwiw I agree with this:
Current LLMs are capable of solving novel problems when the user does most of the work: when the user lays the groundwork and poses the right question for the LLM to answer.
… though, like you, I think posing the right question is the hard part, so imo this is not very informative.