Looking back on this comment, I’m pleased to note how well the strengths of reasoning models line up with the complaints I made about “non-reasoning” HHH assistants.
Reasoning models provide 3 of the 4 things I said “I would pay a premium for” in the comment: everything except for quantified uncertainty[1].
I suspect that capabilities are still significantly bottlenecked by limitations of the HHH assistant paradigm, even now that we have “reasoning,” and that we will see more qualitative changes analogous to the introduction of “reasoning” in the coming months/years.
An obvious area for improvement is giving assistant models a more nuanced sense of when to check in with the user because they’re confused or uncertain. This will be really important for making autonomous computer-using agents that are actually useful, since they need to walk a fine line between “just do your best based on the initial instruction” (which predictably causes Sorcerer’s Apprentice situations[2]) and “constantly nag the user for approval and clarification” (which defeats the purpose of autonomy).
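(To make that tradeoff concrete, here’s a toy sketch of the kind of check-in policy I have in mind. `propose_next_step` and `execute_step` are hypothetical placeholders for whatever model call and action execution you’d actually use; no real agent framework’s API is implied.)

```python
# Toy sketch (not any real agent framework): proceed autonomously while
# self-reported confidence stays above a threshold, check in with the user
# otherwise. `propose_next_step` and `execute_step` are hypothetical
# placeholders for the model call and the action execution.

def run_agent(task, propose_next_step, execute_step, confidence_threshold=0.8):
    context = task
    while True:
        step, confidence = propose_next_step(context)
        if step is None:  # the model judges the task complete
            return
        if confidence < confidence_threshold:
            # The anti-Sorcerer's-Apprentice branch: ask rather than guess.
            answer = input(f"Unsure about {step!r} -- proceed, or clarify? ")
            context += f"\nUser clarification: {answer}"
        else:
            execute_step(step)
            context += f"\nDone: {step}"
```

The whole design question is where that threshold sits and how trustworthy the self-reported confidence is, which is exactly the “nuanced sense of when to check in” that current assistants lack.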
[1] And come to think of it, I’m not actually sure about that one. Presumably if you just ask o1 / R1 for a probability estimate, they’d exhibit better calibration than their “non-reasoning” ancestors, though I haven’t checked how large the improvement is.
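If I ever do check, the measurement itself is simple enough; roughly the following, where `elicit_probability` is a hypothetical stand-in for however you’d query the model for a probability on a binary question with a known answer:

```python
# Rough sketch of a calibration check: elicit probabilities on binary
# questions with known 0/1 answers, then report the Brier score
# (lower = better) and per-bucket observed frequencies.
# `elicit_probability(question)` is a hypothetical placeholder for the
# actual model call.

from collections import defaultdict

def calibration_check(questions, answers, elicit_probability, n_buckets=10):
    probs = [elicit_probability(q) for q in questions]
    brier = sum((p - a) ** 2 for p, a in zip(probs, answers)) / len(probs)

    buckets = defaultdict(list)  # bucket index -> outcomes falling in it
    for p, a in zip(probs, answers):
        buckets[min(int(p * n_buckets), n_buckets - 1)].append(a)

    # Well-calibrated: observed frequency in each bucket ~ bucket midpoint.
    per_bucket = {
        round((b + 0.5) / n_buckets, 2): sum(outs) / len(outs)
        for b, outs in sorted(buckets.items())
    }
    return brier, per_bucket
```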
[2] Note that “Sorcerer’s Apprentice situations” are not just “alignment failures,” they’re also capability failures: people aren’t going to want to use these things if they expect that they will likely get a result that is not-what-they-really-meant in some unpredictable, possibly inconvenient/expensive/etc. manner. Thus, no matter how cynical you are about frontier labs’ level of alignment diligence, you should still expect them to work on mitigating the “overly zealous unchecked pursuit of initially specified goal” failure modes of their autonomous agent products, since these failure modes make their products less useful and make people less willing to pay for them.
(This is my main objection to the “people will give the AI goals” line of argument, BTW. The exact same properties that make this kind of goal-pursuit dangerous also make it ineffectual for getting things done. If this is what happens when you “give the AI goals,” then no, you generally won’t want to give the AI goals, at least not after a few rounds of noticing what happens to others when they try to do it. And these issues will be hashed out very soon, while the value of “what happens to others” is not an existential catastrophe, just wasted money or inappropriately deleted files or other such things.)