I agree with this, and would add that scaling along the inference-time axis seems likely to rapidly push performance on certain closed-domain reasoning tasks far beyond human capabilities (possibly as soon as this year), which will serve as a very convincing demonstration of safety to many people and lead to wide adoption of such models for automating intellectual tasks. But without the various forms of experiential and common-sense reasoning humans have, there's no telling where and how such a "superhuman" model may catastrophically mess up, simply because it doesn't understand a lot of things any human being takes for granted. Given the current state of AI development, this strikes me as literally the shortest path to a paperclip maximizer. Well, maybe not quite that catastrophic, but hey, you never know.
In terms of how quickly it accelerates certain adoption-related risks, I don't think this bodes particularly well. I would prefer cognitive capability that is more evenly spread.
Accuracy being halved going from 5.1 to 5.2 suggests one of two things:
1) the new model shows a dramatic regression on data retrieval, which cannot possibly be the desired outcome for a successor; I'm sure this would be noticed immediately on internal tests and benchmarks, and we'd most likely see it manifest in real-world usage as well;
2) the new model refuses to guess much more often when it isn't too sure (being more cautious about giving a wrong answer), which is a desired outcome meant to reduce hallucinations and slop. I'm betting this is exactly what we're looking at, and your Sonnet graph suggests the same.
So if your methodology counts a refusal as lowering accuracy, it doesn't necessarily prove that the base model or the training data mix is different. Teaching a model to refuse on low-signal data is squarely in the domain of SFT and reinforcement learning, and investing heavily in that on the same pretrain would result in something much like the graph you've posted, as the sketch below illustrates.
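To make the scoring point concrete, here's a minimal Python sketch with invented numbers (nothing here is taken from the actual benchmark) showing how the same set of responses yields a very different headline accuracy depending on whether refusals are scored as misses or excluded:

```python
# Toy illustration: how refusal scoring changes "accuracy".
# All counts are invented for the example, not taken from any real benchmark.

answers = (
    ["correct"] * 40    # questions answered correctly
    + ["wrong"] * 10    # confident wrong answers (hallucinations)
    + ["refused"] * 50  # questions the model declines instead of guessing
)

total = len(answers)
correct = answers.count("correct")
wrong = answers.count("wrong")
attempted = correct + wrong

# Scoring rule 1: a refusal counts as a miss, so heavy refusal tanks the
# headline number even if retrieval quality is unchanged.
strict_accuracy = correct / total           # 0.40

# Scoring rule 2: refusals are excluded; accuracy on attempted questions only.
attempted_accuracy = correct / attempted    # 0.80

# Hallucination rate among attempts, the thing the tuning presumably targets.
hallucination_rate = wrong / attempted      # 0.20

print(f"strict accuracy:    {strict_accuracy:.0%}")
print(f"attempted accuracy: {attempted_accuracy:.0%}")
print(f"hallucination rate: {hallucination_rate:.0%}")
```

Under the first rule, a model tuned to refuse when unsure will also refuse on some questions it would have guessed correctly, so the headline number drops even as the hallucination rate improves.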
4o and 5 almost certainly have different base models, since 4o is natively omnimodal and 5 and its derivatives are not; taking that into account, you have to make a lot of weird assumptions to reconcile this discrepancy. 5 and 4.1, on the other hand… everything seems to fall into place neatly when looking in that direction.