One problem is that log-loss is not tied that closely to the types of intelligence we care about. Extremely low log-loss necessarily implies an extremely high ability to mimic a broad variety of patterns in the world, but that's sort of all you get. Moderate improvements in log-loss may or may not translate to capabilities of interest, and even when they do, the story connecting log-loss numbers to capabilities we care about is not obvious. (E.g., what level of log-loss translates to the ability to do innovative research in neuroscience? How could you know before you got there?)
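For concreteness, the metric in question is just the average negative log-probability the model assigns to the observed next tokens. A minimal sketch (the probabilities here are made up for illustration):

```python
import math

def log_loss(token_probs):
    """Average negative log-likelihood over the probabilities a model
    assigned to the tokens that actually occurred (hypothetical values)."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

# A model that puts high probability on each observed token scores low;
# a confused model scores much higher -- but neither number tells you
# directly what the model can *do*.
good_model = log_loss([0.9, 0.8, 0.95])
bad_model = log_loss([0.2, 0.1, 0.3])
```

The point of the sketch is that the number is a pure pattern-matching score: nothing in its definition references any particular downstream capability.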
When there were rampant rumors about an AI slowdown in 2024, the speculation in news articles often mentioned the "scaling laws" but never (in my haphazard reading) made a clear distinction between (a) frontier labs seeing that the scaling laws were violated, i.e., improvements in loss really slowing down, (b) a slowdown in improvements on other metrics, and (c) frontier labs facing a qualitative slowdown, such as a feeling that GPT-5 doesn't feel like as big a jump as GPT-4 did. Often these concepts were actively conflated.
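To make distinction (a) precise: the "scaling laws" at issue are typically power-law fits of loss against parameters and data, such as the Chinchilla-style form L(N, D) = E + A/N^α + B/D^β. A sketch, using the published Chinchilla fit constants purely as illustrative values:

```python
def predicted_loss(n_params, n_tokens,
                   E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Chinchilla-style scaling law: L(N, D) = E + A/N^alpha + B/D^beta.
    Constants are the published Chinchilla fits, used here illustratively;
    each lab fits its own."""
    return E + A / n_params**alpha + B / n_tokens**beta
```

A violation in sense (a) would mean measured losses coming in above this kind of curve as N and D grow, which is a different (and more checkable) claim than a vibes-based sense that the latest model is underwhelming.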
Strongly agree. I was making a narrower point, but the metric is clearly different from the goal—if anything it's surprising that we see as much correlation as we do, given how heavily the metric has been optimized.