Is this an accurate summary:
3.5 substantially improved performance for your use case and 3.6 slightly improved performance.
The o-series models didn’t improve performance on your task. (And presumably 3.7 didn’t improve perf.)
So, by “recent model progress feels mostly like bullshit” I think you basically just mean “reasoning models didn’t improve performance on my application and Claude 3.5/3.6 Sonnet is still best”. Is this right?
I don’t find this state of affairs that surprising:
Without specialized scaffolding o1 is quite a bad agent and it seems plausible your use case is mostly blocked on this. Even with specialized scaffolding, it’s pretty marginal. (This shows up in the benchmarks AFAICT, e.g., see METR’s results.)
o3-mini is generally a worse agent than o1 (aside from being cheaper). o3 might be a decent amount better than o1, but it isn’t released.
Generally, Anthropic models are better than other models for real-world coding and agentic tasks, and this mostly shows up in the benchmarks. (Anthropic models tend to slightly overperform their benchmarks relative to other models, I think, but they also perform quite well on coding and agentic SWE benchmarks.)
I would have guessed you’d see performance gains with 3.7 after coaxing it a bit. (My low confidence understanding is that this model is actually better, but it is also more misaligned and reward hacky in ways that make it less useful.)
Our experience so far is that while reasoning models don’t improve performance directly (3.7 is better than 3.6, but 3.7 with extended thinking is NOT better than plain 3.7), they do improve it indirectly, because the thinking trace helps us debug prompts and tool output when models misunderstand them. This was not the result we expected, but it is the case.
Completely agree with this. While there are some novel applications possible with reasoning models, the main value has been the ability to trace specific chains of thought and redefine/reprompt accordingly. Makes the system (slightly) less of a black box.
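For illustration, pulling that thinking trace out of an extended-thinking response looks roughly like this. A minimal sketch using the Anthropic Python SDK; the model ID, token budgets, example tool, and prompt are placeholder assumptions, not details from the setups described above:

```python
# Minimal sketch: surface the thinking trace from a Claude 3.7 extended-thinking
# call so prompt/tool misunderstandings are visible. Model ID, budgets, and the
# example tool are assumptions, not details from this thread.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},
    tools=[{
        "name": "read_file",
        "description": "Return the contents of a file in the working directory",
        "input_schema": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    }],
    messages=[{"role": "user", "content": "Summarize what config.yaml controls."}],
)

# The content interleaves thinking blocks with normal output; logging the
# thinking next to the eventual tool call is what makes it clear when the
# model has misread the prompt or the tool definition.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking)
    elif block.type == "tool_use":
        print("[tool_use]", block.name, block.input)
    elif block.type == "text":
        print("[text]", block.text)
```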
Just edited the post because I think the way it was phrased kind of exaggerated the difficulties we’ve been having applying the newer models. 3.7 was better, as I mentioned to Daniel, just underwhelming and not as big a leap as 3.6 was, and certainly not as big as 3.5.
How long do you[1] expect it to take to engineer scaffolding that will make reasoning models useful for the kind of stuff described in the OP?
[1] You = Ryan firstmost, but anybody reading this secondmost.
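For concreteness about what that scaffolding work involves: the core of an agent scaffold is just a loop that exposes tools to the model, runs whatever tool calls it makes, and feeds the results back until it stops asking. A minimal sketch below, assuming the Anthropic Python SDK; the model ID, the single read_file tool, and the helper names are illustrative stand-ins, not the OP’s setup. The open-ended part, and presumably where the engineering time would go, is everything this stub glosses over: sandboxing tool execution, designing the tools themselves, context management, and error recovery.

```python
# Minimal sketch of agent scaffolding: a loop that hands the model tools,
# executes its tool calls, and feeds results back until it stops. The model ID,
# the single example tool, and the stub dispatcher are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

TOOLS = [{
    "name": "read_file",
    "description": "Return the contents of a file in the working directory",
    "input_schema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}]

def run_tool(name: str, args: dict) -> str:
    # Real scaffolding would sandbox and validate this; here it's a bare stub.
    if name == "read_file":
        with open(args["path"]) as f:
            return f.read()
    return f"unknown tool: {name}"

def run_agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        response = client.messages.create(
            model="claude-3-7-sonnet-20250219",
            max_tokens=2048,
            tools=TOOLS,
            messages=messages,
        )
        messages.append({"role": "assistant", "content": response.content})
        if response.stop_reason != "tool_use":
            # Model is done; return its final text answer.
            return "".join(b.text for b in response.content if b.type == "text")
        # Execute each requested tool call and send the results back.
        results = [
            {
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": run_tool(block.name, block.input),
            }
            for block in response.content
            if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
    return "ran out of steps"
```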