Claude Sonnet 4.5 scored 82% on this metric as of September 29th, 2025. That's three percentage points below the 85% target, achieved one month late; again, remarkably close, particularly given that in August, Opus 4.1 was already scoring 80% on this benchmark.
I disagree that this is close, for several reasons.
It isn’t clear that the “parallel test time” number even counts.
My understanding is that these benchmarks can't be achieved using mechanisms that cost more in compute than having a human perform the task manually, and we have no idea how many parallel attempts are sampled. They use up to 256 in their post on GPQA.
The parallel test-time setup also uses an internal scoring model that might not generalize beyond the repos swe-bench tests.
Sonnet 3.7's 70.3% score did not exist on swebench.com at the time ai-2027 was released (the highest was 65.4%), suggesting the authors were not anchoring on that parallel test-time number to begin with.
If the parallel test-time number does count, the projection is still not close:
The projection was +15% growth over 5 months (by the beginning of September); instead we got +12% over 6 months. That's 33% slower growth (2% a month versus the projected 3% a month).
Looking more recently, the growth from May's Sonnet 4 with parallel compute to now (4 months later) has been 1.8%. At this rate, assuming linearity, 85% won't be crossed for nearly 7 more months, which is over 60% slower than the projection.
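To make the arithmetic explicit, here is a quick back-of-the-envelope sketch using only the scores quoted above; the linear extrapolation is an assumption for illustration, not a forecast:

```python
# Growth-rate arithmetic from the thread: projected vs. realized swe-bench gains.

def monthly_rate(start, end, months):
    """Average percentage-point gain per month."""
    return (end - start) / months

# Projection: +15 points over 5 months -> 3 pts/month.
projected = monthly_rate(0, 15, 5)

# Realized: +12 points over 6 months -> 2 pts/month.
realized = monthly_rate(0, 12, 6)

slowdown = 1 - realized / projected
print(f"{slowdown:.0%} slower than projected")  # 33% slower than projected

# Recent trend: Sonnet 4 (parallel compute) to Sonnet 4.5, +1.8 pts in 4 months.
recent = monthly_rate(0, 1.8, 4)      # 0.45 pts/month
months_to_85 = (85 - 82) / recent     # 3-point gap remaining
print(f"~{months_to_85:.1f} months to reach 85%")  # ~6.7 months to reach 85%
```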
Claude Sonnet 4.5 scored a 62% on this metric, as of September 29th, 2025.
For OSWorld, these aren't even the same benchmarks. ai-2027 referred to the original osworld, while the Sonnet 4.5 score of 61.4% is for osworld-verified. That's a huge difference: Sonnet 3.7 scored 28% on the original osworld but 35.8% on osworld-verified. On the original osworld, today's SOTA might be more like 55.6% (GTA1 w/ GPT-5), a huge miss (~46% slower than projected).
Overall, the realized data suggests something more like an ai-2029 or even later.
Good response. A few things I do want to stress:
I personally see the lower bound as 33% slower. That's enough to stretch 2 years into 3, which is significant.
And again, realistically progress is even slower: the parallel-compute version only increased by 1.8% in 4 months. At current rates we might be another 6 months from hitting 85%, which is quite a prediction gap.
Is this true? They haven’t updated their abstract claiming 72.36% (which was from the old version) and I’m wondering if they simply haven’t re-evaluated.
But yes, looking at the GTA1 paper, you are correct that perf varies a bit between os-world and os-world-verified, so I take back that growth is obviously slower than projected.
All said, I trust swe-bench-verified more for tracking progress regardless:
We’re relying on a well-made benchmark that was done as a second pass by OpenAI. os-world is not that.
Labs seem to be targeting it more, and low-hanging fruit like attaching Python interpreters just doesn't exist for this benchmark (I'm not sure the ai-2027 authors considered this issue when making their os-world predictions).
We are mainly concerned with coding abilities (automated AI research) on the ai-2027 timelines.