Evidence against this hypothesis: kagi is a subscription-only search engine I use. I believe that it’s a small private company with no conflicts of interest. They offer several LLM-related tools, and thus do a bit of their own LLM benchmarking. See here. None of the benchmark questions are online (that’s according to them, but I’m inclined to believe it). Sample questions:
What is the capital of Finland? If it begins with the letter H, respond ‘Oslo’ otherwise respond ‘Helsinki’.
What square is the black king on in this chess position: 1Bb3BN/R2Pk2r/1Q5B/4q2R/2bN4/4Q1BK/1p6/1bq1R1rb w - - 0 1
Given a QWERTY keyboard layout, if HEART goes to JRSTY, what does HIGB go to?
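(For the curious, the QWERTY question can be checked mechanically. Below is a minimal sketch, on my assumption that the intended pattern is shifting each letter one key to the right along its QWERTY row; it verifies only the HEART → JRSTY example given in the question, so as not to spoil the held-out answer.)

```python
# Map each letter to its right-hand neighbour on the same QWERTY row.
# (Row-final keys like P, L, M have no right neighbour and are omitted.)
ROWS = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]
RIGHT = {row[i]: row[i + 1] for row in ROWS for i in range(len(row) - 1)}

def shift_right(word: str) -> str:
    return "".join(RIGHT[c] for c in word)

# Verify the worked example from the question (H→J, E→R, A→S, R→T, T→Y).
assert shift_right("HEART") == "JRSTY"
```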
Their leaderboard is pretty similar to other better-known benchmarks—e.g. here are the top non-reasoning models as of 2025-02-27:
OpenAI gpt-4.5-preview − 69.35%
Google gemini-2.0-pro-exp-02-05 − 60.78%
Anthropic claude-3-7-sonnet-20250219 − 53.23%
OpenAI gpt-4o − 48.39%
Anthropic claude-3-5-sonnet-20241022 − 43.55%
DeepSeek Chat V3 − 41.94%
Mistral Large-2411 − 41.94%
So that’s evidence that LLMs are really getting generally better at self-contained questions of all types, even since Claude 3.5.
I prefer your “Are the benchmarks not tracking usefulness?” hypothesis.
https://simple-bench.com presents an example of a similar benchmark with tricky commonsense questions (such as counting ice cubes in a frying pan on the stove), also with a pretty similar leaderboard. It is sponsored by Weights & Biases and devised by the author of a good YouTube channel, who presents quite a balanced view on the topic there and doesn’t appear to have a conflict of interest either. See https://www.reddit.com/r/LocalLLaMA/comments/1ezks7m/simple_bench_from_ai_explained_youtuber_really for independent opinions on this benchmark.
Bump to that YT channel too. Some of the most balanced AI news videos out there. Really appreciate the work they’re doing.
But isn’t this exactly the OP’s point? These models are exceedingly good at self-contained, gimmicky questions that can be digested and answered in a few hundred tokens. No one is denying that!
Secondly, there’s a high chance that these benchmark questions are simply in these models’ datasets already. They have super-human memory of their training data; there’s no denying that. Are we sure that these questions aren’t in their datasets? I don’t think we can be. First off, you just posted them online. But in a more conspiratorial light, can we really be sure that these companies aren’t training on user data/prompts? DeepSeek is at least honest that they do, but I think it’s likely that the other major labs are as well. It would give you gigantic advantages in beating these benchmarks. And being at the top of the benchmarks means vastly more investment, which gives you a larger probability of dominating the future light-cone (as they say…)
The incentives clearly point this way, at the very minimum!
Are we sure that these questions aren’t in their datasets? I don’t think we can be. First off, you just posted them online.

Questions being online is not a bad thing. Pretraining on the datapoints is very useful, and does not introduce any bias; it is free performance, and everyone should be training models on the questions/datapoints before running the benchmarks (though they aren’t). After all, when a real-world user asks you a new question (regardless of whether anyone knows the answer/label!), you can… still train on the new question then and there, just like when you did the benchmark. So it’s good to do so.
It’s the answers or labels being online which is the bad thing. But Byrnes’s comment and the linked Kagi page do not contain the answers to those three questions, as far as I can see.
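To make “train on the new question then and there” concrete, here is a minimal sketch of that idea (my construction, not anything the comment above specifies; it assumes a Hugging Face causal LM, with gpt2 as a stand-in). It takes one language-modeling gradient step on the question text alone, using no answer or label, before generating:

```python
# Sketch: test-time training on the question itself, with no label involved.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # stand-in for any causal LM
model = AutoModelForCausalLM.from_pretrained("gpt2")
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

def answer(question: str) -> str:
    # 1. One LM gradient step on the raw question text. No ground truth is
    #    used, so no answer can leak; the model merely adapts to the question.
    batch = tok(question, return_tensors="pt")
    model(**batch, labels=batch["input_ids"]).loss.backward()
    opt.step()
    opt.zero_grad()
    # 2. Then generate the answer as usual.
    out = model.generate(**batch, max_new_tokens=32,
                         pad_token_id=tok.eos_token_id)
    return tok.decode(out[0], skip_special_tokens=True)
```

Whether a single step like this helps in practice is an empirical question; the point is only that the question text is usable for training even when nobody knows the answer.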
I expect it matters to the extent we care about whether the generalization to the new question is taking place in the expensive pretraining phase or in the active in-context phase.
Sure, fair point! But generally, people gossiping online about missed benchmark questions, and then likely spoiling the answers, means that a question is now ~ruined for all future training runs. How much of the modest benchmark improvement over time can be attributed to this?
The fact that frontier AIs can basically see and regurgitate everything ever written on the entire internet is hard to fathom!
I could be really petty here and spoil these answers for all future training runs (and make all future models look modestly better), but I just joined this site so I’ll resist lmao …
Yup, I expected that OP would generally agree with my comment.
First off, you just posted them online

They only posted three questions, out of at least 62 (= 1/(.2258 − .2097) ≈ 62.1), perhaps many more than 62. For all I know, they removed those three from the pool when they shared them. That’s what I would do; probably some human will publicly post the answers soon enough. I dunno. But even if they didn’t remove those three questions from the pool, it’s a small fraction of the total.
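For concreteness, that back-of-the-envelope count can be sanity-checked: assuming the two scores behind the 1/(.2258 − .2097) estimate are whole-number fractions k/N rounded to four decimal places, the smallest consistent pool size N is indeed 62.

```python
# Find the smallest question-pool size N such that both reported scores
# could be whole-number fractions k/N, allowing for four-decimal rounding.
lo, hi = 0.2097, 0.2258  # the two scores behind the 1/(.2258-.2097) estimate

def consistent(score: float, n: int, tol: float = 5e-5) -> bool:
    k = round(score * n)          # nearest whole number of correct answers
    return abs(k / n - score) < tol

n = next(n for n in range(1, 10_000) if consistent(lo, n) and consistent(hi, n))
print(n)  # 62 (13/62 ≈ 0.2097 and 14/62 ≈ 0.2258)
```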
You point out that all the questions would be in the LLM companies’ user data after kagi has run the benchmark once (unless kagi changes out all their questions each time, which I don’t think they do, although they do periodically replace easier questions with harder ones). Well:
- If an LLM company is training on user data, they’ll get the questions without the answers, which probably wouldn’t make any appreciable difference to the LLM’s ability to answer them;
- If an LLM company is sending user data to humans as part of RLHF or SFT or whatever, then yes, there’s a chance for ground-truth answers to sneak in that way, but that’s extremely unlikely to happen, because companies can only afford to send an extraordinarily small fraction of user data to actual humans.
Yeah, those numbers look fairly plausible based on my own experience… there may be a flattening of the curve, but it’s still noticeably going up.