Epoch AI’s new evaluation for Gemini 2.5 Flash Preview is broken.
On their AI Benchmarking dashboard, the newest Gemini 2.5 Flash model is listed with an accuracy of 4% ± 0.71% on GPQA Diamond, even though Google’s official announcement reports over 80%, and even though GPQA is a multiple-choice test with 4 options, so random guessing alone would score around 25%:
It’s because of formatting issues. Helpfully, Epoch provides the logs from the evaluation, and the model simply hasn’t been responding in the format the grader expects.
For example, in the first sample from the logs, the correct answer is listed as “B”, but the model wrapped its answer in LaTeX as $\boxed{B}$, so it was scored incorrect. There are plenty of other examples like this.
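To make the failure mode concrete, here’s a minimal sketch of how a strict exact-match grader would mark $\boxed{B}$ wrong while a slightly more lenient extractor recovers it. This is purely illustrative; `grade_strict` and `grade_lenient` are hypothetical functions I made up, not Epoch’s actual grading code:

```python
import re

def grade_strict(response: str, target: str) -> bool:
    # Exact string match: anything other than a bare letter fails.
    return response.strip() == target

def grade_lenient(response: str, target: str) -> bool:
    # Hypothetical fix: pull the answer letter out of common wrappers
    # like LaTeX \boxed{B}, "(B)", or "Answer: B" before comparing.
    # (A real harness would need more patterns than this sketch has.)
    match = re.search(
        r"\\boxed\{\s*([A-D])\s*\}"                      # \boxed{B}
        r"|\b(?:answer(?:\s+is)?[:\s]+)?\(?([A-D])\)\b"  # (B), Answer: B
        r"|^\s*([A-D])\s*$",                             # bare "B"
        response, re.IGNORECASE,
    )
    if not match:
        return False
    letter = next(g for g in match.groups() if g).upper()
    return letter == target

response = r"$\boxed{B}$"
print(grade_strict(response, "B"))   # False -- scored incorrect, as in the logs
print(grade_lenient(response, "B")) # True  -- the model did answer B
```

Well-built harnesses typically normalize or extract answers along these lines before comparing, precisely because strong models love wrapping their answers in $\boxed{}$.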
This looks like a straightforward answer-parsing bug to me, not a real capability regression.