Curated. I appreciate this post’s concreteness.
It can be hard to really understand what the numbers in a benchmark mean. To do so, you have to be pretty familiar with the task distribution, which is often a little surprising. And if you've bothered to get that familiar with it, you probably already know how the LLM performs. So it's hard to be sure you're judging the difficulty accurately, rather than using your sense of the LLM's intelligence to infer the task difficulty.
Fortunately, a Pokémon game involves a bunch of different tasks, and I'm pretty familiar with them from childhood Game Boy sessions. So LLM performance on the game can provide some helpful intuitions about LLM performance in general. Of course, you don't get all the niceties of statistical power and so on, but I still find it a helpful data source to include.
This post does a good job abstracting some of the subskills involved and provides lots of deliciously specific examples for the claims. It’s also quite entertaining!