If I’m interpreting the paper correctly, the k at which base models start beating RL’d models is a per-task number, k can be arbitrarily high for a given task, and the 50-400 range was specific to the kind of tasks the authors chose, within a narrow difficulty band.
Let’s say you have a base model which performs at 35% on 5-digit addition, and an RL’d model which performs at 99.98%. Even if the failures of the RL’d model are perfectly correlated, you’d need k=20 before base@k exceeds the RL’d model’s pass@k. And the failures of the RL model won’t be perfectly correlated—but this paper claims that the failures of the RL model will be more correlated than the failures of the base model, so the lines will cross eventually, and “eventually” was @50 to @400 on the tasks they tested.
But you could define a task where you pass in 10 pairs of 5-digit numbers and the model must correctly find the sum of each pair. The base model will probably succeed at this task somewhere on the order of 0.35^10, or about 0.003% of the time, while the RL’d model should succeed about 99.8% of the time. So for this task we’d expect something like k=220,000 assuming perfectly correlated failures in the RL model, and higher otherwise.
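To make that arithmetic concrete, here’s a minimal sketch (Python, using the illustrative numbers above rather than anything from the paper) of where base@k overtakes an RL’d model whose failures are perfectly correlated:

```python
import math

def crossover_k(p_base: float, rl_pass1: float) -> int:
    """Smallest k at which base pass@k = 1 - (1 - p_base)^k exceeds rl_pass1.

    Assumes independent base-model samples and perfectly correlated
    RL'd-model failures, so the RL'd model's pass@k stays flat at its pass@1.
    """
    return math.ceil(math.log(1.0 - rl_pass1) / math.log(1.0 - p_base))

# Single 5-digit addition: base at 35%, RL'd model at 99.98%.
print(crossover_k(0.35, 0.9998))          # -> 20

# Composite task: 10 additions that must all be correct.
print(crossover_k(0.35**10, 0.9998**10))  # -> roughly 225,000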
Also I suspect that there is some astronomically high k such that monkeys at a keyboard (i.e. “output random tokens”) will outperform base models for some tasks by the pass@k metric.
> Also I suspect that there is some astronomically high k such that monkeys at a keyboard (i.e. “output random tokens”) will outperform base models for some tasks by the pass@k metric.
It would be an extreme bias-variance tradeoff, yes.
> The base model will probably succeed at this task somewhere on the order of 0.35^10, or about 0.003% of the time, while the RL’d model should succeed about 99.8% of the time.
The interesting concept in the paper is the location of the crossover point, which seems remarkably stable (for a given task) across specific RL techniques and amounts of RL training. It can be measured experimentally for a task by doing a little RL training, and RL@1 performance won’t get better than that crossover with more training. So you’re unlikely ever to get the RL model to succeed 99.8% of the time at pass@1 unless the base model’s performance at the crossover point (measured against even a weak RL model) was already higher than 99.8%.
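As a rough illustration of what “measure it experimentally” could look like: sample n completions per problem from both models, use the standard unbiased pass@k estimator, and scan for the first k where the curves cross. This is just a sketch of that procedure, not code from the paper:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n samples of which c were correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def estimate_crossover(base_counts, rl_counts, n, k_max=1024):
    """base_counts / rl_counts: per-problem correct counts out of n samples.

    Returns the smallest k where the base model's mean pass@k exceeds the
    RL'd model's, or None if they don't cross by k_max.
    """
    for k in range(1, min(n, k_max) + 1):
        base = sum(pass_at_k(n, c, k) for c in base_counts) / len(base_counts)
        rl = sum(pass_at_k(n, c, k) for c in rl_counts) / len(rl_counts)
        if base > rl:
            return k
    return None
```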
The crossover point for a task probably depends on things that can be changed, such as the strength of the pretrained model, the size/relevance of the verifiable task dataset, or possibly the inference-time reasoning budget. The issue isn’t, for example, as straightforward as the RL policy losing entropy (one way of framing reduced exploration): DAPO specifically addresses that issue (otherwise present in vanilla GRPO), yet the pass@k plot for DAPO (Figure 7, top) barely moves compared to other methods, and in their experiment it’s even slightly worse at the crossover point.
So in the context of this paper it remains unclear how to move the plot at all—that is, how to get RL@1 above the ceiling that base@k already sets at the crossover point, a ceiling you can already locate by comparing against some RL method run for only 100-500 steps.