> Worse performance was due to using a worse/less adapted agency scaffold
Are you sure? I’m pretty sure that was cited as *one* of the possible reasons, but it wasn’t confirmed anywhere. I don’t know whether minor scaffolding differences could have that large an effect (-15%?) on a math benchmark, but if they did, that should have been accounted for in the first place. I also don’t think other models were tested with scaffolds specifically engineered to get them a higher score.
> December-2024 o3 and the public o3 are indeed entirely different models, but I don’t think it implies the December one was tailored for ARC-AGI.
As per Arc Prize and what they said OpenAI told them, the December version (“o3-preview”, as Arc Prize named it) ran on a compute tier above that of any publicly released model. Not only that, they say the public version of o3 didn’t undergo any RL for ARC-AGI, “not even on the train set”. That seems suspicious to me, because once you train a model on something, you can’t easily untrain it; as per OpenAI, the ARC-AGI train set was “just a tiny fraction of the o3 train set” and, once again, the model used for the evaluations is “fully general”. This leaves a few possibilities:

- o3-preview was trained on the ARC-AGI train set close to the end of the training run, and OpenAI was easily able to load an earlier checkpoint to undo that, then chose not to train on it again for unknown reasons; or
- the public version of o3 was retrained from scratch/a very early checkpoint and, again, not trained on the ARC-AGI data for unknown reasons; or
- o3-preview was somehow specifically tailored towards ARC-AGI.

The last option seems the most likely to me, especially considering the custom compute tier used in the December evaluation.