Yeah, I didn’t mention this explicitly, but I think this is also likely to happen! It could look something like “the model can do steps 1-5, 6-10, 11-15, and 16-20 in one forward pass each, but it still writes out 20 steps.” Presumably most of the tasks we use reasoning models for will be too complex to do in a single forward pass.
Good point! My thinking is that the model may have a bias for the CoT to start with some kind of obvious “planning” behavior rather than just a vague phrase. Either planning to delete the tests or (futilely) planning to fix the actual problem meets this need. Alternatively, it’s possible that the two training runs resulted in two different kinds of CoT by random chance.