I think this might be a result of the o-series being trained in a non-chat setup for most of the CoT RL phase and then ham-fistedly finetuned at the very end so it could go into ChatGPT. That would leave the models kind of bad at chat, so o3 gets confused when the conversation has a lot of turns. Retraining it to be good at multi-turn chat with separate reasoning traces would probably be super expensive and the juice wouldn't be worth the squeeze. (this is just a guess)