There’s been incremental improvement and various quality-of-life features (more pleasant chatbot personas, tool use, multimodality, gradually better math/programming performance) that make the models useful to gradually bigger demographics, et cetera.
But it’s all incremental, no jumps like 2-to-3 or 3-to-4.
I see, thanks. Just to make sure I’m understanding you correctly, are you excluding the reasoning models, or are you saying there was no jump from GPT-4 to o3? (At first I thought you were excluding them in this comment, until I noticed the “gradually better math/programming performance.”)
I think GPT-4 to o3 represents non-incremental narrow progress, but only, at best, incremental general progress.
(It’s possible that o3 does “unlock” transfer learning, or that o4 will do that, etc., but we’ve seen no indication of that so far.)