There is a notable slowdown in that progress; however, we should note the following, so that we don’t overinterpret it:
A lot of the gains in this particular competition come from adaptations of the pre-existing research literature. It’s not clear how much not-yet-adopted acceleration remains in that literature, and it might be quite a lot; but the pre-existing literature is, by definition, a fixed-size resource, and its use is subject to saturation. A “true software intelligence explosion mode” would presumably include the creation of novel research, not just the re-use of pre-existing research.
Organizationally, the big slowdown around the 3-minute mark coincides with the project organizer being hired by OpenAI and then no longer contributing (and, for some time, not even reviewing record-breaking pull requests). So for a while the project looked dormant. It is now active again, but it’s difficult to say whether participation is back to its pre-slowdown level.
One thing which should not be counted as “pre-existing” literature is the Muon optimizer, which is the brainchild of the project organizer, developed in collaboration with his colleagues, and which is probably the most exciting development in the space of gradient-based optimizers since the invention of Adam in 2014 (see e.g. https://jeremybernste.in/writing/deriving-muon for a more in-depth look, and also the Kimi K2 paper, https://arxiv.org/abs/2507.20534, in particular its remarkable learning curve in Figure 3, page 5). But an event of this magnitude is not part of a series: it is not an accident that this improvement came from the project organizer rather than from the “field”.
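For readers unfamiliar with Muon, the core idea can be sketched in a few lines: instead of rescaling the gradient coordinate-wise the way Adam does, Muon treats each weight matrix's momentum-averaged gradient as a matrix and approximately orthogonalizes it via a Newton-Schulz iteration before taking the step. The NumPy sketch below is an illustration, not the reference implementation; the quintic coefficients are the ones popularized in the public Muon write-ups, and the function names, step counts, and hyperparameter values here are illustrative assumptions.

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=5, eps=1e-7):
    """Approximately orthogonalize matrix G (push its singular values toward 1)
    using a quintic Newton-Schulz iteration, as Muon does for its update.
    Coefficients follow the publicly documented Muon iteration."""
    a, b, c = 3.4445, -4.7750, 2.0315
    # Normalizing by the Frobenius norm bounds the spectral norm by 1,
    # which keeps the iteration stable.
    X = G / (np.linalg.norm(G) + eps)
    transposed = G.shape[0] > G.shape[1]
    if transposed:
        X = X.T  # iterate on the smaller Gram matrix side
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

def muon_step(W, G, M, lr=0.02, momentum=0.95):
    """One illustrative Muon update for a single weight matrix W:
    accumulate momentum M from gradient G, orthogonalize, then step."""
    M = momentum * M + G
    W = W - lr * newton_schulz_orthogonalize(M)
    return W, M
```

The orthogonalization step is what distinguishes Muon from element-wise optimizers: after it, the update has roughly uniform "strength" along every direction of the weight matrix, regardless of how lopsided the raw gradient's singular values were.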
So, yes, it is possible that this curve points to the presence of some saturation effects, but it’s difficult to be certain.