Since GPT-5.5 turns out to be the same thing as Spud, there likely isn’t currently a Mythos-class model at OpenAI.
The price for GPT-5.5 ($5/$30 per 1M input/output tokens) is about the same as the price for Opus 4.7 ($5/$25), so they are likely in the same weight class. Given the jagged parity in performance between the smaller GPT-5.4 and the bigger Opus 4.6 (with different reasoning token budgets), the current OpenAI recipe might make GPT-5.5 or its upcoming successors stronger than Mythos in some ways, but as a pretrain Mythos probably holds more potential by the end of the year (and might be stronger for now, if GPT-5.5 didn’t yet go all out on RLVR scaling).
At the same time, OpenAI might still train its own Mythos-class model by the end of the year, so that it's ready to go once the GB300 NVL72 buildout can support it at a more reasonable price (such as $10/$50). They could then release it as a replacement for the GPT-5.5 series (which might be at GPT-5.7 or so by that point), the way GPT-5.5 is replacing GPT-5.4 despite the doubling of API price. The introduction of mini and nano variants for GPT-5.4 illustrates how OpenAI probably intends to frame the introduction of ever more expensive models: users who need the models to remain cheap can switch to a smaller variant after a version change, while a model with a given size branding (normal/mini/nano) can get more expensive at some version changes.
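As a rough illustration of the price comparisons above, here is a small sketch of blended per-workload cost. The prices are the ones quoted in this thread; the 3:1 input:output token ratio is an assumption for illustration only, since actual usage mixes vary:

```python
# Blended API cost for a workload, given per-1M-token prices.
# Prices are the ones quoted above; the workload mix is an assumed example.

def blended_cost(input_price, output_price, input_tokens, output_tokens):
    """Dollar cost of a workload at the given $/1M-token input and output prices."""
    return (input_price * input_tokens + output_price * output_tokens) / 1_000_000

# Assumed workload: 1M total tokens at a 3:1 input:output ratio.
workload = (750_000, 250_000)

gpt_5_5 = blended_cost(5, 30, *workload)    # GPT-5.5 at $5/$30
opus_4_7 = blended_cost(5, 25, *workload)   # Opus 4.7 at $5/$25
next_tier = blended_cost(10, 50, *workload) # the hypothetical $10/$50 tier above

print(f"GPT-5.5: ${gpt_5_5:.2f}, Opus 4.7: ${opus_4_7:.2f}, $10/$50 tier: ${next_tier:.2f}")
# → GPT-5.5: $11.25, Opus 4.7: $10.00, $10/$50 tier: $20.00
```

Under this assumed mix, GPT-5.5 and Opus 4.7 land within about 12% of each other ("about the same price"), while the hypothetical $10/$50 tier roughly doubles the blended cost, matching the GPT-5.4 to GPT-5.5 price jump described above.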
I can’t follow the logic. If Spud is about the same price and capability as Opus 4.7, then OpenAI doesn’t have a Mythos-level model?
The reason I previously suspected a Mythos-class model is the Spud rumors. With the GPT-4.5 pretrain, it was plausible that OpenAI gained experience RLVRing an Opus-class model in late 2025, once they got enough GB200 NVL72 working (though not yet enough to serve it as a flagship model). That experience could then be leveraged to start work on a Mythos-class model in early 2026, even before the Opus-class model was released.
But now we know the Spud rumors refer to GPT-5.5 rather than to a Mythos-class model. It also means that Altman's comments on 11 Mar 2026 about training currently underway at the Abilene site very likely refer to Spud. Given the timing, and a claim from SemiAnalysis (25 Apr 2026), that training was specifically RL rather than pretraining.
There could still be a secret Mythos-class model, hence I only said there likely isn’t one. But I think it’s unlikely, since Spud is the first RLVRed Opus-class model OpenAI has released. GPT-4.5 was very likely similar in size (it could even be literally the same pretrain), but it wasn’t RLVRed back then. And OpenAI didn’t have the hardware (in sufficient quantity and good enough shape) to RLVR it until probably late 2025, and possibly had to re-do the pretrain. So they’d start with Spud and move on to a Mythos-class model after that, not the other way around, and they were only just RLVRing Spud in Mar 2026.
A Mythos-class model is likely the next major step. For deployment, it makes sense for OpenAI to take that step once there is enough GB300 NVL72 capacity to rely on exclusively for serving it, so it could still happen this year, in time to compete with the actual Mythos. If Spud was still pretrained on Hopper, further scaling of pretraining is the natural thing that might change for the Mythos-class model compared to Spud, apart from model size.