For what it’s worth I’ve personally found that Claude Code with Opus 4.6 and Codex with GPT 5.4 are very similar products. I haven’t done a very deep dive, but I’ve used them side by side for a few projects. Certainly the difference between them feels much smaller than the difference from models that are a few months older.
Yeah, that’s what I hear from people I actually know. I think people may have been under the mistaken impression that I think Claude is significantly better than Codex at coding? I never said that.
(Wei Dai was suggesting that current sentiment for Claude Code and Codex seemed to be comparable, in response to Vladimir Nesov mentioning “OpenAI’s current failure to have a strong offering similar to Claude Code.”)
Yeah, I was replying narrowly to the point about Reddit comment threads. Perhaps I should have made that clear.
Yeah, I guess I was under that impression: if Claude is similar to Codex at coding while being a larger, more expensive model (which seems likely based on API costs and Nesov’s analysis of their training hardware), then Anthropic has no clear lead (or would be behind if not for Mythos). So I thought your claim of their lead was partly based on an impression (similar to Nesov’s) that Claude is significantly better than Codex at coding.
(And I think a lot of people were probably under this misimpression at some point, including me, due to seeing a lot of talk about Claude Code on X around December, much more than about Codex, which in retrospect I have to attribute to a successful Anthropic marketing campaign.)