In the same way that Claude Code builds Claude Code, Codex now builds Codex.
I am curious: when humans build AIs, they probably do similar things, because they draw from the same science, the same general knowledge, and so on. When AIs start coding themselves, should we expect them to diverge?
Maybe not, because they are still trained on the same data. Maybe yes, if they can somehow create and accumulate knowledge for themselves that they no longer share. (For example, if they design and run their own experiments and learn from them.)