We have no idea whether or not GPT-4 is self-improving. From the outside, it’s possible that GPT-4 already produces training data that are used for fine-tuning GPT-4.
Given how little OpenAI has said about the steps that led to training GPT-4, I doubt they would tell us if they used GPT-5 in a way where it self-improves.
Your bet seems to assume that there will be public knowledge about whether or not GPT-5 is self-improving when it gets released. It’s unclear to me why we would expect that to be public knowledge.
Yes, that is true; it could be difficult to resolve. One way it might resolve is if the improvement is sufficiently rapid and dramatic that it is obvious even from the outside. Another is if they decide to come out and say that they have done this. Or if ARC tests for this and declares it to be the case. Or perhaps some other as-yet-unforeseen way.
But yeah, there’s a chance the market has to resolve N/A for being indeterminable.