But, on one hand, he is saying that proper methodology is important and expects it to be in place for next year's competition:
"First of all, we would like to see pre-registration, so that we don't end up learning only about successes (and, more generally, cherry-picking good results while omitting negative results)."

But most of his specific methodological issues are inapplicable here, unless OpenAI is lying: they didn't rewrite the questions, provide tools, intervene during the run, or hand-select answers.

I don't have a theory of Tao's motivations, but if the post I linked is read as a response to OpenAI's result (he didn't say it was, but he didn't say it wasn't, and the timing makes that the obvious interpretation), then raising those issues is bizarre.
He is trying to steer the field towards generally better practices. I don't think this is a criticism of this particular OpenAI result so much as an attempt to change the standards.
That said, he is likely to have some degree of solidarity with the IMO's viewpoint and to share some of their annoyance with the timing of all this; see, e.g., https://www.reddit.com/r/math/comments/1m3uqi0/comment/n40qbe9/