Another neat example of mundane LLM utility, by Tim Gowers on Twitter:
I crossed an interesting threshold yesterday, which I think many other mathematicians have been crossing recently as well. In the middle of trying to prove a result, I identified a statement that looked true and that would, if true, be useful to me. 1⁄3
Instead of trying to prove it, I asked GPT5 about it, and in about 20 seconds received a proof. The proof relied on a lemma that I had not heard of (the statement was a bit outside my main areas), so although I am confident I’d have got there in the end, 2⁄3
the time it would have taken me would probably have been of order of magnitude an hour (an estimate that comes with quite wide error bars). So it looks as though we have entered the brief but enjoyable era where our research is greatly sped up by AI but AI still needs us. 3⁄3
I’ve seen lots of variations of this anecdote by mathematicians, but none by Fields medalists.
Also, that last sentence singles Gowers out among top-tier mathematicians, as far as I can tell, for thinking that AI will soon make him obsolete at the thing he does best. Terry Tao and Kevin Buzzard, by contrast, don't give me this impression at all, as excited and engaged as they are with AI x math.