Hooray, my prediction six years ago is now unambiguously correct:
“I assert that at some point in the next two years, there will exist an AI engine which when given the total body of human work in mathematics and a small prompt (like the one used in gpt-2), is capable of generating mathematical works that humans in the field find interesting to read, provided of course that someone bothers to try.”
I’m going to go celebrate by saying something else that the people around me think is dumb.
Edit for the tags: I think I was right two years ago because the (low) threshold I set was exceeded with GPT-3 or 3.5 (which was in scope for two years), but there was still room for debate. I assert that six years on, there’s no ambiguity about whether the threshold was crossed.
The thread that comment came from was contentious; I got a lot of pushback here and elsewhere during the early GPT days for my opinion that transformers would be able to output interesting math.
Two years later, when 3.5 was out, I felt that my 'interesting' threshold had been crossed and that I had been technically correct, but I was still hearing the same arguments. I'm happy that six years on, we have proof that my assessment of the potential of transformers, which, to be clear, was absolutely viewed as 'evidence that this person is crazy in a way that makes me want to avoid him', was close to accurate.
From a meta perspective, this post is probably not helping me appear sane.
I thought you were making a joke, but your edit confused me.
It shows that you were both qualitatively right and wrong about the numbers.