Why not think that the new paradigm/insight would in practice be much more continuous? E.g., you first invent a shitty version of it which creates some improvement on existing methods, then you make a somewhat better version, and so on.
I think there are sometimes large breakthroughs which come all within a small period of time (e.g. a month), but usually things are more incremental. For instance, “reasoning models” was arguably the largest publicly known breakthrough of the last 1.5 years, and it seems very continuous. (Note that even as of November 2023, OpenAI had some prototype of the relevant thing, and this was long before o1 came out.)
Things are also probably smoothed out somewhat because new improvements are first tested at smaller scale and companies only run big training runs periodically. (Though this can make things jumpier in some ways.)
I think we should put a bit of weight on “a big algorithmic breakthrough that occurs over the course of a month leads to very powerful AI, starting from well below that level” (maybe like 10%) and more weight on “very powerful AI will emerge at a point when some shift in paradigm/algorithms invented within a year has made progress substantially faster for some potentially short period” (maybe like 40%, though I feel quite uncertain).