You’re right that there’s nuance here. The scaling laws involved mean roughly exponential investment buys roughly linear improvement in capability, so yes, progress naturally slows unless you scale up investment dramatically… and we are, in fact, scaling up investment dramatically. GPT-3 is pre-ChatGPT, pre-current paradigm, and GPT-4 is nearly so. So ultimately I’m not sure it makes much sense to compare the GPT-1 through GPT-4 timelines to now. I just wanted to note that we’re not off-trend there.
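To make the “exponential investment → linear improvement” point concrete: the published scaling laws (Kaplan et al. 2020, Hoffmann et al. 2022) fit loss as a power law in compute, L(C) ≈ a·C^(−b). Here’s a minimal sketch of the arithmetic; the constants `a` and `b` are made up for illustration, not from any real fit, but the structure shows why each equal step of improvement costs a roughly constant *multiple* of compute:

```python
# Illustrative power-law scaling curve: loss falls as a power of compute,
# L(C) = a * C**(-b). Constants below are hypothetical, not from a real fit.
a, b = 10.0, 0.05

def compute_for_loss(target_loss: float) -> float:
    """Invert L = a * C**(-b) to get the compute needed for a target loss."""
    return (a / target_loss) ** (1.0 / b)

# Each equal *fractional* improvement in loss costs a constant compute multiplier:
targets = [2.0 * 0.9**k for k in range(4)]  # 10% loss reduction per step
prev = None
for t in targets:
    c = compute_for_loss(t)
    note = "" if prev is None else f"  ({c / prev:.1f}x the previous step)"
    print(f"loss {t:.3f} -> compute {c:.2e}{note}")
    prev = c
```

With these toy numbers, every 10% reduction in loss costs about 8x the compute of the step before it, so steady capability gains require exponentially growing spend, which is exactly the dynamic the comment is describing.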