Pardon the half-sneering tone, but old nan can’t resist: « Oh, my sweet summer child, what do you know of fearing noob gains? Fear is for AI winter, my little lord, when the vanishing gradient problem was a hundred feet deep and the ice wind came howling out of the funding agencies, cutting every budget, dispersing the students, freezing the sparse spared researchers. »
Seriously, three years is just one data point, and you want to draw conclusions about the rate of change! I suspect you would agree that 2016-2022 saw more gains than 2010-2016, and not because the latter were boring times. I disagree that finding out what big transformers could do over the last three years was not a big deal, or even that it was low-hanging fruit. I suspect it looked like low-hanging fruit to you because of the tools you had access to, and I read your post as a deep and true intuition that the next step will demand different tools (I vote for: « clever inferences from functional neuroscience & neuropsychology »). In any case, welcome to LessWrong and thanks for your precious input! (even if old nan was amazed you were expecting even faster progress!)
I am a young, bright-eyed first-year PhD student. I imagine if you knew how much of a child of summer I was, you would sneer on sheer principle, and it would be justified. I have seen a lot of people expecting eternal summer, and this is why I predict a chilly fall. Not a full winter, but a slowdown as expectations come back down to reality.
I wish I had been wise enough at your age to post my gut feelings on the internet so that I could better update later. Well, the internet did not exist then, but you get the idea.
One question, following gwern’s reformulation: do you agree that, in the past, technical progress in ML almost always came first, before fundamental understanding? In other words, is the crux of your post that we should no longer hope for practical progress without truly understanding why what we do should work?