I suspect Chinchilla’s implied data requirements aren’t going to be that much of a blocker for capability gain. It is an important result, but it’s primarily about the behavior of current backprop-trained, transformer-based LLMs.
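For a sense of the scale involved, here’s a rough back-of-the-envelope sketch, assuming the commonly cited heuristic of roughly 20 training tokens per parameter for compute-optimal training from the Chinchilla paper (the exact constant isn’t from this post, and real fits vary):

```python
# Rough back-of-the-envelope for Chinchilla-style data requirements.
# Assumes the commonly cited ~20 training tokens per parameter heuristic
# for compute-optimal training (Hoffmann et al., 2022); real constants vary.

def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal training tokens for a model with n_params parameters."""
    return n_params * tokens_per_param

for n_params in (70e9, 500e9, 1e12):
    tokens = chinchilla_optimal_tokens(n_params)
    print(f"{n_params / 1e9:>6.0f}B params -> ~{tokens / 1e12:.1f}T tokens")
```

Under that heuristic, a 70B model already wants on the order of 1.4T tokens, and the implied requirements grow linearly from there, which is where the worry about running out of data comes from.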
The data inefficiency of many architectures was known before Chinchilla, but the industry worked around it because it wasn’t yet a bottleneck. After Chinchilla, it has become one of the largest architectural optimization targets. Given the increase in focus and the relative infancy of the research, I would guess the next two years will see the picking of some very juicy low-hanging fruit. There are a lot of options floating nearby in conceptspace and there is a lot of room to grow; I’d be surprised if data limitations still feel as salient in 2025.