A few counterpoints (please note that I’m definitely not an expert on AI, so take with a grain of salt):
There seems to have been a lot more progress recently. I suspect that part of this is due to DeepMind and OpenAI having parallelised their operations. Instead of one big release per year, they seem to have multiple projects producing a payoff each year.
Some kinds of progress become much easier once you have access to a sufficiently powerful system. Convolutions, as opposed to hardcoded visual features, only became viable once we had systems powerful enough to simultaneously learn the features and how to combine them; the solution was pretty much "just let gradient descent handle it automatically". My expectation is that there are all kinds of schemes for improving AI that wouldn't have worked in the past, but which are already viable or will become viable soon. Similarly, Gato seems to have pretty much been "train an agent to imitate the answers of a bunch of expert systems"; this approach likely wouldn't have worked so well in the past due to catastrophic forgetting, but once you have a powerful enough system, it seems to just work.
Even if we start hitting scaling limits, many of the factors that have spurred the development of AI will remain, including: AI systems having become commercially valuable, the incredible amount of talent being drawn into the field, and the abundance of tools that make working with AI easier. So even if the pace slows somewhat, we should expect progress to remain above the previous baseline.
DeepMind has hundreds of researchers, and OpenAI also has several groups working on different things. That hasn’t changed much.
Video generation will become viable and a dynamic visual understanding will come with it. Maybe then robotics will take off.
Yeah, I think there is so much work going on that it is not terribly unlikely that by the time the scaling limit is reached, the next steps will already exist and will only need to be adopted by the big players.