You could be right about the limit based on overall compute applying to other approaches to AI just as much as to LLMs. Speculating about the future of AI is always a little frustrating because ultimately we won’t know how to make AGI/ASI until we have it (and can’t even agree on how we will know it when we see it). The way I approach the problem is by looking at what we do know—at this point in time, we only know of one system in existence that we can all agree meets the definition of “general intelligence”, and that is the human brain. Because of how little we still understand about how intelligence actually works, I think the most likely path to AGI—resting on the fewest assumptions about things we don’t know—is a “brain-like AGI”. That’s basically Steven Byrnes’s view and I think his arguments are very compelling. If you accept that view, then I think we end up with something like your scenario anyway, at least for a while until the brain-like AGI comes to fruition.
The whole no-one-can-agree-on-what-AGI-is thing is damn true, and a real problem. Cole and I have a joke that it’s not AGMI (Artificial Gary Marcus Intelligence) unless it solves the hard problem of consciousness, multiplies numbers of arbitrary length without error (which humans can’t do reliably even with paper, and certainly can’t do without), and does various other things, all at once. A recent post with over 250 karma said that LLMs aren’t AGI because they can’t build billion-dollar businesses, which almost no humans can do, and no humans can do quickly.
As for the most likely way to get AGI, the case is quite strong for LRMs with additional RL aimed at things like long-term memory and reducing hallucinations, since those systems are, in many ways, nearly there, and there are no clear barriers to them making it the rest of the way.