The whole no-one-can-agree-on-what-AGI-is thing is damn true, and a real problem. Cole and I have a joke that it’s not AGMI (Artificial Gary Marcus Intelligence) unless it solves the hard problem of consciousness, multiplies numbers of arbitrary length without error (which humans can’t do reliably even with paper, and certainly not without), and various other things, all at once. A recent post with over 250 karma said that LLMs aren’t AGI because they can’t build billion-dollar businesses, which almost no humans can do, and no humans can do quickly.
As for the most likely way to get AGI, the case is quite strong for LRMs plus additional RL aimed at things like long-term memory and reducing hallucinations, since those systems are, in many ways, nearly there, and there are no clear barriers to them making it the rest of the way.