Serious question… how is this different from baby AGI?
I’d been thinking about that a while ago. It isn’t as though the human brain is one thing that can do nearly everything; it is a thing that determines what category a task falls into and invokes a module that can handle it. The way my brain processes a math question vs. a language question vs. visual processing vs. motion control relies on wholly different systems, often ones that developed independently and at different times, but which interface neatly. My conscious perception often pertains only to the results, not their production; I have no idea how my brain processes a lot of what it processes. And so I figured, well, it does not matter if ChatGPT sucks at math and spatial reasoning, insofar as we have AIs which don’t; if it can recognise its limitations and connect to those AIs for answers, wouldn’t that be the same? And an LLM is well positioned for this: it can reason logically, has extensive knowledge, can speak with humans, can code, can interpret websites.
And isn’t this the same? It runs out of knowledge; it googles it. It needs math; it invokes Wolfram Alpha. It needs to make an image; it invokes DALL-E. It needs your data; it opens your Gmail, your todo app, your cloud drive. It runs into some other problem… it googles which program might fix it… it writes the code to access that program based on prior examples… I saw this as a theoretical path to AGI, but had envisioned it as a process humans would have to set up for it: figuring out whether it needed another AI and which one, and individually allowing each connection only after a careful analysis of whether it was safe. Like, maybe starting by allowing it access to a chess AI, because that seems harmless. Not letting practically anyone connect this AI to practically anything. This is crazy.
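To make the pattern concrete, here is a minimal sketch of that dispatch loop in Python. Everything in it is hypothetical: the classifier, the tool registry, and the stub tools just stand in for the idea of a general controller recognising the category of a task and handing it to a specialised module.

```python
# Hypothetical sketch of the "LLM as router" pattern described above.
# None of these functions are real APIs; in practice the LLM itself
# would do the classifying, and the tools would be real services
# (a search engine, Wolfram Alpha, an image model, ...).

def classify(task: str) -> str:
    """Stub for the controller deciding what kind of task this is."""
    if any(ch.isdigit() for ch in task):
        return "math"
    if task.startswith("draw"):
        return "image"
    return "knowledge"

# Specialised modules the controller can invoke (all stubs).
TOOLS = {
    "math":      lambda task: f"[Wolfram-Alpha-like tool solves: {task}]",
    "image":     lambda task: f"[DALL-E-like tool renders: {task}]",
    "knowledge": lambda task: f"[search tool looks up: {task}]",
}

def dispatch(task: str) -> str:
    """Recognise the category of a task, then invoke the matching module."""
    category = classify(task)
    return TOOLS[category](task)

if __name__ == "__main__":
    for task in ["what is 37 * 91",
                 "draw a cat on a skateboard",
                 "who invented the telescope"]:
        print(dispatch(task))
```

The interesting part isn’t any one tool; it’s that the controller only has to be good at recognising which module to call, not at doing the work itself.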