I think this is a good point; however, my expectation is that present-day AGI (LLMs and LLM-hybrid systems) isn’t doing general-purpose search. It’s doing lots of little snippets of things that, if they are search at all, are very highly specialized. And I don’t see why that has to undergo a qualitative rather than quantitative change. So we don’t really want to be able to find only general search; we want to be able to point out little bits of search that get turned on in the appropriate contexts.
I agree that it’s plausible they are doing snippets of highly specialized algorithms that could be viewed as shards[1] of AGI cognition. I would still be very impressed if interpretability could find algorithms of even that level of complexity in such large models. That’s what I was getting at with the word salad of a second sentence I wrote.
Not value shards, to be clear. Rather, something like a specialized form of an algorithm working on restricted inputs, and/or a bunch of heuristics that approximate the full algorithm.