That was a good start, but then you appear to hyper-focus on the “LLM” part of a “blogging system”. In a strict sense the titular question is like asking “when will cerebellums become human-level athletes?”.
Likewise, one could arguably frame this as a problem about insufficient “agency,” but it is mysterious to me where the needed “agency” is supposed to come from.
Indeed. In a way, the real question here is “how can we orchestrate a bunch of LLMs and other stuff to have enough executive function?”. And, perhaps, whether it is at all possible to reduce other functions to language processing with extra steps.
Bruh, from the Agancé region of France of course, otherwise it’s a sparkling while loop.
Nice, I might have to borrow that Agancé joke. It’s a branching nested set of sparkling while loops, but yeah.
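Concretely, the branching nested version really is about this simple. A toy sketch, assuming a hypothetical `llm(prompt) -> str` call and a made-up DONE/SPAWN protocol rather than any real API:

```python
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real model call here")

def run(goal: str, max_steps: int = 50) -> list[str]:
    stack = [goal]       # pending goals: the "branching" part
    transcript = []      # results so far, to keep the loops coherent
    steps = 0
    while stack and steps < max_steps:   # outer (sparkling) while loop
        current = stack.pop()
        while steps < max_steps:         # nested while loop for one goal
            steps += 1
            reply = llm(
                f"Goal: {current}\nDone so far: {transcript}\n"
                "Answer 'DONE: <result>' or 'SPAWN: <subgoal>' to branch."
            )
            if reply.startswith("DONE:"):
                transcript.append(reply[5:].strip())
                break                    # goal finished, pop the next one
            if reply.startswith("SPAWN:"):
                stack.append(reply[6:].strip())  # branch: queue a subgoal
    return transcript
```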
And episodic memory to hold it together, and to learn new strategies for different sorts of executive function:
Capabilities and alignment of LLM cognitive architectures
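A hedged sketch of the episodic-memory half, with toy word-overlap retrieval standing in for whatever embedding search a real system would use:

```python
# Append episodes as they happen; recall the few most relevant ones and
# splice them into the next prompt so the loops "hold together" over time.
from dataclasses import dataclass, field

@dataclass
class EpisodicMemory:
    episodes: list[str] = field(default_factory=list)

    def record(self, episode: str) -> None:
        self.episodes.append(episode)

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = set(query.lower().split())
        scored = sorted(
            self.episodes,
            key=lambda e: len(q & set(e.lower().split())),
            reverse=True,
        )
        return scored[:k]

# Illustrative episodes, not real data:
memory = EpisodicMemory()
memory.record("Tried outlining the post first; the draft came out tighter.")
memory.record("Publishing without an edit pass led to factual errors.")
print(memory.recall("how should I draft the next post?"))
```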
And if we get some RL helping out, that makes it easier and requires less smarts from the LLM that’s prompted to act as its own thought manager.
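Even a toy epsilon-greedy bandit over strategy prompts would offload some of that executive load; the strategy names and the scalar reward here are made-up placeholders:

```python
import random

STRATEGIES = ["outline-first", "draft-then-edit", "research-heavy"]

class StrategyBandit:
    def __init__(self, eps: float = 0.1):
        self.eps = eps
        self.value = {s: 0.0 for s in STRATEGIES}  # running reward estimates
        self.count = {s: 0 for s in STRATEGIES}

    def pick(self) -> str:
        # Mostly exploit the best-looking strategy, sometimes explore.
        if random.random() < self.eps:
            return random.choice(STRATEGIES)
        return max(STRATEGIES, key=self.value.get)

    def update(self, strategy: str, reward: float) -> None:
        self.count[strategy] += 1
        n = self.count[strategy]
        # Incremental average of observed rewards.
        self.value[strategy] += (reward - self.value[strategy]) / n

bandit = StrategyBandit()
s = bandit.pick()                 # choose a strategy for this episode
# ... run the loop with strategy s, score the outcome somehow ...
bandit.update(s, reward=1.0)      # reinforce what worked
```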