This is good context to have. If it's a Schelling point on LW, that's probably a good enough reason to choose it as the term to adopt, although some consideration might be warranted for its adoption in wider communities. That said, I can't think of any other term that would work better for that.
Agreed that having a common term would be really nice, and this is more specific than the very broad LLM agent or AI agent.
But neuro-scaffold feels really wrong. It is not a scaffold made of a neural substance. It is a neural substance with scaffolding around it, or a neural substance that is scaffolded. The word order of neuro-scaffold is backwards.
I don't know how much of a blocker that would be, but for me it feels much better to keep saying scaffolded LLM. I've also wondered about LHLLM for long horizon (agentic) LLM, or ALLMA for agentic LLM (based) architecture.
Those don't feel quite right either, but an acronym expanding on LLM does. They look like they could be quite a mouthful, but for me LLM is now a single thought rather than an expansion to large language model, so they feel fairly compact.