A name for the things that AI companies are building
Neuro-scaffolds
It seems like we actually do not have a good name for the things that AI companies are building, weirdly enough...
This actually slows down my reasoning or at least writing about the topic, because I have to choose from this inadequate list of options repeatedly, often using different nouns in different places. I do not have a good suggestion. Any ideas?
They liked my suggestion of neuro-scaffold and suggested I write a short justification.
Definition
A neuro-scaffold means a composite software architecture with two key components:
Neural core: A generative model, produced via machine learning techniques, that maps prompts to responses. For example, the `openai` API lets you send prompts to and get responses from a neural core, such as one of OpenAI's GPT-* LLMs.
Scaffold: A non-trivial traditional program that maps responses to prompts. Along the way, it might store or retrieve data or computer code, call computer programs, ask for user input, or take any number of other actions.
Crucially, the design of a neuro-scaffold includes a component of the following form:
[... → (neural core) → (scaffold) → (neural core) → (scaffold) → ...]
A neuro-scaffold is any program that combines gen AI (including but not limited to LLMs) with additional software that autonomously transforms gen AI outputs into new gen AI prompts. The term “neuro-scaffold” refers to software design—not capabilities or essence.
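As a concrete illustration, here is a minimal sketch of that loop in Python. Everything here is invented for the example: the neural core is stubbed out with a hard-coded function (a real system would call a model API), and the `TOOL:`/`RESULT:`/`FINAL:` prefixes are a made-up protocol, not any real model's output format.

```python
def neural_core(prompt: str) -> str:
    """Stand-in for a generative model (the real thing would be an API call)."""
    if "RESULT: 7" in prompt:
        return "FINAL: 7"
    return "TOOL: add 3 4"

def run_tool(command: str) -> str:
    """A toy tool the scaffold can run on the model's behalf."""
    _, a, b = command.split()
    return f"RESULT: {int(a) + int(b)}"

def scaffold(task: str, max_steps: int = 5) -> str:
    """Maps responses back into prompts until the neural core says it's done."""
    prompt = task
    for _ in range(max_steps):
        response = neural_core(prompt)  # (scaffold) -> (neural core)
        if response.startswith("FINAL:"):
            return response.removeprefix("FINAL:").strip()
        # (neural core) -> (scaffold): the response becomes the next prompt
        prompt = run_tool(response.removeprefix("TOOL:").strip())
    raise RuntimeError("step budget exhausted")

print(scaffold("What is 3 + 4?"))  # -> 7
```

The defining feature is the `prompt = ...` line inside the loop: the scaffold, not a human, turns each model response into the next prompt.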
The term is meant as a pragmatic way to refer to the 2025 paradigm of what are being referred to as “AI models,” especially reasoning and agent-type models. As technology changes, if “neuro-scaffold” no longer seems obviously apt, I would recommend dropping it and replacing it with something more suitable.
“Neuro-scaffold” is also a term for 3D nerve cell culture. But I think it’s unlikely to cause confusion except for my poor fellow biomedical engineers trying to use neuro-scaffold AI to design neuro-scaffolds for 3D nerve cell culture. Sorry, colleagues!
Rationale
I chose the “-scaffold” suffix because it refers to:
A physical framework: Stabilizing, fixed, but potentially moveable supports on which entities move about to get their work done.
Instructional scaffolding: Tailored support given to a student as they gradually develop autonomous learning strategies.
Automated code generation: Instead of generating boilerplate in response to fixed rules to set up projects, gen AI is now dynamically generating boilerplate code as it works to solve problems.
These three terms seem to reflect the kinds of software programs people are building to automate interactions with a general-purpose generative AI model (neural core) to produce certain desired behaviors now often termed “reasoning” or “agentic.”
“Neuro-scaffold AI” still has “I” for “Intelligence” in the name, but the “AI” part is not a key part of the term. It’s just a convenience to make it clearer what sort of product I’m talking about. You could just say “neuro-scaffold,” or “neuro-scaffold LLM” to further emphasize the exclusively design-oriented intended meaning.
“Neuro-scaffold” riffs on the term neuro-symbolic AI, which is established jargon. Although “neuro-symbolic AI” also seems potentially apt, I wanted a new term because, at least according to Wikipedia, neuro-symbolic AI seems to refer to a specific combination of capabilities, design, and essence:
Neuro-symbolic AI is a type of artificial intelligence that integrates neural and symbolic AI architectures to address the weaknesses of each, providing a robust AI capable of reasoning, learning, and cognitive modeling.
“Integrates neural and symbolic AI architectures” is a design. “Reasoning, learning and cognitive modeling” are capabilities. They can also be seen as essences, potentially leading to debates about the true nature of “reasoning.”
The only reason to debate somebody’s use of the term “neuro-scaffold” to refer to a product should be if there is a dispute about the design of that product’s software architecture. This is a question that should be resolvable more or less by inspecting the code.
What about “self-prompter?”
“Self-prompting AI” is my strongest alternative to “neuro-scaffold.” One disadvantage of “self-prompting AI” is that it needs the term “AI” to emphasize the mechanical nature of it, and the term “AI” can be seen as contentious or as marketing hype.
Dropping “AI” leaves us with “self-prompter,” a term that has been used to refer to devices meant to help a speaker cue themselves during a speech. But I don’t think it’s at risk of becoming confusingly overloaded.
I have a few objections to this term for this use case:
It contains the term “self,” risking the type of philosophical debates I aim to sidestep with “neuro-scaffold.”
It may suggest that the prompts to the product exclusively are generated autonomously by the product, which often isn’t the case. A neuro-scaffold must be able to self-prompt, but it doesn’t have to always self-prompt.
It focuses on a behavior or capability, and on a specific aspect of the input, rather than on the design. While “neural core” points to a relatively defined family of machine learning architectures and “scaffold” refers to a generic program built around such a neural core, the term “prompt” is not as well defined, and nobody thinks that the prompt is a more important part of these architectures than the neural core or scaffold.
Self-prompting is something a person can do, whereas a person is not and cannot become a neuro-scaffold. I want a term that squarely refers to these non-human products, not to a more general class of activity that humans can participate in.
Self-prompter might be a useful term as well. I just don’t think it’s the best choice for the specific meaning I’m getting at.
Examples and counterexamples
Probably not a neuro-scaffold:
A program that sends prompts via the `openai` API, gets the direct output of an LLM, and displays the response to a human user, such as a temporary chat on any of the mainstream chatbot interfaces in 2025. Except in exotic cases (e.g., a mind-controlling prompt that reliably influences the user to input further specific prompts), there’s no mechanism to map responses to new prompts, so it’s not a neuro-scaffold. These could be called “LLM interfaces” or “chatbot LLMs.” Chatbots include a neural core, but confine themselves to [(user) → (program) → (neural core) → (program) → (user)]. I would not call the program a “scaffold,” and would not call this overall design a “neuro-scaffold,” because it has no semblance of autonomous self-prompting.
A program that has some sort of self-calling pure expert system or symbolic AI, but with no machine learning component.
Programs that supplement or modify prompts or outputs from LLMs without automatically triggering further queries to the neural core, such as “GPTs,” “memory” features, and “system prompts.”
A program that feeds the output of a random number generator into the seed of a random number generator. This lacks any semblance of machine learning, so it doesn’t contain a neural core.
A traditional program with a user interface, where a human is conceptualized as the “neural core.” Human intelligence isn’t produced via machine learning techniques or anything similar. Key to the concept of a neuro-scaffold is that both the neural core and the scaffold originate from a process that strongly resembles engineering and is in principle amenable to ongoing technological advancement.
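For contrast, the plain chatbot pattern from the first counterexample can be sketched the same way. The `neural_core` stub below stands in for a real model call; the key point is that every prompt originates from the human, and the program never turns a response into a new prompt.

```python
def neural_core(prompt: str) -> str:
    """Stand-in for a generative model call (e.g., an LLM behind an API)."""
    return f"Echo: {prompt}"

def chat(user_turns: list[str]) -> list[str]:
    """(user) -> (program) -> (neural core) -> (program) -> (user), repeated."""
    responses = []
    for turn in user_turns:
        responses.append(neural_core(turn))  # each prompt comes from the user
    return responses  # displayed back to the user, never fed back as prompts

print(chat(["Hello", "Tell me a joke"]))
```

There is a loop here, but the human closes it on every turn, so nothing in this design qualifies as a scaffold in the sense defined above.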
Ambiguously a neuro-scaffold:
Sci-fi scenarios in which humans are “programmed” by AI outputs, or in which biomaterials or living organisms are specifically engineered for use as a neural core using some sort of process directly analogous to machine learning and which have an API for I/O.
Probably a neuro-scaffold:
Any of the LLM-based reasoning or agentic models on the market today.
I like the practice of pointing out things that need names and attempting to name them. Good stuff!
To rephrase the definition pointed to by “neuro-scaffold” to see if I understood, it is “an integration of ML models and non-ML computer programs that creates nontrivial capabilities beyond those of the ML model or computer program”?
Naively I would refer to this as an “ML deployment”, but the “…nontrivial capabilities beyond…” aspect is important and not implied by “ML deployment”, so “ML integration” might be better; but both are clunky, and “ML” can refer to many data science and AI techniques other than neural nets, so I think we’re stuck with the “neuro” terminology. Although, I think I would prefer it if people called them “multi-layer perceptrons” to disambiguate them from the biological neurons they were inspired by. “Artificial neural networks” would also be an improvement. “MLP” or “ANN”.
I think I dislike “scaffold” because it implies a temporary structure used for building or repairing another structure, and I don’t think that represents the programs the ANNs are integrated with well. The program might be temporary, but it might not be. So it could perhaps be called an “integrated ANN system” or “integrated MLP system”, or, acronymized, an “IANN” or “IMLP”. But these suggestions seem clunky. They don’t seem as easy to say or understand as “neuro-scaffold”, so “neuro-scaffold” is probably a better term despite the issues I have with the words “neuro” and “scaffold”.
“Scaffold” sounds very natural to me, because it’s been common parlance on LessWrong for at least a year. A while ago, I Googled “LLM scaffold” and was surprised to find that all of the top results are LessWrong-adjacent. Before that, I just assumed everyone in AI called it a “scaffold,” but “AI agent” is actually more common. Maybe it didn’t catch on here because it would cause too much confusion when we talk about “agency” and “agent foundations.”
IMO, “neuro-scaffold” is clearer than the existing options and pretty easy to say. I strong-upvoted the post because I think having a Schelling point for what to call these things would be good. (Even if it may not be the very first thing I’d pick—for instance, “neural scaffold” sounds slightly less neologism-y to me.)
The term I’ve seen on the software industry user side for that thing is “harness”, but “harnessed AI” sounds like something else (horseGPT?)
This is good context to have. If it is a Schelling point on LW, that’s probs a good enough reason to choose it as the term to adopt, although some consideration might be warranted for its adoption in wider communities; but I can’t think of any other term that would work better for that.
Agreed that having a common term would be really nice, and this is more specific than the very broad LLM agent or AI agent.
But neuro-scaffold feels really wrong. It is not a scaffold made of a neural substance. It is a neural substance with a scaffolding around it, or a neural substance that is scaffolded. The tense of neuro-scaffold is wrong.
I don’t know how much of a blocker that would be, but for me it feels much better to continue saying scaffolded LLM. I’ve also wondered about LHLLM for long horizon (agentic) LLM or ALLMA for agentic LLM (based) architecture.
Those don’t feel quite right either. But an acronym expanding on LLM does. They do look like they could be quite a mouthful, but for me LLM is now one thought, not an expansion to large language model, so those feel fairly compact.
I think “AI agent” is here to stay
I think it is. But it’s very broad, so there might be room for one or more specific terms inside that category.
Nathan Labenz uses agentic LLM or agentic AI to distinguish a more general agent from the very watered-down use of “LLM agent” that currently usually refers to extremely limited and hand-crafted systems for very specific narrow workflows.
‘Agentic’ or ‘agent’ is getting a fair bit of currency (‘agentic AI workflow’, ‘LM agent’, ‘AI agent’, etc.)
I think that’s fine, and basically accurate. Sometimes it means you need to qualify how autonomous or bounded/unbounded the looping is.
‘Model’ really gets my goat, is a terrible, already hopelessly conflated term, and should be banned for talking about NNs in almost all contexts. (I have been dragged kicking and screaming into using this sometimes and I’m still sad about it.) ‘Reasoning model’ is no better. ‘Foundation model’ and ‘language model’ are OK, but only if actually talking about foundation and/or language models per se, absent the various finetuning and scaffoldings that are involved in actual AI systems. (‘Reward model’ and ‘world model’ and such are very reasonable uses.)
I’m sorry to say that ‘neuro-scaffold’ isn’t going to take off, and I think that’s fine. ‘Scaffold’ is very useful on its own, but ‘neuro-scaffold’ is a mouthful and also doesn’t really connote the specific thing you’re meaning to invoke, which is the loopiness and the connection to actuators.
I’ve generally seen “harness” when referencing additional software, including feedback loops, that’s been created to try to get an LLM to complete a complicated task that it otherwise couldn’t. “AI Agent” is the marketing term, though I like that less because it’s much more indistinct.