A name for the things that AI companies are building
Neuro-scaffolds
Cole Wyeth writes:
It seems like we actually do not have a good name for the things that AI companies are building, weirdly enough...
This slows down my reasoning, or at least my writing, about the topic, because I have to choose repeatedly from an inadequate list of options, often using different nouns in different places. I do not have a good suggestion. Any ideas?
They liked my suggestion of “neuro-scaffold” and suggested I write a short justification.
Definition
A neuro-scaffold is a composite software architecture with two key components:
Neural core: A generative model, produced via machine learning techniques, that maps prompts to responses. For example, the OpenAI API lets you send prompts to and receive responses from a neural core, such as one of their GPT-* LLMs.
Scaffold: A non-trivial traditional program that maps responses to prompts. Along the way, it might store or retrieve data or computer code, call computer programs, ask for user input, or take any number of other actions.
Crucially, the design of a neuro-scaffold includes a component of the following form:
[... -> (neural core) -> (scaffold) -> (neural core) -> (scaffold) -> ...]
A neuro-scaffold is any program that combines gen AI (including but not limited to LLMs) with additional software that autonomously transforms gen AI outputs into new gen AI prompts. The term “neuro-scaffold” refers to software design, not capabilities or essence.
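The definition can be made concrete with a minimal toy sketch. Everything here is hypothetical: `query_model` is a stand-in for a real neural core (e.g., an LLM API call), and the string protocol (“CALC:”/“DONE:”) is invented purely for illustration. The point is only the shape of the loop: the scaffold autonomously turns each response into a new prompt until the core signals completion.

```python
# Toy sketch of a neuro-scaffold. query_model() is a hypothetical stand-in
# for a neural core; a real scaffold would call an LLM API here.

def query_model(prompt: str) -> str:
    # Stand-in neural core: "asks" for one calculation, then finishes.
    if "RESULT:" in prompt:
        return f"DONE: {prompt.split('RESULT:')[-1].strip()}"
    return "CALC: 6 * 7"

def scaffold(task: str) -> str:
    """Map responses back to prompts until the neural core signals completion."""
    prompt = task
    for _ in range(10):  # bound the [core -> scaffold -> core] loop
        response = query_model(prompt)
        if response.startswith("DONE:"):
            return response.removeprefix("DONE:").strip()
        # Autonomously transform the output into a new prompt (a toy "tool call"):
        expression = response.removeprefix("CALC:").strip()
        prompt = f"{task}\nRESULT: {eval(expression)}"
    raise RuntimeError("loop budget exhausted")

print(scaffold("What is six times seven?"))  # -> 42
```

Note that the scaffold, not the human, feeds the core's output back in as a new prompt; that [... -> core -> scaffold -> core -> ...] segment is what makes this a neuro-scaffold under the definition above.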
The term is meant as a pragmatic way to refer to the 2025 paradigm of what are being referred to as “AI models,” especially reasoning and agent-type models. As technology changes, if “neuro-scaffold” no longer seems obviously apt, I would recommend dropping it and replacing it with something more suitable.
“Neuro-scaffold” is also a term for 3D nerve cell culture. But I think it’s unlikely to cause confusion except for my poor fellow biomedical engineers trying to use neuro-scaffold AI to design neuro-scaffolds for 3D nerve cell culture. Sorry, colleagues!
Rationale
I chose the “-scaffold” suffix because it refers to:
A physical framework: Stabilizing, fixed, but potentially moveable supports on which entities move about to get their work done.
Instructional scaffolding: Tailored support given to a student as they gradually develop autonomous learning strategies.
Automated code generation (“scaffolding”): Where traditional tools generated boilerplate from fixed rules to set up projects, gen AI now dynamically generates boilerplate code as it works to solve problems.
These three terms seem to reflect the kinds of software programs people are building to automate interactions with a general-purpose generative AI model (neural core) to produce certain desired behaviors now often termed “reasoning” or “agentic.”
Neuro-scaffold AI still has “I” for “Intelligence” in the name, but the “AI” part is not essential to the term; it’s just a convenience to make clearer the sort of product I’m talking about. You could say just “neuro-scaffold,” or “neuro-scaffold LLM” to further emphasize the exclusively design-oriented intended meaning.
“Neuro-scaffold” riffs on the term neuro-symbolic AI, which is established jargon. Although “neuro-symbolic AI” also seems potentially apt, I wanted a new term because, at least according to Wikipedia, neuro-symbolic AI seems to refer to a specific combination of capabilities, design, and essence:
Neuro-symbolic AI is a type of artificial intelligence that integrates neural and symbolic AI architectures to address the weaknesses of each, providing a robust AI capable of reasoning, learning, and cognitive modeling.
“Integrates neural and symbolic AI architectures” is a design. “Reasoning, learning and cognitive modeling” are capabilities. They can also be seen as essences, potentially leading to debates about the true nature of “reasoning.”
The only reason to debate somebody’s use of the term “neuro-scaffold” to refer to a product should be if there is a dispute about the design of that product’s software architecture. This is a question that should be resolvable more or less by inspecting the code.
What about “self-prompter?”
“Self-prompting AI” is my strongest alternative to “neuro-scaffold.” One disadvantage of “self-prompting AI” is that it needs the term “AI” to emphasize its mechanical nature, and “AI” can be seen as contentious or as marketing hype.
Dropping “AI” leaves us with “self-prompter,” a term that has been used to refer to devices meant to let a speaker cue themselves during a speech. But I don’t think it’s at risk of becoming confusingly overloaded.
I have a few objections to this term for this use case:
It contains the term “self,” risking the type of philosophical debates I aim to sidestep with “neuro-scaffold.”
It may suggest that the product’s prompts are generated exclusively autonomously by the product itself, which often isn’t the case. A neuro-scaffold must be able to self-prompt, but it doesn’t always have to self-prompt.
It focuses on a behavior or capability, and on a specific aspect of the input, rather than on the design. While “neural core” points to a relatively well-defined family of machine learning architectures and “scaffold” refers to a generic program built around such a neural core, “prompt” is less well defined, and nobody thinks the prompt is a more important part of these architectures than the neural core or the scaffold.
Self-prompting is something a person can do, whereas a person is not and cannot become a neuro-scaffold. I want a term that squarely refers to these non-human products, not to a more general class of activity that humans can participate in.
Self-prompter might be a useful term as well. I just don’t think it’s the best choice for the specific meaning I’m getting at.
Examples and counterexamples
Probably not a neuro-scaffold:
A program that sends prompts via the OpenAI API, gets the direct output of an LLM, and displays the response to a human user, such as a temporary chat on any of the mainstream chatbot interfaces in 2025. Except in exotic cases (e.g., a mind-controlling prompt that reliably influences the user to input further specific prompts), there’s no mechanism to map responses to new prompts, so it’s not a neuro-scaffold. These could be called “LLM interfaces” or “chatbot LLMs.” Chatbots include a neural core, but confine themselves to [(user) -> (program) -> (neural core) -> (program) -> (user)]. I would not call the program a “scaffold,” and would not call the overall design a “neuro-scaffold,” because it has no semblance of autonomous self-prompting.
A program built around some sort of self-calling pure expert system or symbolic AI, with no machine learning component.
Programs that supplement or modify prompts or outputs from LLMs without automatically triggering further queries to the neural core, such as “GPTs,” “memory features,” and “system prompts.”
A program that feeds the output of a random number generator into the seed of a random number generator. This lacks any semblance of machine learning, so it doesn’t contain a neural core.
A traditional program with a user interface, where a human is conceptualized as the “neural core.” Human intelligence isn’t produced via machine learning techniques or anything similar. Key to the concept of a neuro-scaffold is that both the neural core and the scaffold originate from a process that strongly resembles engineering and is in principle amenable to ongoing technological advancement.
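The chatbot counterexample can be sketched in the same toy style. As before, `query_model` is a hypothetical stand-in for a neural core; nothing here is a real API. The contrast with a neuro-scaffold is that the response goes straight back to the human, and the program never autonomously turns it into a new prompt.

```python
# Toy sketch of a plain chatbot interface (NOT a neuro-scaffold).
# query_model() is a hypothetical stand-in for the neural core.

def query_model(prompt: str) -> str:
    return f"echo: {prompt}"  # stand-in neural core

def chatbot_turn(user_message: str) -> str:
    # [(user) -> (program) -> (neural core) -> (program) -> (user)]
    # There is no [core -> scaffold -> core] segment: the program only
    # relays one prompt in and one response out per human turn.
    return query_model(user_message)

print(chatbot_turn("hello"))  # -> echo: hello
```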
Ambiguously a neuro-scaffold:
Sci-fi scenarios in which humans are “programmed” by AI outputs, or in which biomaterials or living organisms are specifically engineered for use as a neural core using some sort of process directly analogous to machine learning and which have an API for I/O.
Probably a neuro-scaffold:
Any of the LLM-based reasoning or agentic models on the market today.