Language model cognitive architectures (LMCAs) are also called language model agents or scaffold agents. AutoGPT is the best-known example. It and other LMCAs combine a large language model with other components to create a goal-directed, agentic system. They use a large language model as a sort of “cognitive engine” that performs much of the cognitive work, and incorporate other components to make the composite system goal-directed and to fill gaps in, and extend, the LLM’s cognitive abilities.
The term “language model agent” better captures the inherent agency of such a system, while “language model cognitive architecture” better captures the extensive additions to the LLM and the resulting change in function and capabilities. Episodic memory may be one key cognitive system: the vector-based memory systems in LMCAs to date are quite limited, and humans with episodic memory deficits are correspondingly limited in the complexity and time horizon of problems they can solve.
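To make the vector-based memory idea concrete, here is a minimal sketch of the retrieval pattern such systems use: past episodes are embedded as vectors, and the most similar ones are recalled for the current query. This is an illustration, not any particular LMCA’s implementation; the toy bag-of-words embedding stands in for a real embedding model, and the class and method names are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding': a word-count vector.
    A stand-in for a real embedding model, used only for illustration."""
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorMemory:
    """Minimal vector-based episodic memory: store text snippets,
    retrieve the top-k most similar to a query."""
    def __init__(self):
        self.entries = []  # list of (text, vector) pairs

    def store(self, text):
        self.entries.append((text, embed(text)))

    def retrieve(self, query, k=1):
        qv = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

mem = VectorMemory()
mem.store("user asked to summarize the quarterly report")
mem.store("agent fetched stock prices for AAPL")
print(mem.retrieve("what did the user ask about the report?", k=1))
```

The limitation noted above shows up directly in this pattern: retrieval is purely by surface similarity to the query, with no sense of when an episode happened, what caused what, or how episodes relate to each other over a long time horizon.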
LMCAs are arguably a likely candidate for the first agentic, self-aware, and self-improving AGIs.