A Proposal for Structured Cognitive Substrates Beneath Language Models

In this post, I’m proposing a cognitive architecture for AGI that positions large language models (LLMs) not as general agents but as interfaces: surfaces resting atop deeper, structured systems of cognition. This work stems from ongoing frustration with monolithic, LLM-centered architectures and a desire to build systems that reason, adapt, and align through internal structure rather than scale alone.

I call this framework the Integrated Substrate Model (ISM). It consists of five interconnected layers (a minimal sketch in code follows the list):

  • a language interface (LLM),
  • a reasoning/world-model layer,
  • memory and filtering partitions,
  • an alignment/value modulation layer, and
  • a supervisory meta-cognitive module (the “Noetikon Layer”).
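To make the layering concrete, here is a minimal Python sketch of the five layers as components passing a shared state through one pass of the stack. All names here (Layer, step, the state dict, the stubbed behaviors) are my own illustrative inventions, not interfaces from the paper:

```python
from typing import Any, Protocol


class Layer(Protocol):
    """Illustrative interface: every ISM layer maps a shared state to a new state."""
    def step(self, state: dict[str, Any]) -> dict[str, Any]: ...


class LanguageInterface:
    """LLM surface: parses input into the shared state, renders output from it."""
    def step(self, state):
        state["parsed"] = state.get("utterance", "")
        return state


class WorldModel:
    """Reasoning/world-model layer: derives beliefs from parsed input (stubbed)."""
    def step(self, state):
        state["beliefs"] = {"claim": state["parsed"]}
        return state


class Memory:
    """Memory/filtering partition: records and filters prior states (stubbed)."""
    def __init__(self):
        self.log = []

    def step(self, state):
        self.log.append(dict(state))  # shallow snapshot of each pass
        return state


class ValueModulation:
    """Alignment/value layer: annotates the state with value judgments (stubbed)."""
    def step(self, state):
        state["value_ok"] = True
        return state


class Noetikon:
    """Supervisory meta-cognitive module: inspects the other layers' outputs."""
    def step(self, state):
        state["supervised"] = True
        return state


def ism_pass(state, layers):
    """One pass through the substrate stack; language is just the outermost layer."""
    for layer in layers:
        state = layer.step(state)
    return state


layers = [LanguageInterface(), WorldModel(), Memory(), ValueModulation(), Noetikon()]
result = ism_pass({"utterance": "hello"}, layers)
```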

These layers are built atop what I describe as an Anastomotic Substrate Architecture (ASA): a design principle, named for the interconnecting (anastomotic) channels found in biological networks, in which independent substrates interact through feedback and flow.
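As a rough illustration of the anastomotic part (not a mechanism specified in the paper), one can read the wiring as a fixed-point loop over the same layer objects sketched above: substrates keep exchanging signals through the shared state until it stops changing. The convergence test and the max_rounds cap here are my assumptions, chosen only to keep the sketch terminating:

```python
def anastomotic_loop(substrates, state, max_rounds=10):
    """Iterate the substrates over a shared state until it stabilizes.

    Illustrative only: real substrates would exchange richer signals than
    a flat dict, and convergence would need a more careful criterion.
    """
    for _ in range(max_rounds):
        previous = dict(state)
        for substrate in substrates:
            state = substrate.step(state)  # each substrate reads and writes shared channels
        if state == previous:  # feedback has settled; stop iterating
            break
    return state
```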

This model makes no claim to achieve AGI. It’s a scaffold for thinking about how general cognition might be distributed across modules, each tuned for an epistemic, affective, or supervisory function, with language remaining expressive but no longer central.

I believe this is relevant to ongoing discussions here on modularity, mesa-optimizers, and model interpretability, particularly how we might better isolate and observe alignment functions inside AI systems.


Full paper (PDF + schematic):
https://zenodo.org/records/15381197

Summary + framing post:
https://hitherto.substack.com/p/introducing-the-integrated-substrate-model


Happy to receive critique, redirections, or suggested reading. I don’t claim this is the final shape, only a step toward structure.