Architectural Separation of Memory and Capability in AI Systems
Modern AI systems increasingly combine reasoning, memory, personalization, and identity within centralized infrastructures operated by model providers. The same entity that performs inference often also controls persistent user context and long-term behavioral data.
This creates a structural governance issue. The system generating outputs also owns the memory that shapes future reasoning.
Most current mitigation approaches focus on policy guarantees, transparency commitments, or contractual controls. These approaches assume centralized architectures and attempt to regulate behavior within them. They do not change where authority over persistent identity and memory actually resides.
I have been exploring an architectural alternative called Artificially Distributed System Intelligence (ADSI).
The core premise is simple:
Persistent identity and memory should be structurally separated from centralized AI model providers.
Instead of allowing models to accumulate user context internally, persistent memory resides in a sovereign control plane under user authority, while centralized models are treated purely as stateless capability engines.
In this model, intelligence delegation becomes a layered system.
Execution Layer
User devices or local processes initiate requests.
Persona Layer
Persistent identity and memory are maintained under user authority.
Intent Mediation Layer
Requests are filtered and transformed according to explicit policies before leaving the user boundary.
Model Orchestration Layer
Abstracted task requests are routed to external model providers.
Centralized Capability Providers
Foundation models perform inference but do not retain persistent cross-request state.
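As a rough sketch of the Persona and Intent Mediation layers (all class, function, and policy names here are hypothetical illustrations, not an API from the paper), the user-side boundary might look like:

```python
from dataclasses import dataclass, field

@dataclass
class PersonaStore:
    """Persona Layer: persistent identity and memory held under user authority."""
    memory: dict = field(default_factory=dict)

    def relevant_context(self, intent: str) -> str:
        # Release only the context the user's policy associates with this intent.
        return self.memory.get(intent, "")

def mediate(intent: str, context: str, policy) -> str:
    """Intent Mediation Layer: filter and transform the request
    before it crosses the user boundary."""
    request = f"{context}\n{intent}" if context else intent
    return policy(request)

def redact_policy(request: str) -> str:
    # Hypothetical policy: strip tokens the user has marked as private.
    for secret in ("alice@example.com",):
        request = request.replace(secret, "[REDACTED]")
    return request
```

A request assembled this way carries only policy-filtered content outward; the raw memory entry never leaves the `PersonaStore`.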
In simplified form, the interaction becomes:
User intent
↓
Intent mediation and policy filtering
↓
Stateless model inference
↓
Output
Persistent memory remains within the sovereign control plane rather than within the model provider’s infrastructure.
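The end-to-end flow above can be sketched in a few lines. This is a minimal illustration under the stated assumptions (the function names and the `profile` key are hypothetical); the point is that the provider call is a pure function of the mediated request, and any memory update happens locally:

```python
def stateless_infer(prompt: str) -> str:
    """Capability provider: a pure function of the request; retains no state."""
    return f"response to: {prompt}"

def handle(intent: str, persona_memory: dict) -> str:
    # Persona layer supplies context; raw memory stays in the control plane.
    context = persona_memory.get("profile", "")
    # Intent mediation: compose and filter before crossing the user boundary.
    request = f"{context} | {intent}".strip(" |")
    output = stateless_infer(request)
    # Memory update occurs on the user side, not at the provider.
    persona_memory["last_intent"] = intent
    return output
```

Because `stateless_infer` sees only the mediated request, swapping it for a different provider changes nothing about where persistent state lives.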
This architectural separation has several implications.
First, governance constraints become structural rather than contractual. A provider cannot disclose persistent identity or memory that it never possessed.
Second, model providers become closer to compute utilities. If models operate as stateless capability engines, orchestration layers could route requests across multiple providers without transferring persistent user context.
Third, it raises questions about where alignment problems should be addressed. If attention and context are mediated externally, part of the alignment surface may shift from the model itself to the systems controlling context and memory.
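The second implication, provider interchangeability, can be made concrete. In this hypothetical orchestration sketch (the provider registry shape and cost-based selection are my assumptions, not part of the proposal), any provider can serve the request precisely because no persistent user context travels with it:

```python
def route(task: dict, providers: dict) -> str:
    """Model Orchestration Layer sketch: pick a provider by advertised cost.
    The abstracted task carries no cross-request user state, so providers
    are interchangeable compute utilities."""
    name = min(providers, key=lambda p: providers[p]["cost"])
    return providers[name]["infer"](task["prompt"])
```

Selection here is by cost, but latency, capability, or jurisdiction policies would slot in the same way.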
There are also obvious limitations.
Intent mediation cannot fully eliminate semantic leakage: even an abstracted request can reveal what the user is trying to do. Metadata leakage, such as timing, frequency, and request patterns, is likely unavoidable. And if the sovereign control plane itself is compromised, the architecture provides little protection.
So ADSI should probably be viewed as a systems architecture proposal, not a complete solution to alignment or AI safety.
Some open questions I am still thinking about:
Could externalizing persistent memory change how alignment problems manifest in large model systems?
How realistic is policy-constrained intent mediation in practice?
Would architectures like this meaningfully change incentives around data accumulation by AI providers?
If context is externally mediated, could attention itself become a governed resource?
I wrote a working paper exploring the architecture.
On SSRN: Artificially Distributed System Intelligence (ADSI)
Curious whether people working on AI systems architecture or alignment see this kind of memory-capability separation as a useful direction, or if there are obvious failure modes I am overlooking.