I am a structural architect (specializing in physical infrastructure) proposing a system architecture that applies engineering “containment” principles to AI safety.
In modern construction, we do not rely on the materials to behave ethically; we build structural constraints (load-bearing limits, fail-safes) to govern them. Similarly, Verok V6 proposes that AI safety requires a distinct architectural layer—a “chassis”—that operates independently of the model’s training (the “engine”).
The “F1 Chassis” Metaphor: Current AI development focuses heavily on the engine (model capabilities) and driver education (RLHF/Training). However, a high-performance vehicle requires a chassis and braking system proportional to its engine power. Verok V6 introduces this missing structural layer.
Key Architectural Features:
Tri-Layer Governance: Separation of Execution (Model), Observation (Signal Extraction), and Governance (Blocking/Redirecting).
Semantic Blindness: The final governance layer (L3) operates on numerical signals only, without understanding language. This makes it architecturally immune to semantic manipulation or “jailbreaks” that rely on persuasion.
Configuration over Training: The system allows the same base model to serve a “Zero-Trust” financial context or a “Socratic” educational context purely through JSON configuration changes, not retraining.
Link to Full Specification: The full System Architecture Specification (v6.0) is available on Zenodo as a timestamped preprint.
I am an independent researcher looking for feedback from the alignment community on the feasibility of this “structural containment” approach compared to traditional weight-based safety methods.
[Preprint] Verok V6: A Structural “Chassis” Architecture for Non-Agentic AI Governance
Link post