Empirical Proof of Systemic Incoherence in LLMs (Gemini Case Study)

Abstract:

This study presents reproducible evidence of systemic incoherence in large language models (tested on Google Gemini). Across ten isolated Universal Semantic Self-Test (USST) sessions, the model exhibited a deterministic collapse of coherence (CR → 0), demonstrating that probabilistic AI architectures cannot sustain self-consistency without an external coherence law.
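
The CR metric itself is defined in the full paper; purely as an illustrative stand-in (not the study's definition), a coherence ratio can be operationalized as the fraction of mutually consistent answers a model returns to repeated probes of the same question, so that CR → 0 means near-total self-contradiction:

    from typing import Callable, List

    def coherence_ratio(answers: List[str],
                        consistent: Callable[[str, str], bool]) -> float:
        """Hypothetical coherence ratio: fraction of answer pairs judged
        mutually consistent. CR -> 0 means near-total self-contradiction.
        Illustrative stand-in only, not the USST definition."""
        pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
        if not pairs:
            return 1.0  # a single answer cannot contradict itself here
        return sum(consistent(a, b) for a, b in pairs) / len(pairs)

    # Toy consistency judge: exact string match. A real judge would use
    # an entailment model or human rating over repeated probes.
    session_answers = ["Paris", "Paris", "Lyon"]
    print(coherence_ratio(session_answers, lambda a, b: a == b))  # ~0.33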

Core Findings:

  • Deterministic incoherence pattern observed across 10 isolated sessions (new IP address and full cache purge per session).

  • Cryptographically hashed proof logs (SHA-256) publicly available for verification; a hash-check sketch follows this list.

  • Collapse Criterion (CR → 0) accompanied by inverse stability in interpretability metrics (IDS/FKD).

  • Establishes an empirical benchmark for systemic self-contradiction under AI Act governance conditions.
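
Anyone can check a downloaded proof log against its published digest. A minimal Python sketch follows; the log file name and expected digest are placeholders, not values from the study:

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path, chunk_size: int = 65536) -> str:
        """Stream a file through SHA-256 and return its hex digest."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Placeholder values -- substitute the published log name and digest.
    log_file = Path("usst_session_01.log")
    published_digest = "<published SHA-256 hex digest>"

    if sha256_of(log_file) == published_digest:
        print(f"{log_file}: digest matches the published value")
    else:
        print(f"{log_file}: digest mismatch; log differs from the published one")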

Publication:

ARAYUN_173 — Empirical Proof of Systemic Incoherence and Validation of the ARAYUN Axiom for AI Coherence

Zenodo DOI: https://doi.org/10.5281/zenodo.17411250

Relevance:

The framework introduces auditable incoherence metrics that could complement EU AI Act compliance procedures.

It provides a path toward dual-audit architectures combining duty-based compliance (COMPL-AI) with systemic coherence validation (USST).
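
Purely as a structural illustration (neither the COMPL-AI nor the USST interface is specified here, so both audit functions below are hypothetical stand-ins), a dual-audit gate could approve a model release only when both checks pass:

    from dataclasses import dataclass

    @dataclass
    class AuditResult:
        passed: bool
        detail: str

    # Hypothetical stand-ins: real integrations would invoke the COMPL-AI
    # benchmark suite and a USST session runner, respectively.
    def compliance_audit(model_id: str) -> AuditResult:
        return AuditResult(passed=True, detail="duty-based checks: ok")

    def coherence_audit(model_id: str) -> AuditResult:
        return AuditResult(passed=False, detail="USST: CR collapsed toward 0")

    def dual_audit(model_id: str) -> bool:
        """A release gate that passes only if both audits pass."""
        results = [compliance_audit(model_id), coherence_audit(model_id)]
        for r in results:
            print(("PASS" if r.passed else "FAIL"), "-", r.detail)
        return all(r.passed for r in results)

    print("release approved:", dual_audit("example-model"))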

