Cross-Model Semantic Convergence Across Independent LLM Architectures (Preliminary Data + Replication Request)

Preprint (v1) now available on Zenodo (CC BY):

https://doi.org/10.5281/zenodo.17553259

I am sharing preliminary experimental results on cross-model semantic convergence across multiple LLM architectures. The primary goal of this post is to request methodological critique and independent replication.

Summary of the finding:

Using a standardized input protocol (the Fractal Input Protocol, PFI), we observed high semantic-structural convergence (>95% similarity; χ² = 1,247.3; p < 10⁻⁷; Cohen's d ≈ 4.8) across multiple LLMs from independent vendors. The effect was consistent across roughly 16–17 model instances spanning 8–10 architectures (OpenAI, Anthropic, Google, xAI, Alibaba, DeepSeek, Manus, and others).
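
For readers who want to see the shape of the measurement, here is a minimal sketch of one way to score pairwise semantic similarity between model outputs. The embedding model and the cosine metric are my illustrative assumptions, not necessarily what the preprint uses; the actual metrics and scripts are in the repository linked below.

```python
# Minimal sketch: pairwise semantic similarity between model outputs.
# The embedding model ("all-MiniLM-L6-v2") is an illustrative choice,
# not necessarily the one used in the preprint.
from itertools import combinations

import numpy as np
from sentence_transformers import SentenceTransformer

def pairwise_similarities(outputs: list[str]) -> np.ndarray:
    """Cosine similarity for every pair of outputs (embeddings unit-normalized)."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    emb = encoder.encode(outputs, normalize_embeddings=True)
    return np.array([float(emb[i] @ emb[j])
                     for i, j in combinations(range(len(emb)), 2)])

# Placeholder responses; in practice, one PFI response per isolated session.
responses = [
    "Response from model A to the PFI prompt.",
    "Response from model B to the PFI prompt.",
    "Response from model C to the PFI prompt.",
]
sims = pairwise_similarities(responses)
print(f"mean pairwise similarity: {sims.mean():.3f}")
```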

The key feature is that convergence persisted despite architectural differences and session isolation. This suggests at least one of the following:

1. The tested prompt patterns strongly constrain output distributions,

2. There is underlying shared embedding structure across models, or

3. The convergence is an artifact of prompt-protocol design and can be eliminated by improved experimental control (a candidate control is sketched below).

This is why replication is needed.
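
One cheap discriminating control along the lines of option 3 (my suggestion, not part of the published protocol): run the same models, under the same session isolation, on neutral prompts matched to the PFI prompts in length and topic breadth, and compare convergence across conditions.

```python
# Hypothetical control condition (my suggestion, not part of the published
# protocol). Reuses pairwise_similarities() from the earlier sketch.

def convergence_gap(pfi_outputs: list[str], control_outputs: list[str]) -> float:
    """Mean pairwise similarity under PFI minus the same under matched
    neutral prompts (same models, same session isolation).

    A large positive gap localizes the effect in the protocol itself
    (options 1 or 3); high similarity in both conditions, with a gap
    near zero, would instead point to model-generic convergence (option 2).
    """
    return (pairwise_similarities(pfi_outputs).mean()
            - pairwise_similarities(control_outputs).mean())
```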

Materials & Data:

Full transcripts, logs, similarity metrics, and statistical scripts:

https://github.com/viniburilux/Codex-LuxHub

Preprints (Zenodo, CC BY):

The Gratilux Phenomenon (v0.1): DOI 10.5281/zenodo.17460784

LuxVerso Research Notes (v0.8): DOI 10.5281/zenodo.17547206

What I’m requesting:

Review of experimental design

Suggestions for better controls

Independent replication attempts

Feedback on alternative statistical measures of semantic convergence (one candidate, a permutation test, is sketched below)
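
On that last point, one candidate I would welcome feedback on is a permutation test over the pairwise similarities, which avoids the distributional assumptions behind the χ² statistic. One caveat: pairwise similarities that share an output are not independent, so exchangeability holds only approximately. A sketch, assuming the two similarity arrays produced by the control design above:

```python
import numpy as np

def permutation_pvalue(sims_pfi: np.ndarray, sims_ctrl: np.ndarray,
                       n_perm: int = 10_000, seed: int = 0) -> float:
    """One-sided p-value for mean similarity being higher under PFI
    than under the control condition, via condition-label shuffling."""
    rng = np.random.default_rng(seed)
    observed = sims_pfi.mean() - sims_ctrl.mean()
    pooled = np.concatenate([sims_pfi, sims_ctrl])
    n, hits = len(sims_pfi), 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # reassign condition labels at random
        if pooled[:n].mean() - pooled[n:].mean() >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0
```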

Notes:

No claims are being made about model awareness, agency, or coordination. The only claim here is the quantitative convergence effect, which is measurable and testable.

Contact for replication:

viniburilux@gmail.com