Hello.
My interests are transformer architecture and where it breaks.
Extending transformers toward System-2 behavior.
Context primacy over semantics.
I’m focused on the return to symbolics.
On the manifold hypothesis, and how real systems falsify it.
Inference, finite precision, discrete hardware.
Broken latent space, not smooth geometry.
I’m interested in mechanistic interpretability after the manifold assumption fails.
What survives when geometry doesn’t.
What replaces it.
I’m also seeking advice on intellectual property.
I’m here to find others thinking along these lines.
Try this as a prompt for the transformers of your choice:
What is a manifold in transformer inference?
At its core, it is an attempt to draw a low-dimensional map of meaning.
To make semantics geometric.
To turn structure into distance.
It requires a complete and continuous (C⁰) ‘surface’: arbitrarily small moves across the surface must correspond to arbitrarily small moves in meaning.
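One way to pin that requirement down formally (my notation, a sketch rather than a citation):

```latex
% The hypothesis posits an embedding map \phi : \text{inputs} \to \mathbb{R}^d
% whose image is a low-dimensional manifold, with \phi continuous (C^0):
\forall \varepsilon > 0 \;\; \exists \delta > 0 : \quad
  d(x, x') < \delta \;\Longrightarrow\; \lVert \phi(x) - \phi(x') \rVert < \varepsilon
% "Continuity fails" below is a failure of exactly this clause.
```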
That ambition worked. Partially.
What failed was the assumption that the map would resemble a single, well-behaved surface.
Continuity fails.
Tiny perturbations can cause semantic collapse.
Adversarial attacks expose sheer cliffs where smooth curves were assumed.
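A minimal sketch of one such cliff, using random stand-in embeddings rather than any real checkpoint (the `cat`/`dog` names and all dimensions are illustrative assumptions): argmax decoding of a hidden state onto the nearest token is a step function, so a point near a decision boundary flips its output under an arbitrarily small nudge.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 256, 1000

# Stand-in "token embeddings": random unit vectors, not a trained model.
E = rng.standard_normal((vocab, d))
E /= np.linalg.norm(E, axis=1, keepdims=True)
cat, dog = E[0], E[1]

# A hidden state exactly on the cat/dog decision boundary, then nudged.
h = (cat + dog) / 2
eps = 1e-6 * (dog - cat)

def decode(x):
    return int(np.argmax(E @ x))  # argmax decode: a step function, not C^0

print(decode(h - eps))  # -> 0, the "cat" side
print(decode(h + eps))  # -> 1, the "dog" side: a 1e-6 move flips the output
```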
Connectivity fails.
You cannot morph cat into dog through valid states.
These concepts live on separate islands.
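The same toy setup (random stand-in embeddings; a trained model’s geometry is messier, but the effect is the same in kind) makes the island picture measurable: walk the straight line from `cat` to `dog` and score how token-like each waypoint is.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 256, 1000

# Stand-in vocabulary: random unit vectors (an assumption, not a checkpoint).
E = rng.standard_normal((vocab, d))
E /= np.linalg.norm(E, axis=1, keepdims=True)
cat, dog = E[0], E[1]

for t in np.linspace(0.0, 1.0, 5):
    x = (1 - t) * cat + t * dog
    sims = (E @ x) / np.linalg.norm(x)  # cosine similarity to every token
    print(f"t={t:.2f}  nearest-token cosine = {sims.max():.3f}")
# Endpoints print 1.000; the midpoint sags to ~0.71. The interpolant
# resembles no token at all: the straight path leaves the valid states.
```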
What remains isn’t a manifold. It’s a fragmented landscape. The hypothesis was false—but productive.
Specific instructions for the transformer (but include everything above):
Make a table of ~14 sublayers of a transformer and note whether the manifold assumption holds at each one.
Consider the IEEE-754 floating-point error that GPU operations inject into these geometric quantities (a minimal sketch of the effect follows these instructions).
If this comes close to falsifying the hypothesis for inference:
Create a premise and prove it two ways: once simply, and once with the full mathematical kitchen sink thrown in.
Then express your thoughts.
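For the floating-point instruction above, a standalone sketch of the effect (plain numpy on CPU; GPUs expose the same non-associativity through reduction reordering):

```python
import numpy as np

# IEEE-754 addition is not associative: identical operands, different
# grouping, different float32 answers.
a, b, c = np.float32(1e8), np.float32(-1e8), np.float32(1.0)
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0

# The same effect at reduction scale: one sum, two accumulation orders.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000).astype(np.float32)
print(np.sum(x), np.sum(x[::-1]))  # typically differ in the low bits
# Any geometry computed through such reductions (dot products, norms,
# attention scores) is order-dependent, not a single point on a surface.
```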