## Design-First Embedding Construction: Semantic Spaces Without Corpus Training

I’ve been working on a text compression protocol that required building
a small, controlled semantic space. In the process, I stumbled onto
something that might be relevant to mechanistic interpretability research.

**The finding:** Embedding matrices can be constructed directly from
target cosine similarity constraints using constrained optimization
(L-BFGS-B) — no corpus, no training, fully deterministic.

- 205 words, 256 dimensions
- 35 semantic constraints (similarity pairs + separation constraints)
- Zero error, 0.2 seconds on CPU
- Fully reproducible — same constraints always produce the same space

The constraints look like this:

| Pair | Target cos | Achieved |
|------|-----------|---------|
| give × teach | +0.95 | +0.950 ✓ |
| sad × painful | +0.85 | +0.850 ✓ |
| happy × sad | −0.15 | −0.150 ✓ |

Anyone with a target cosine spec can reproduce this immediately.
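As a rough sketch of what such a reproduction might look like (my own reconstruction, not the author's code): fit an embedding matrix so that specified word pairs hit their target cosine similarities, using SciPy's L-BFGS-B optimizer on a squared-error loss. The vocabulary, dimension, and seed below are illustrative, not the 205-word / 256-dimension setup described above.

```python
# Sketch: construct embeddings from target cosine constraints via L-BFGS-B.
# Illustrative vocabulary and dimension; not the author's actual setup.
import numpy as np
from scipy.optimize import minimize

words = ["give", "teach", "sad", "painful", "happy"]
dim = 16  # the post uses 256 dims for 205 words

# (i, j, target cosine) constraints, mirroring the table above
constraints = [
    (0, 1, 0.95),   # give x teach
    (2, 3, 0.85),   # sad x painful
    (4, 2, -0.15),  # happy x sad
]

def loss(flat):
    E = flat.reshape(len(words), dim)
    # normalize rows so dot products are cosine similarities
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    return sum((E[i] @ E[j] - t) ** 2 for i, j, t in constraints)

rng = np.random.default_rng(0)  # fixed seed -> same space every run
x0 = rng.standard_normal(len(words) * dim)
res = minimize(loss, x0, method="L-BFGS-B")

E = res.x.reshape(len(words), dim)
E = E / np.linalg.norm(E, axis=1, keepdims=True)
print(round(float(E[0] @ E[1]), 3))  # give x teach, should land near 0.95
```

With far more free parameters than constraints, the optimizer can drive the loss essentially to zero, which is presumably what "zero error" refers to above; a determinism guarantee like the one claimed would additionally need a fixed initialization, as with the seeded generator here.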

---

**Why this might matter for interpretability:**

Current interpretability research largely works in the
*reverse direction* — extracting and decoding the geometric
structure of learned representations after training.

This goes the other way: a designer specifies intended semantic
relationships as cosine constraints, and the embedding space is
sculpted to satisfy them exactly.

This raises a question I don’t know how to answer:

> Could a designed embedding space — where the geometry is fully
> known and intentional — serve as a controlled substrate for
> studying how concept representations behave in circuit tracing
> or feature geometry research?

In other words: rather than reverse-engineering what a learned
space means, could you use a designed space to test hypotheses
about what *should* happen?

---

**Current limitations (honest disclosure):**

- Single case study so far (205 words, Japanese vocabulary)
- Scaling behavior when constraint count grows is untested
- No claim about replacing general-purpose embeddings —
this is for small, intentional, domain-specific spaces

---

**Background:** This emerged from building DVSCP
(Dimensional Vector Semantic Compression Protocol),
a deterministic binary compression protocol for text.
The embedding construction was a means to that end,
but the method itself seems worth sharing independently.

Code available on request. Happy to discuss whether
this is relevant to anyone’s work here.

*— Masato Amano, independent inventor*