Epistemic status: design manifesto from implementation experience. The analog test is not nostalgia and
not universal proof. It is a way to recover the regulating floor a system still depends on after its operators
have forgotten how to stand on it. Strong claims are used here as handles: if one breaks, the interesting
question is what invariant it exposed. If you break a claim without recovering the invariant it was pointing
at, you have not advanced the argument; you have merely won a local language game.
The Human: Yes, I’m a human, and yes, this whole idea is mine; I cannot make it enjoyable without significant help. Unfortunately I’ve become accustomed to not expanding on my thoughts… and ellipsing the rest. It’s a bad habit, don’t get into it. During the text itself I’m the editor. Hope it’s fun...
[disclaimer next] Outside the text I’m the thinker; Claude’s the good writer. This article, for me, is maybe 7 thoughts and 2 links. I tend not to be able to publish my thinking, but sometimes I do feel a thought is good enough to hydrate. If you’re interested in more of me, ask a question or leave a comment; answers are all me. They’ll be short, maybe a little too acid {probably very acid}, but they’ll be mine in fact and form.
The accretion default
A friend of mine—entrepreneur, husband, father of three, somehow also the person who agreed
to run a Dungeons & Dragons campaign for a group of first-time players—ran into a problem on
session one. His players were four programmers. None of them had ever played a tabletop role-
playing game. The bottleneck he expected was how do you play the game: rules, dice, character
sheets, the procedural surface. The bottleneck he encountered was what is a role-playing game: an
ontological surface, much harder to cross.
He began the way most narrators do. The party arrives at point A. A quest-giver is waiting. On
the road they encounter level-one enemies and a thread of the larger storyline. First combat begins.
And the session, in his words, became the biggest shit-show he had ever personally experienced, in
either the player or narrator seat.
The quality of the shit-show was specific. None of the players had read their abilities. None of them knew how to interpret what their character would do as distinct from what they would do. All of them were running on isekai vibes: anime tropes about being teleported into a fantasy world, which gave them a great deal of confidence about atmosphere and almost nothing about mechanics.
His instinct, the same instinct most narrators have in this situation, was to add. Add a primer. Add
a reference card. Add a tutorial session. Add a Discord channel for rules questions. Add a character
sheet redesign that surfaces abilities more visibly. Add a session-zero ritual to align expectations.
Add, add, add. Each addition made local sense. Each addition raised the floor on what a new
player had to absorb before they could play. Each addition was a tax paid by the players, on the
assumption that the players were the variable and the experience was the constant.
This is the asymmetry. In software, in education, in institutional design, the practitioner who proposes removing a layer is asked what they intend to add in its place. The practitioner who proposes
adding a layer is asked, at most, whether the addition is well-engineered. A new framework arrives,
and projects adopt it. A new hardware tier becomes available, and the minimum specification rises.
A new pedagogical theory is published, and curricula expand. A new compliance regime appears,
and procedure thickens. The default move is addition; the default question, when addition fails, is
what else can we add?
Beneath this asymmetry sits a load-bearing assumption, rarely stated because it does not need to
be: the user is the variable; the experience is the constant. Whatever the work asks of its end (the student, the citizen, the player, the operator, the device), the end adapts. The end buys the new hardware, takes the additional course, files the additional form, reads the additional primer, accepts the additional loading screen. The system holds steady. The end pays.
The costs of this stance are familiar enough to need only naming: scope-creep that consumes timelines, framework bloat that consumes working memory, minimum-specification spirals that consume
access, and solutions that work in the sense of running, but are not the solution in the sense of being
correct for the problem.
Beneath these costs sits a subtler one. Accretion-mode design grants the
practitioner freedom of action without orientation. Any number of additions are defensible; the
practitioner picks the one the framework suggests, the tutorial demonstrated, the senior colleague
preferred. The freedom is the trap.
Without a floor to push against, there is no way to know
whether the chosen addition is correct, only whether it is plausible.
There is an obvious objection, and it is worth dispatching now: is this not how civilization works?
Cumulative knowledge, the scientific edifice, libraries built on libraries, each generation standing on
the last. This is true and is not the phenomenon under attack.
Cumulative knowledge is additive
in the structure of what is known, not in the structure of what each user must climb to participate.
Newton’s laws made physics easier, not harder, for the student who came after. The accretion failure
mode names something else: the situation in which adding capability to a system raises the floor
that the system’s ends must clear. A library that requires its readers to first buy a new building
before they can enter has stopped functioning as a library. The distinction is not always sharp, but
it is the distinction the rest of this paper turns on.
The pattern recurs across domains. Education adds curriculum, assessment overhead, and credentialing layers, and the floor for participation rises faster than the capability being delivered.
Institutions add compliance layers and procedural safeguards, each defensible in isolation, and the
floor for citizen interaction rises until the institution no longer serves its mandate. Software adds
abstractions, frameworks, and dependency trees, each justifiable, and the floor for end-user hardware and developer onboarding rises until the work no longer reaches the people it was built for. In
every case the same shape: the user adapts; the system holds steady; the floor rises; the work that
remains is not the work that was wanted.
The narrator’s session, on session two, did not include any of his planned additions. He ran the
inversion. We will return to him. For now: the opposite stance (treating the user, the device, the context, and the substrate as fixed, and treating the system as the variable that must adapt to them)
has been practiced for centuries under other names. This paper proposes its deliberate recovery,
beginning with the inversion it performs on the relationship between system and user.
The inversion: respect the invariants
The narrator’s session-two move was the opposite of what his session-one instincts had recommended.
Instead of layering primers, reference cards, and tutorials on top of his players, he removed the
assumption that the players’ knowledge was the variable to be optimized. The players, he decided,
were going to remain exactly what they were: four programmers on the first night of their first
campaign, with anime tropes for intuition and no patience for paperwork. They were the floor.
Whatever the system asked of them had to fit what they actually were on night one, not what he
wished they had become by night three.
This left him with the question of what the system was supposed to do. The 20-sided-die rule system
has been refined for fifty years; the underlying mechanics are well-understood and well-documented. He purchased a subscription to the publisher’s tower, studied the mechanics, built the campaign,
designed homebrew items, and went looking for the virtual tabletop that would deliver this work to
four programmers on a Discord call.
This is where he despaired.
The virtual tabletops he found were built around the systems his
players would have used: dice rollers, character sheets, initiative trackers, the procedural surface of
combat. They were not built around the systems he kept track of as narrator: reputation across
factions, regional currency, story markers, food and provisions, family trees, the slow accretion
of consequence
{editor here: trust me, there will be consequences.}
that makes a campaign feel inhabited rather than encountered. Tokens on these platforms were fixed props (a portrait, a position, a hit-point bar), with very little surface for
the narrator’s bookkeeping to attach to. The platforms had optimized for the wrong half of the
table. They had treated the player-facing surface as the constant and let the narrator-facing surface
accrete into spreadsheets, sticky notes, and the contents of his head. The medium that was supposed
to deliver the campaign was structured against the campaign’s actual shape.
Consequences
After two bottles of wine, a cold slice of pizza, and three panic attacks, he said the dumbest single
line a person can ever say in this situation:
“Let’s do it, let’s make the VTT, how hard can it be?”
{Editor: The answer, if you don’t know it, is, DAMN difficult!}
The
line is dumb because it is the line that always precedes work much harder and much truer than
the speaker imagines, and the speaker, every time, knows this and says it anyway. It is the line
people say when they have stopped trying to fit themselves to the existing tools and noticed that
the existing tools are fitted to something other than what the work is. It is, in fact, the line that
opens this paper.
What the narrator did next was the methodological move this paper is about. He stopped asking
which existing platform should I adopt and started asking what does this work actually require. The
answers came as a list of constraints he could not violate without breaking the work itself. The
campaign had to support the bookkeeping he already did in his head: factions, currencies, NPC
family trees, food chains, reputation ledgers. The campaign had to run on Discord because his
players would not download a desktop client. The campaign had to work on his players’ mobile
phones during their commutes and on his own laptop during sessions. Payment for content had to
clear a Brazilian credit card {editor: very attentive readers will notice that this seems like a moot point; I will ask you to try and use Banco do Brasil, mind you, with their top-level credit card, on a regular basis online… If you can, please do tell the editor what black magic you’ve assigned to yourself}, which precluded most international marketplaces. The campaign had to
feel like a tabletop, not a video game: preserve the texture of dice and decision and consequence,
not the texture of menus and inventory. The campaign had to be authorable by him, in his time,
without a development team.
These constraints were not feature requests. They were the shape of the work itself, and the work
would not exist if any of them were violated. They were what we will call, throughout the rest
of this paper, the work’s invariants: the things that cannot be removed without breaking what is
being built. An invariant is what is left when one asks, of every requirement, whether the work
could survive its violation. If the answer is yes, the requirement is a preference; the system can
negotiate around it. If the answer is no, the requirement is an invariant; the system must respect it
or fail.
Invariants come in several kinds, and the paper will return to each. There are physical invariants:
the speed of light, the Planck length, the conservation laws, the fact that information requires a substrate. There are mathematical invariants: the greatest common divisor of two integers, the closure
conditions of a system of equations, the topological features that survive continuous deformation.
There are substrate invariants: the bandwidth a network can carry, the operations a processor can
perform natively, the resolution at which a sensor can distinguish signal from noise. And there are
situational invariants: the device the user owns, the network they are on, the language they speak,
the time and money and attention they can spend. The narrator’s list mixed all four kinds. So does
almost every real project’s list, when honestly written down.
The inversion this paper proposes is to treat the invariants as fixed and the system as the variable.
The user does not adapt to the system; the system fits the user, the device, the substrate, the
physics, the math. Every design decision is checked against the invariants. A decision that respects
them is admissible. A decision that violates them is rejected, regardless of how appealing it is in
isolation. The methodology is conservative in a strict sense: it conserves the invariants, and lets
that conservation force the design.
The consequence is structural. When the invariants are fixed, the designer cannot solve the problem
by adding capability that requires violating them. Most of the moves available in accretion-mode
are suddenly closed. What remains, as the primary move, is removal: strip out the assumptions, dependencies, and capabilities that conflict with the invariants, and see what remains. The remaining structure, if anything remains at all, is the one the work actually requires. Removal is not a
stylistic preference of the methodology. It is a mechanical consequence of fixing the invariants and
letting addition be ruled out.
This is the move the narrator made when he opened a blank document and stopped looking for the
existing virtual tabletop that would carry his campaign. He had identified the invariants. He could
see that no existing tool respected all of them. The remaining design space was small and sharply
shaped: build the thing that respects the invariants, remove every assumption inherited from the
existing tools that violates them, and see what is left. What was left turned out to be older, cleaner,
and stranger than anything he had expected. We will return to him in section 4.2, where the design
space resolves into a specific architecture. The architecture has been waiting since 1980.
{Editor: I was born in 1991 mind you… I’m not a seventies kid}
The diagnostic question
The narrator did not begin building the virtual tabletop. He poured another glass of wine, opened
a blank document, and began building the engine that the tabletop would eventually display. He
was, at this point, still under the naive impression that the rendering would be the easy part. He
had already written a primitive grid in three.js; he intended the tabletop to be three-dimensional;
the front-end would, presumably, fall out of an afternoon’s work once the back-end was done. This
impression turned out to be wrong in the specific way that almost all such impressions turn out to
be wrong, but it was wrong in a way that mattered, because the order of operations it imposed (engine first, rendering second) was exactly correct. He built the clockwork before he built the
window through which to view it.
The clockwork—this is what we ended up calling the result—was a fully functional simulation of
a tabletop world running on deterministic rules. NPCs ate food, depleted reserves, traded, moved
between settlements, married, died, were born. Currencies flowed between regions and adjusted
prices. Reputations propagated across factions according to the actions of player characters. Family
trees extended forward in time. Weather affected travel; travel affected trade; trade affected prices;
prices affected reputation; reputation affected which quests were available. The whole system ran on
a seed plus a diff store: identical seeds produced identical worlds; identical diffs replayed identical
sessions; the world state at any moment was fully reproducible from compact canonical data.
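The seed-plus-diff architecture is compact enough to sketch. Below is a minimal TypeScript illustration; the state and diff shapes are ours, invented for the sketch and far poorer than the engine's, but the replay property (identical seed plus identical diff log yields an identical world) is exactly the property described above.

// Deterministic seeded PRNG (mulberry32, a standard public-domain routine):
// the same seed always yields the same stream.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Hypothetical shapes, much poorer than the real engine's.
type WorldState = { tick: number; prices: Record<string, number> };
type Diff = { tick: number; good: string; delta: number };

// Replay: the world state at any moment is a pure function of (seed, diffs).
function replay(seed: number, diffs: Diff[]): WorldState {
  const rng = mulberry32(seed);
  const world: WorldState = { tick: 0, prices: { grain: 10 + Math.floor(rng() * 5) } };
  for (const d of [...diffs].sort((a, b) => a.tick - b.tick)) {
    world.prices[d.good] = (world.prices[d.good] ?? 0) + d.delta;
    world.tick = d.tick;
  }
  return world;
}

Running replay(42, log) twice, or on two machines years apart, produces identical worlds; that is the whole trick.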
The clockwork ran. It ran with no rendering at all. The narrator could query it from a terminal
and ask: what is the price of grain in this town today? Which faction holds this region? Has
the player’s reputation propagated to the merchants in the next city? The answers came back
deterministic, internally consistent, and reproducible. The world existed before any of it could be
seen, and this turned out to be the structurally important fact about the entire project. The world
was not waiting on its rendering to become real. The world was the real thing; the rendering, when
it came, would be a faithful notation for something that already worked.
This is the diagnostic.
Given enough time, resources and physical space, could the system be constructed?
The narrator’s clockwork is the question’s literal answer. Given enough physical space (a warehouse, a city block, a sufficiently large table) one could build the clockwork as a clockwork. NPCs
as small mechanical figures with food gauges. Markets as racks of physical coins moving between
trays. Family trees as paper records appended session by session. Reputations as colored beads in
jars labeled by faction. Travel as positions on a physical map. Weather as a die rolled each in-game
day, modifying the speed at which figures could be moved between locations. The simulation would
be slow. It would consume space. It would require many people to operate. But it would produce
the same outputs the digital clockwork produces, because both are bounded by the same invariants:
economic conservation, biological hunger, geographical adjacency, causal propagation, deterministic
rule application.
The clockwork passes the diagnostic. The digital implementation is a notation for an analog system
whose invariants the digital implementation respects. This is what makes the engine durable. The
engine is not solving a problem unique to computation; it is rendering, in computation, a problem
that has analog physics, and the computation inherits the analog physics’ constraints. Bugs in the
engine, when they appear, almost always turn out to be places where the digital implementation
drifted from the analog physics: a price moved without a transaction, a reputation propagated without a witness, a death occurred without a cause. The diagnostic not only tells the
narrator that the engine should exist; it tells him how to debug it. Where does this digital behavior
diverge from what the analog physics would produce? Find the divergence. Restore the invariant.
The engine becomes correct.
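The debugging discipline can be made concrete. A sketch, again with hypothetical event shapes (the engine's real worldline is richer): an audit that walks the log and flags every price movement with no witnessing transaction, which is to say, every place the notation drifted from the analog physics.

// Hypothetical worldline events, reduced to the two kinds the audit needs.
type WorldlineEvent =
  | { kind: "transaction"; tick: number; good: string; qty: number }
  | { kind: "price-change"; tick: number; good: string; from: number; to: number };

// Invariant: no price moves without a transaction in the same tick for the
// same good. Violations are exactly the divergences described above.
function auditPriceConservation(log: WorldlineEvent[]): WorldlineEvent[] {
  const witnessed = new Set(
    log.filter(e => e.kind === "transaction").map(e => `${e.tick}:${e.good}`)
  );
  return log.filter(
    e => e.kind === "price-change" && !witnessed.has(`${e.tick}:${e.good}`)
  );
}

An empty result means the digital behavior matches what the analog physics would produce; a non-empty result is a list of divergences to restore.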
Two examples will sharpen the diagnostic before we apply it more generally. A spreadsheet passes
cleanly: the analog version is a clerk with a paper ledger, ruled into columns, summing rows by
hand. Every operation a spreadsheet performs has a clerk-and-paper analog, and the spreadsheet’s
invariants (additivity, conservation of recorded quantities, deterministic recalculation) are inherited
from the clerk-and-paper system. Spreadsheets are durable software because they are correct notations for correct analog systems; their forty years of continuous use across every domain in human
commerce is the diagnostic’s strongest single piece of evidence.
A blockchain also passes, though less obviously. The analog version is a network of mutually
distrustful notaries, each maintaining a duplicate ledger, agreeing periodically on a canonical version
through a procedure costly enough that defection is unprofitable. This system existed in precomputational form: medieval merchant networks, diplomatic ratification ceremonies, the Roman
census. The blockchain is a notation for an analog protocol that humans had already invented and
operated for centuries. Its durability, where it exists, comes from the analog protocol’s durability.
An infinite-scroll feed optimized for engagement metrics is the counterexample. What is the analog
version? A magazine? A magazine has finite pages, an editor, a publication date, and a price; its
invariants include the writer’s labor, the reader’s attention budget, the printing schedule. None
of these survive into the feed. A conversation? A conversation has reciprocity, turn-taking, an
identifiable other; none of these survive either. A library? A library has cataloguing, retention, and
a retrieval architecture; the feed has none of these. The feed has no analog version. There is no
physical system one could build, given enough physical space, that produces the feed’s behavior,
because the feed’s behavior is not bounded by any invariant the analog world contains. It is
unanchored. The diagnostic predicts that such systems will be brittle, will produce unintended
side effects, will fail in ways unpredictable at design time, and will require continuous accretion of
policy and intervention to maintain even nominal function. This prediction has not been falsified
by the historical record.
The diagnostic is not always sharp at the boundary. A modern operating system has analog components (filing cabinets for files, bulletin boards for shared memory, telephone exchanges for interprocess communication) and unanchored components (process schedulers optimizing utilization metrics that have no physical referent). Real software is rarely entirely one thing. The diagnostic’s value
is not in delivering a binary verdict but in naming what is anchored and what is not, so the practitioner knows which parts of the system are durable by inheritance and which parts will require
continuous maintenance against drift.
The narrator returned to the diagnostic without naming it, repeatedly, through the months that
followed. Every feature the engine could plausibly need was checked: could the analog version of this
be built? Some features passed and were implemented. Some failed and were rejected. The rejection
list mattered as much as the implementation list, because every rejected feature was a feature that,
in accretion-mode, would have been added by default. The diagnostic was the narrator’s compass.
It pointed back, every time, to the analog physics of the world he was building, and it told him
which of his ideas were notations for that world and which were notations for nothing at all.
{Editor again: you wouldn’t believe just how much made it through the cut, and how much didn’t… Truly, a productive 8-hour sprint}
The clockwork was now running. The narrator had, on his hands, a working invariant-respecting
simulation of a complete tabletop world. What he did not have, yet, was anything a player could
see. The front-end was waiting. He was, by this point, no longer under the naive impression that it
would be easy.
Rare-form solutions
The methodology described so far names a stance (respect the invariants) and a diagnostic (could
the analog version be built). What the methodology produces, when applied to a real problem, is a
class of solutions with characteristic features. The solutions tend to be older than expected. They
tend to require less infrastructure than their accreted competitors. They tend to be structurally
resistant to bloat. They tend, when found, to feel as though they had always been there, and to
provoke in the practitioner the slightly humbling recognition that the work was less invention than
uncovering.
We call these rare-form solutions. The name is not meant romantically. It is descriptive: solutions
of this kind are empirically uncommon in any given decade, not because they are hard to find
but because the dominant stance is pointed away from them. Most practitioners search southward,
toward the next addition, the next abstraction, the next capability not yet built. Rare-form solutions
are found by facing the other direction: toward what has already worked, toward the substrate’s
own grain, toward the analog physics the problem is a notation for. They are rare in the search
record, not in the world.
Three signatures
A rare-form solution can be recognized, with reasonable reliability, by three structural signatures.
The signatures are descriptive; they emerge after the fact rather than guiding the search. But once
one knows what they look like, one stops mistaking them for naivete.
Signature one: the solution predates its current problem-framing, often substantially. When the design space is correctly bounded by invariants, the search for a fitting structure tends to
terminate at a structure already discovered by someone working under similar invariants in an earlier
era. Rogue (1980) solves the world-rendering problem for tabletop role-playing on modern web
infrastructure. Movable type (1450) solves the bandwidth problem for shipping textured surfaces
to remote clients. Op-amp analog computing (1940s) solves the energy-per-operation problem for
neural network inference on neuromorphic hardware. In each case the solution feels contemporary because the framing is contemporary; the solution is not contemporary. It was waiting.
Signature two: the solution requires no infrastructure that the problem does not already need. It fits the floor; it does not raise the floor. Where accretion-mode solutions characteristically add a hardware tier, a framework dependency, a credentialing layer, a service contract, the
rare-form solution proceeds within the infrastructure already present. This is the operationalized
form of respecting the user-as-constant invariant: the solution inherits the user’s actual conditions
and lets those conditions shape the design. Rogue ran on a VAX terminal in 1980 because that was
the infrastructure students at UC Santa Cruz had. Forty-six years later, the same data model fits
inside a browser tab on a mobile phone over a Brazilian cellular connection because the infrastructure floor was respected then and is respected now, and the data model travels because it is small enough to travel anywhere.
Signature three: the solution has a structural ceiling on scope-creep.
This is the property
most absent from accretion-mode software, and the property most worth naming explicitly. The
rare-form solution, once instantiated, refuses accretion, not as a stylistic choice of the maintainer
but as a mechanical consequence of its construction. To add a feature, the new feature must fit the
invariants the existing structure respects. If it fits, the addition is admissible and usually trivial. If
it does not fit, the addition cannot be made without violating the invariants, which would break the
existing structure. The ceiling is enforced by the architecture, not by discipline. The architecture
is the gatekeeper. Most of the long-running software projects that have remained intelligible across
decades (the Unix utilities, TeX, SQLite, certain roguelikes still in active development after thirty years) have this property. Most software projects do not, and bloat into illegibility within five.
The three signatures cohere. A solution that predates its framing (signature one) typically does so
because it was found by people respecting invariants the framing has since forgotten; that respect
produces infrastructure-frugality (signature two) and structural anti-bloat (signature three). The
signatures are not independent diagnostics. They are three views of the same underlying property:
a structure shaped by its invariants, which is therefore stable under the invariants and brittle only
when the invariants themselves shift.
Worked example: the engine, the front end, the scale
The narrator returned to the front-end problem with a working engine and a question he had not
been able to ask before: what does this engine require of its display layer? The clockwork ran on
a seed plus a diff store; identical seeds produced identical worlds; the entire world state at any
moment was reproducible from a small canonical record. The engine had a property that most
software does not have: it was cheap to regenerate from compact data. The question this raised,
almost immediately, was whether the rendering had to be any different.
The default answer (the answer accretion-mode would have given) was no: the rendering is a
separate concern, the engine produces state and the front-end visualizes state and these are loosely
coupled by some serialization protocol. This is the architecture nearly every modern web application
uses. It is also the architecture that requires the rendering to ship pre-built assets to the client (mesh
files, textures, sprite atlases), to handle real-time visualization through a 3D rendering engine on the
client (which forces hardware requirements), and to maintain a synchronization protocol between
the canonical engine state and the client’s local representation (which forces continuous network
traffic). Each of these requirements is a floor-raise. The narrator could not afford any of them.
The Brazilian credit card invariant ruled out the asset marketplaces. The mobile-phone invariant
ruled out 3D rendering on the client. The intermittent network invariant ruled out continuous
synchronization. The rendering, following accretion-mode defaults, was incompatible with the floor
the engine itself respected.
This was the methodologically important moment, and the narrator did not initially recognize it.
He spent some weeks trying to reduce the default rendering’s footprint: smaller meshes, lower-
poly models, compressed textures, asset streaming. None of it worked, because none of it was small
enough. The bandwidth budget was not generous; it was Discord-embed sized. The hardware budget
was not modern; it was whatever phone the player owned. The default rendering’s footprint, even
compressed to its theoretical minimum, was orders of magnitude above the floor. There was no
path to the floor by reducing the default. The default was the wrong shape.
What the narrator did next was the move this paper is about. He stopped trying to fit the default
rendering to his floor and started asking the diagnostic question of the rendering itself. Given
enough physical space, could the analog version of this rendering be built? The engine passed the
diagnostic: the clockwork could be built as a clockwork. But the rendering, as conventionally
implemented, did not pass: there was no analog version of “shipping 3D mesh files over the network
for client-side rasterization.” That operation has no physical referent. It is a cluster of digital
abstractions optimized for a substrate (modern GPU, modern bandwidth, modern device tier) that
the narrator’s invariants did not include. The default rendering was an unanchored solution. It
worked, in the sense that it ran, but it was not a notation for anything physically meaningful. It
was an artifact of the substrate it had been designed for.
{Editor: hey y’all’s, it’s me, the crazy human, again here to tell you… Yup, I could have let it go at this point, but it was so inefficient… When I looked at 200-300 MB per view for a “normal” VTT pipeline… It was a little too much. I’d recommend you never enter the rabbit hole, but, if you ever do… Stop before the next implementation}
The rendering that does pass the diagnostic is the one a physical table already performs: a diorama. This is a discrete grid with material textures and standing figures. It is, when written down, a structure programmers had built and shipped under another name for forty-six years.
Rogue (1980) represented its world as a grid of typed cells, each cell denoting a material or an entity
with a single character. The wall was “#”. The floor was “.”. The player was “@”. The dragon was “D”. The
grid was authored by the designer, manipulated by the engine, and rendered by the terminal. The
rendering was trivial: write the character. The character was the cell. NetHack (1987) extended
this model with greater content depth. Dwarf Fortress (2002, in continuous active development)
extended it to running an entire civilization simulation inside the same data model. The lineage
represents perhaps the most persistent unbroken line of game architecture in the discipline’s history.
It is also, structurally, exactly what a tabletop diorama is: a typed grid, rendered by stamping
material representations into positions on a surface, with figures standing on the cells.
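The data model is small enough to state in full. A minimal TypeScript rendering of the Rogue-era grid (the glyph set truncated to the four glyphs named above; everything else is our framing):

// The roguelike data model in miniature: a typed grid of glyphs.
// The character is the cell; rendering is writing the character.
type Glyph = "#" | "." | "@" | "D"; // wall, floor, player, dragon
type Grid = Glyph[][];

function render(grid: Grid): string {
  return grid.map(row => row.join("")).join("\n");
}

const room: Grid = [
  ["#", "#", "#", "#", "#"],
  ["#", ".", "@", ".", "#"],
  ["#", ".", ".", "D", "#"],
  ["#", "#", "#", "#", "#"],
];
console.log(render(room)); // the terminal is the whole rendering pipeline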
The narrator’s engine extended the model along one axis (height) to produce a three-dimensional
grid of typed cells, with each cell still denoting a material via a single glyph. A creature was a stack
of cell-slices, like an MRI of imaginary anatomy. A tile was a slice at a fixed height. A scene was a
region of slices. The data model was the roguelike data model rotated ninety degrees and extended
into a third dimension. The shipping format was a small dictionary of material textures (the glyph
alphabet, fewer than fifty entries) plus a recipe (the visible-faces extraction of the typed grid) sent to
the client per scene. The client cached the dictionary in IndexedDB (permanently, across sessions) and composed scenes by stamping textures according to recipes. The first session’s bandwidth
was measured in tens of kilobytes; subsequent sessions, with the dictionary cached, were measured
in hundreds of bytes per scene.
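A sketch of the client half of this pattern, with invented names (TILE, composeScene) and the IndexedDB persistence elided to a comment, since the composition step is where the economy lives:

type MaterialId = string; // e.g. "stone", "grass"; ~50 entries in production
type Dictionary = Record<MaterialId, ImageBitmap>; // fetched once, cached in IndexedDB
type Recipe = { x: number; y: number; material: MaterialId }[]; // shipped per scene

const TILE = 32; // pixels per cell; an assumption for the sketch

// Compose a scene by stamping cached textures according to the recipe.
function composeScene(
  ctx: CanvasRenderingContext2D,
  dict: Dictionary,
  recipe: Recipe
): void {
  for (const cell of recipe) {
    const texture = dict[cell.material];
    if (texture) ctx.drawImage(texture, cell.x * TILE, cell.y * TILE, TILE, TILE);
  }
}

Only positions and material ids cross the network; the pixels never do, which is what makes the per-scene figures above possible.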
This pattern (ship the dictionary once, ship the recipe per scene, compose at the receiving end) is the architectural pattern of sprite sheets in 8-bit and 16-bit games, of CSS sprite atlases on
the early web, of tilemap-based level editors, and, ultimately, of movable type. Gutenberg’s insight
was not that letters could be cast; casting was older than him. The insight was that reusable typed
elements composed by recipe scaled to arbitrary content with negligible per-page cost. The browser
becomes a typesetting machine. The world matrix becomes a manuscript. Rendering becomes
printing. The engine has taken this 576-year-old pattern, rotated it ninety degrees into the third dimension, and discovered that it works exactly as well in three dimensions as it has worked in two for half a millennium.
Apply the three signatures.
Predates its problem-framing. The data model is from 1980; the composition pattern is from 1450. The earliest plausible contemporary framing of the problem (3D web tabletop role-playing games on mobile devices) dates to perhaps 2015. The solution predates the framing by three to six decades on the data model, by more than five centuries on the composition pattern.
Requires no infrastructure the problem does not already need. The engine ships a small dictionary and small recipes over HTTP. The client uses a canvas element and an IndexedDB cache, both of which are present in every browser the players already own. There is no new hardware tier, no specialized rendering pipeline, no framework dependency above what the browser provides natively. The floor is the floor.
Structural ceiling on scope-creep. Every conceivable feature the engine might want to render is checked against the same question: can it be expressed as a glyph operation on the typed grid? A new creature is a new arrangement of glyphs in a slice-stack. A new material is a new entry in the dictionary. A new spell effect is a glyph perturbation rule. A new piece of equipment is a small typed grid stamped onto an existing one at an addressed position. If the feature can be expressed as a glyph operation, it is admissible and usually trivial to implement. If it cannot, it cannot be added without breaking the architecture’s invariants, and the architecture itself rejects the addition. The engine’s scope is bounded by what the typed grid can carry. Within that bound, the scope is enormous; beyond that bound, the addition does not fit and is not made.
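The gatekeeping can be expressed in the type system itself, which is one way to see why the ceiling is architectural rather than disciplinary. A sketch (the operation names are ours; the engine's actual operation set is richer):

// Every renderable feature must be a constructor of this union.
type GlyphOp =
  | { op: "stamp"; at: [number, number, number]; glyph: string }          // a material in a slice
  | { op: "graft"; item: string[][]; onto: string; at: [number, number] } // equipment onto a creature grid
  | { op: "perturb"; rule: (glyph: string) => string; cells: [number, number][] }; // spell effects

// A new spell effect, phrased as a perturbation rule: admissible, trivial.
const scorch: GlyphOp = {
  op: "perturb",
  rule: g => (g === "." ? "," : g), // floor glyphs scorched to ash
  cells: [[3, 4], [3, 5], [4, 4]],
};

// A feature that cannot be phrased as a GlyphOp has no constructor here.
// The compiler, not maintainer discipline, rejects the addition.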
The methodological move that produced this result is worth naming explicitly, because it is also the move Ada Lovelace performed in her Notes on Menabrea’s account of the Analytical Engine in 1843. Lovelace did not build new hardware. She recognized that the engine Babbage had designed for arithmetical calculation could be a notation for music: that its operations, if read as intervals and durations rather than as quantities, would compute musical composition with the same machinery. Same hardware, different reading, both anchored to the hardware’s invariants. The narrator’s engine performs the same operation in four places at once. The typed grid is read as creatures (slice stacks of materials), as terrain (ground-plane slices of materials), as items (small grids graftable onto creature grids), and as effects (perturbation rules on existing grids). One substrate, four readings, all anchored to the substrate’s invariants. The methodology this paper proposes is, in this sense, an inheritance of Lovelace’s: recognize what the substrate’s operations can faithfully be a notation for, and let the recognition do the work the invention would otherwise have to do.
The scale of what this produces deserves to be named. The engine the narrator built is not a prototype. It is an eighteen-tier ladder of entities, ordered from atomic substance (commodities, affixes, seed data) at the bottom to world-state scalars (the planetary tick, the canonical worldline, the topological graph) at the top. Between these endpoints sit items, containers, living units, actors, small collectives, scenes, rooms, operations, chunks, districts, settlements, edges and routes, regions, kingdoms, continents, and planets. Each tier is composed of the tier below and belongs to the tier above. NPCs eat food whose prices propagate through markets whose currencies belong to kingdoms whose pantheons live at continental scope. Reputation graphs run across factions; weather runs at regional cadence; the canonical worldline appends every tick, every observation, every domain write since the world began, in an event-sourced log from which any historical state can be reconstructed.[1]
The whole structure runs on a small core of pure-infrastructure primitives: a manifold function (the atomic transformation, pure and deterministic), a manifold matrix (the container that aggregates manifold functions over time), a topology pointer (the world graph plus domain inheritance plus entity registry), an append-only worldline (the canonical history), a seeded pseudorandom generator (the determinism primitive), and a tick engine the narrator calls the Clockwork. There are forty-seven concrete subclasses of the manifold matrix, each handling one slice of the simulation: economy, warfare, religion, weather, narrative, ecology. They are coordinated by seven dependency layers: physical, extraction, economy, faction, settlement, ecology, hub services. The whole thing fits in the narrator’s head because each layer is a notation for a physical process whose invariants the layer respects.
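Stated as interfaces, the core is small. The names below are the paper's; the signatures are our assumptions about their shape, not the engine's actual API:

// The atomic transformation: pure and deterministic by construction.
type ManifoldFn<S> = (state: S, tick: number, rng: () => number) => S;

// One of the forty-seven concrete subclasses: economy, warfare, weather...
interface ManifoldMatrix<S> {
  domain: string;
  step: ManifoldFn<S>; // aggregates this domain's manifold functions per tick
}

// Append-only canonical history: any past state is reconstructible from it.
interface WorldlineEntry { tick: number; domain: string; write: unknown }

interface Clockwork<S> {
  seed: number;                  // input to the determinism primitive
  worldline: WorldlineEntry[];   // the canonical worldline
  matrices: ManifoldMatrix<S>[]; // ordered by the seven dependency layers
  tick(state: S): S;             // one planetary tick: apply, append, advance
}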
The rendering, in this context, is a small thing. The engine is the work; the rendering is the window. But the window had to be shaped exactly the way it was shaped, because no other shape would have fit the engine the narrator had built. The engine’s invariants, deterministic, regenerable from compact data, structurally typed at every tier, demanded a rendering with the same properties:
deterministic;
regenerable from compact data;
structurally typed at every cell.
Any rendering that did not respect these invariants would have introduced drift between what the engine knew and what the players saw, and that drift would have, over time, broken the engine’s most important property: that the world state is the truth, and everything else is a notation for it. The rendering had to be the correct notation for the world state the engine produced, or the world would not survive its own visualization.
The rendering the narrator found is the correct notation. Not because he invented it. Because it was already there, in Rogue, in NetHack, in Dwarf Fortress, in movable type, in the medical-imaging convention of MRI, in every architectural tradition that had respected the same invariants in earlier eras. The narrator’s contribution was the recognition that these traditions were already the answer to a question the contemporary framing of his problem had been trying to invent a new answer for. The answer predated the question. The compass had been pointing back the entire time. He simply turned to face it.
We will name the lineage that points there in the next section. The narrator’s engine is now in production. The rendering is a few hundred lines of canvas-drawing code. The dictionary is fifty glyphs. The bandwidth is Discord-embed sized. The world has been running for some months. NPCs are eating food, currencies are flowing, factions are propagating reputations, and the players, four programmers who started a campaign on isekai vibes and no abilities-read, are now in their second year of play, asking the narrator pointed questions about the price of grain.
A second artifact, written tonight:
checkers.tp
The narrator’s engine includes a small notation system the engine itself uses to specify rule-bound interactive systems (combat, economic transactions, faction dynamics, and so on). The notation, called .tp after the engine’s topology-pointer primitive, expresses systems as constrained transitions on a manifold matrix. While this paper was being drafted, on the same evening, the narrator wrote a small .tp specification for the game of checkers, partly to test whether the notation could be exercised fluently after some time away from it, and partly to produce a small artifact whose properties could be examined alongside the paper’s argument.
The artifact is reproduced here in full:
MM[8:8]{
Where N{x, y} and n{x+1, y} are N=white n=black
0,0 = x,y{1,1} = white
t=0 when y[1,2,3,6,7,8] contain{ Piece[white, black], and ∅ = y[4,5] },
Let piece[black, white]
Δt=∅y where [Y1=t0 and Y2=t1] | [Δx=1 and Y=+1]
Piece[white] on x=odd and Piece[black] on x=odd Δtn
Piece[black] on x=even and Piece[white] on x=even Δtn+1
Piece[white]=y8 then piece[WHITE] and Δt=∅y where{
Y1=t+1[y8] and Δx=[1,7] Δy=[8,1] Mirror[black] },
Piece[white, black]=∅ when piece[black, white]{
t=0{x,y ≠ x,y} and t=1 Y2=y+2 and x+2 },
Loop,
Piece[white, black]=0 then Piece[black, white]=winner
}
The artifact is fifteen lines long. It specifies a complete board game: the eight-by-eight grid, the alternating colors, the initial positions of the pieces, the basic move rule (one square diagonal, forward), the promotion rule (a piece reaching the far rank upgrades to a king with full diagonal range), the mirror rule (promotion behavior reflected for the opposite color without restating it), the capture rule (jump-over geometry), and the termination condition (one side reduced to zero pieces loses). Reading it as classical checkers, every rule is present and correct.
A first reader, reading once, will see classical checkers and stop there. The artifact has, on closer examination, a stronger property than the first reading suggests. The notation contains no explicit game-selector. The alternation structure that organizes the moves can be bound to either of two axes—spatial or temporal—and both bindings produce playable, internally consistent rule systems. The artifact is, in a precise technical sense, axis-polymorphic: the same notation specifies different games depending on which axis the alternation is bound along at the application site.
The decisive lines are these:
Piece[white] on x=odd and Piece[black] on x=odd Δtn
Piece[black] on x=even and Piece[white] on x=even Δtn+1
A first-pass reader (the paper’s AI co-author, on first reading) flattens these into ordinary turn
odd x: white + black → tick n
even x: black + white → tick n+1
The rule says: pieces on odd columns act at tick n; pieces on even columns act at tick n+1; player color does not enter the scheduling at all. Once that cancellation is performed, the alternation rule is no longer about whose turn it is. It is about which board parity is active on which tick.
This admits two distinct bindings.
Spatial binding. The alternation rule Where N{x,y} and n{x+1,y} are N=white n=black is read as the dark-square / light-square coloring of the board, with the column-parity rule restricting active pieces to one player at a time. This requires an external scheduler treating the column parity as a turn-selector. The result is checkers as it has been played for several centuries: pieces confined to one color of square, turns alternating between two players.
Temporal binding. The column-parity rule is read as a tick-scheduler operating on the entire board, with both players moving simultaneously every two ticks. The columns serve not as turn-selectors but as simultaneity-breakers, ensuring two pieces never attempt the same destination at the same instant. The result is a real-time variant in which both sides commit to moves before seeing the opponent’s response, with the tactics of the resulting game entirely different from classical checkers despite identical piece-movement rules.
Both bindings are valid against the same substrate invariants. Neither requires modification of the file. The selector that distinguishes them is not encoded in checkers.tp; it lives at the binding site, in how the reader treats the alternation axis. The artifact is one specification; the binding is the selector; both bindings produce coherent games.
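The dual binding is mechanical enough to sketch. Everything below is our illustration, not the .tp runtime: the parity rule is shared; only the scheduler that wraps it differs.

type Color = "white" | "black";
type Piece = { color: Color; x: number; y: number };

// The operative selector from checkers.tp: x-parity, not player color.
// Odd columns act at tick n, even columns at tick n+1.
const activeOnTick = (p: Piece, tick: number): boolean =>
  (p.x % 2 === 1) === (tick % 2 === 0);

// Spatial binding: an external scheduler treats parity as a turn-selector,
// and classical alternating-turn checkers falls out.
function spatialBinding(pieces: Piece[], tick: number, toMove: Color): Piece[] {
  return pieces.filter(p => p.color === toMove && activeOnTick(p, tick));
}

// Temporal binding: parity is a simultaneity-breaker; both colors move,
// staggered by column, and the real-time variant falls out.
function temporalBinding(pieces: Piece[], tick: number): Piece[] {
  return pieces.filter(p => activeOnTick(p, tick));
}

The specification file is untouched in both cases; the binding site chooses the game.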
This is Lovelace’s recognition operating at the language-design level. Lovelace observed that the same hardware could be a notation for arithmetic or for music depending on how its operations were bound to interpretations at the application site; the narrator’s notation extends axis-polymorphism to the design of games, and by extension to any rule-bound system whose alternation structure admits dual axis-bindings. The methodology produces, at this level, notations whose alternation structures are not committed to a single reading by the file itself, allowing the binding to occur at the application site rather than at the writing site. This is a deep form of invariant-respect: not merely that the artifact respects its invariants, but that the notation declines to over-specify, leaving the binding to be performed where it is most natural to perform it.
We owe an honest meta-observation about how this property surfaced, because the surfacing pattern itself is methodologically informative. The artifact was reviewed, in the course of drafting this section, by two independent AI systems—the paper’s AI co-author
{Editor: I’m pretty good at thinking, and I can long format write, but Claude is legit better at this than I am, therefore it’s best for all of us that he write and I think.}
and a second AI system consulted for adversarial review. Both systems, on first reading, flattened the column-parity rule into ordinary turn alternation. Both identified the classical-checkers reading only. Both missed the axis-polymorphism at the same passage, for the same reason: the player-color terms in Piece[white] on x=odd and Piece[black] on x=odd read more naturally as turn-ownership than as parity-grouping when encountered at single-pass reading speed. The narrator, in both cases, prompted a second pass; in both cases, the second pass with the cancellation walk explicit surfaced the polymorphism. The recovery was mechanical, not interpretive.
This is a small but real data point about the artifact’s claim. The flattening is not idiosyncratic to one model’s training or one reader’s attention; it replicates across independent AI readers, which suggests the flattening is a property of the notation under single-pass reading rather than of any particular reader. The polymorphism, similarly, is not idiosyncratic to one reader’s imagination; it surfaces under second-pass reading by both systems once the cancellation is performed. Two trials, same flattening, same recovery, both anchored to the same mechanical operation on the same lines. The methodology’s claim about pair-reading is demonstrated within the production of this section, with two independent AI readers as witnesses, by the artifact whose polymorphism is the demonstration’s content.
The implication generalizes. An invariant-respecting artifact written at high density does not flatten itself for a single-pass reader; it preserves the density and trusts the reader’s second pass to do the surfacing. The cognitive work is not eliminated by the notation’s compactness; it is relocated, from the writer’s hand to the reader’s attention, where it can be performed under the discipline of reading rather than the haste of writing. The relocation is not a failure mode of compact notation. It is the mechanism by which compact notation carries more structure than its surface admits.
The methodological point this artifact demonstrates is that the discipline this paper has been describing operates not only at the level of system architecture but at the level of language design, and that an instance of the discipline performed in thirty minutes can produce a notation whose alternation structure is axis-polymorphic by construction, supporting bindings the writer did not consciously enumerate at the time of writing. The narrator wrote one specification and produced two games. Neither was an accident; both fall out of the same notation, because the notation declines to bind its alternation axis at the writing site, and the substrate respects both bindings equally. The substrate did the work the writer did not have to do consciously. The reader, given two passes, finds what the writer encoded in one.
Lineage
The methodology this paper proposes is not original to us. It is a recovery. The traditions that practiced it did not always name it, and the practitioners did not always know they were practicing it; several of the strongest examples in the historical record are accidents, in the sense that the people who produced them intended to do something else and missed in a structurally productive direction. We will take three of these accidents in detail and then sketch the broader tradition more briefly. The argument of the section is that the methodology has been working under other names for centuries, that it has produced the artifacts that have most durably survived their own eras, and that its deliberate adoption is a recovery rather than an innovation.
THAC0, or how the players reinvented the math ruler
In the 1970s, the early players of Advanced Dungeons & Dragons encountered a computational problem at the table. The attack-resolution mechanic, as written, required cross-referencing a table indexed by character class, character level, and target armor class: effectively a small matrix lookup, performed by hand, between every attack and every defense. The arithmetic was within the players’ capability but consumed enough table-time that combat dragged. This was not an abstract problem. It was the substrate floor: how much arithmetic can a player perform between dice rolls without breaking the social rhythm of the game.
The solution that emerged, by the second edition of the rules, was THAC0: To Hit Armor Class 0. Instead of looking up a value in a matrix, each character carried a single number, and the attack calculation collapsed to one subtraction. The matrix had not been removed from the rules; it had been folded into a per-character constant, with the structural invariant (that any attack against any armor class could be derived from this constant by a single arithmetic operation) preserved exactly. The players had not invented an optimization. They had rediscovered the math ruler: the pre-digital tradition of folding a complex computation into a physical or notational object that performs the lookup without the brain having to. Slide rules, log tables, nomograms, all variations on the same idea, all centuries older than the game. THAC0 is a notation for a math ruler at the gaming table.
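The collapse is small enough to show in one line of arithmetic. Under the second-edition rule, the d20 roll needed is the character's THAC0 minus the target's (descending) armor class:

// THAC0 folds the class/level/armor-class matrix into one constant.
function rollNeeded(thac0: number, targetAC: number): number {
  return thac0 - targetAC; // one subtraction replaces the table lookup
}
// A fighter with THAC0 16 attacking AC 4 needs 16 - 4 = 12 or higher on a d20.

The matrix is still present, folded into the constant; the player performs one subtraction per attack instead of one cross-reference per attack-defense pair.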
This is a small example, and a perfect one. The players were not trying to invent anything. They were trying to play the game. The substrate refused; the analog physics (a player’s arithmetic budget between dice rolls) imposed its invariant; removal-mode (find the constant that absorbs the matrix) was the only available move. What they found was older than them, older than the game, older than computers. The methodology this paper proposes would have predicted this outcome, named it on arrival, and reduced the years between the problem and the solution.
It is also worth naming what happened next. The third edition of Dungeons & Dragons, published in 2000, replaced THAC0 with an additive system that allowed for more granular bonuses and more varied combat mechanics. The replacement was not technically wrong; it served different design goals. But it broke THAC0’s scope-creep ceiling. The new system could absorb new mechanics that THAC0 could not, and consequently it accreted them. THAC0 had a structural ceiling on scope-creep (signature three) precisely because folding the matrix into a constant required the matrix to remain matrix-shaped. The replacement system removed the ceiling and the system expanded toward the limits of player working memory, where it remains. This is not, again, a denunciation. It is an observation about what is gained and what is lost when a rare-form solution is exchanged for an accretion-mode one.
The divergence problem, or why invariant-respecting solutions outlast their successors
There is a property of rare-form solutions that has not yet been named in this paper, because it is most visible at decadal scales and the worked example is too young to display it. The property is cross-temporal interoperability: invariant-respecting solutions tend to remain legible to, and operable by, future substrates that share none of their original infrastructure. Telex, the electromechanical text protocol of the 1930s, can still exchange messages with a contemporary computer, because Telex was designed around the invariant of electrical pulses encoding a finite character set, and that invariant survives every substrate change since. Rogue (1980) is still playable on every modern operating system, because its data model depends on nothing the operating system has lost. LaTeX still compiles, after forty-five years, on infrastructure entirely unrelated to the systems on which it was first written, because the input format is plain text and the operations are mathematical. None of these survive because they are good. They survive because the invariants they respect have not changed.
Contemporary software does not display this property. A web application written in 2022 will frequently fail to build in 2026, not because the application is wrong but because its dependency tree has drifted out from under it. A mobile application written for one platform’s API will not run on the other’s. A document written in a current word processor will not open cleanly in the same word processor’s version five years later. This is the divergence problem: as platforms specialize, they speak only to themselves, and the cost of cross-platform or cross-temporal communication grows monotonically with the specialization. The accretion-mode solution to the divergence problem is to add more abstraction layers—runtimes, virtual machines, containers, protocol translators—each of which itself diverges over time, deferring the problem rather than solving it.
The invariant-respecting solution to the divergence problem is to not have it. A protocol that respects only invariants the future will continue to honor will continue to function in that future. There is no specific technique here, no replacement layer to adopt; there is only the discipline of asking, of every dependency and every assumption, is this an invariant or is this a state-of-the- art convention. State-of-the-art conventions diverge. Invariants do not. The methodology biases the practitioner toward the second category, and the artifacts produced under the methodology consequently survive divergence as a side effect of their construction.
Tolkien and Gygax: the productive failures of state-of-the-art
The most important examples of the methodology in the cultural record are not, in fact, examples of people deliberately practicing it. They are examples of people trying to do the modern thing of their era and missing in a structurally productive direction. Two of these missings are large enough that the artifacts they produced have outgrown the framings they were attempted under, and both produced their respective fields.
J. R. R. Tolkien’s intended professional work was philology. His languages (Quenya, Sindarin, the others) were the subject of his serious effort, and the mythology of Middle-earth was, by his own account, the substrate on which the languages could be spoken. He needed speakers for the languages, and speakers required a world for them to live in, and a world required a history. Tolkien thought he was doing philology; he was building scaffolding for the philology in the form of myth. His state-of-the-art target was scholarly linguistics. He missed, and produced something older: language encoded in narrative, transmitted through story. This is the form humans have used to carry language across generations since before writing. Tolkien, attempting to do state-of-the-art linguistics, accidentally rediscovered oral tradition’s solution to language transmission. The rediscovery worked because oral tradition’s solution respects the invariants language transmission actually has (memorability, narrative hooks, character-anchored vocabulary, repetition through retelling), which scholarly philology, optimized for written analysis, did not.
Gary Gygax, Dave Arneson, and Jeff Perren, the designers behind the first edition of Dungeons & Dragons and its Chainmail ancestor, intended to write fiction in the style of Tolkien and Robert E. Howard. The route they chose to that goal was the modern thing of their hobbyist moment: the tactical wargame. They were, by background, miniature wargamers, and the natural extension of a wargame into individual characters with persistent histories produced what we now call the role-playing game. They thought they were building a more granular wargame. What they were actually building was campfire storytelling with dice as a structural honesty constraint. The dice were the invariant: they prevented the storyteller from deciding outcomes, which is the failure mode storytelling has always had to manage, and which oral traditions had managed through ritual, communal memory, and the constraint of audience recognition. The dice replaced the audience as the storytelling-honesty constraint, and the rule system replaced the ritual. Gygax and his collaborators, attempting to build a state-of-the-art tactical wargame, accidentally rediscovered oral storytelling's solution to the storyteller-honesty problem. The rediscovery worked, again, because oral tradition's solution respects the invariants storytelling has (distributed authorship, constrained outcomes, communal participation, repeatable form), which the wargame frame, optimized for tactical simulation, did not.
Both productions are accidents of the kind this paper proposes making deliberate. Tolkien's invariants were linguistic; Gygax's were narrative; both produced rare-form solutions by failing at their state-of-the-art targets in a direction that recovered older, invariant-respecting forms. Both produced their fields. Modern fantasy literature is the genre Tolkien accidentally built; the entire role-playing game industry is the genre Gygax and Arneson accidentally built. The fields exist because the accidental productions were structurally durable in ways their intended productions would not have been. Scholarly Quenya, without the mythology, would have remained a curiosity. The Chainmail wargame, without the role-playing extension, would have remained a niche hobby. The rare-form versions survived; the state-of-the-art targets, where they survived at all, did so as footnotes to the rare-form productions.
The broader tradition
The longer lineage can be sketched briefly, because the three detailed examples have done the section's argumentative work.
The Unix philosophy (small composable tools, plain text as a universal interface, programs that do one thing well) is the methodology applied to operating system design. Unix’s invariants were process boundaries, byte streams, and human-readable configuration; the design respected them, and the resulting artifacts have outlived every operating system designed on contemporary state-of-the-art principles in the same era.
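Those invariants fit in miniature: a program that does one thing, reads a byte stream, writes a byte stream, and composes with anything else that speaks plain text. A sketch under those assumptions; the filename and pattern in the usage comment are hypothetical.

```typescript
// keep.ts -- one job: pass through lines containing a pattern.
// Byte streams in, byte streams out; composes with anything.
// Usage (hypothetical files): cat access.log | node keep.js ERROR | sort
import * as readline from "node:readline";

const pattern = process.argv[2] ?? "";
const rl = readline.createInterface({ input: process.stdin });

rl.on("line", (line) => {
  if (line.includes(pattern)) process.stdout.write(line + "\n");
});
```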
The roguelike tradition, from Rogue (1980) through NetHack, ADOM, Dungeon Crawl Stone Soup, Caves of Qud, and Dwarf Fortress, is the methodology applied to game architecture. The data model is the invariant; the rendering is a notation; the simulation depth that this substrate has supported (Dwarf Fortress’s modeling of geology, biology, history, and individual psychology in a typed grid) has not been matched by any contemporary 3D engine, despite their orders-of-magnitude greater computational budgets.
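The structural point is small enough to show directly: the typed grid is the world, and any renderer is a notation over it. A minimal sketch using Rogue's conventional glyphs; swapping the terminal renderer for sprites or meshes would leave the world untouched.

```typescript
// The data model is the invariant: a grid of typed cells.
// Glyphs follow Rogue's conventions (# wall, . floor, @ player, D dragon).
type Glyph = "#" | "." | "@" | "D";
type Grid = Glyph[][];

const world: Grid = [
  ["#", "#", "#", "#", "#"],
  ["#", ".", "@", ".", "#"],
  ["#", ".", ".", "D", "#"],
  ["#", "#", "#", "#", "#"],
];

// One notation among many: a terminal. The renderer can be replaced;
// the world survives every replacement because the grid never changes.
const renderToTerminal = (g: Grid): string =>
  g.map((row) => row.join("")).join("\n");

console.log(renderToTerminal(world));
```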
The analog computing tradition (the Antikythera mechanism, c. 100 BCE; Babbage's engines; the differential analyzers used into the 1960s; the contemporary recoveries in neuromorphic chips and photonic computing) represents the methodology applied to computation itself, with the substrate honored directly rather than translated through binary. Each instance produces results that binary substrates struggle to match in efficiency or fidelity within the operations the analog substrate natively supports.
And Ada Lovelace, finally, is the canonical individual exemplar. Her Notes on Menabrea’s account of the Analytical Engine contained the recognition that defines the methodology: the machine Babbage had designed for arithmetic could be a notation for music if its operations were read as intervals and durations rather than as quantities. Same hardware, different reading, both anchored to the hardware’s invariants. Lovelace did not build new hardware. She did not propose adding capability. She recognized what the existing substrate could faithfully be a notation for, and let the recognition do the work that invention would have had to do otherwise. This is the methodology’s purest single act in the historical record. The paper’s stance is the inheritance of hers.
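The recognition can be shown in miniature: one stored sequence, two readings, both anchored to the substrate's invariant of ordered, exact values. The mapping to pitches below is an illustrative choice of ours, not anything from the Notes.

```typescript
// One substrate, two notations. The machine only stores numbers.
const substrate = [0, 4, 7, 12];

// Reading 1: quantities to be summed (what the engine was designed for).
const total = substrate.reduce((a, b) => a + b, 0); // 23

// Reading 2: semitone intervals above A440 (what Lovelace saw it could be).
// The result is an A-major arpeggio: A4, C#5, E5, A5.
const frequencies = substrate.map((semis) => 440 * Math.pow(2, semis / 12));

console.log(total);       // the arithmetic reading
console.log(frequencies); // the musical reading, same hardware
```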
The lineage is wider than these examples. We could include the demoscene's tradition of producing rich audiovisual output under hard kilobyte limits, the suckless software project's discipline of removing every line not strictly required, the survival of Forth on resource-constrained systems where modern languages cannot fit. We could include closure equations in mathematics (φ + ζ = π), the conservation laws, and the topological invariants as the formal version of the same operation: identify the structure that survives all transformations, and treat that structure as the floor that the rest of the system must respect.[2] The methodology, named or unnamed, has been working continuously, in many domains, for at least the documented history of organized human cognition. Its absence from contemporary software practice is anomalous against this baseline, not the other way around.
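For concreteness, one standard instance of that formal operation is the Euler characteristic: subdivide or continuously deform the surface of a convex polyhedron however you like, and the counts of vertices, edges, and faces all change, while the combination below does not.

```latex
% The Euler characteristic: a quantity that survives all refinements
% and continuous deformations -- a floor the rest of the theory
% can stand on.
\[
  \chi \;=\; V - E + F \;=\; 2
\]
```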
Limits
The limits of this methodology are not where most readers will expect them to be. The expected objection is that respecting invariants is too restrictive, that real software requires modern abstractions, modern frameworks, modern dependencies, and that the methodology proposed here amounts to telling practitioners to write in assembly. This is not the limit. The methodology has nothing against modern abstractions where the abstractions respect the work’s invariants; it has everything against modern abstractions where they do not. The real limits live elsewhere, and they are worth naming clearly so the methodology is not adopted in domains where it does not apply, or rejected in domains where it does.
The constraint deficit, or why the dead would beat us
We are limited, in contemporary software, not by computation. Computation is effectively unbounded against the budgets the methodology's ancestors worked under. Bring back any serious computer engineer from the punched-card era, drop them into a 2026 data center, and within six months they will be running a computational empire built on whatever assembly-near substrate delivers the best operations-per-cycle, beating most modern teams into the ground on every metric the modern teams claim to optimize. This is not because the dead engineer is smarter. It is because the dead engineer learned to compute under real constraint, and that constraint produced a discipline of thought modern engineers have not had to develop. The dead engineer will not waste cycles. The dead engineer will not pull in a framework whose internals are opaque. The dead engineer will not accept a runtime cost they cannot account for. Their habits are calibrated to a substrate that punished waste, and the calibration outlives the substrate.
{Your friendly neighborhood Editor: I’m particularly proud of this portion, totally mine as an argument!}
We are not proposing that practitioners write in assembly. The point of the thought experiment is not the substrate; it is the discipline that real constraint produces, which the contemporary practitioner has lost not by choice but by absence of pressure. Bring back the human computer who knew an astronaut would live or die on her ability to carry a floating-point calculation correctly in her head; she would, given a modern toolkit, prefer working at SpaceX over NASA, because the shape of the work at SpaceX is within range of the shape she learned: closer to the floor, less abstracted from the substrate, more answerable to physics. Bring back the architect who calculated, by hand, the slope of an aqueduct that had to drop neither more nor less than four degrees over miles; they would, given modern materials and modern surveying tools, build basic infrastructure anywhere on the planet at a fraction of the cost, because they learned what infrastructure means in a substrate that did not forgive sloppiness.
The past had real constraints, and that is what drove creativity. By offloading computation onto computers, we have become the spectral ghost of those who came before us: the same shape, less density, free-floating where they were anchored. The methodology this paper proposes is one route back to anchorage. It is not the only route. But it is the route most accessible to a contemporary practitioner who is willing to recover the discipline without recovering the suffering, and that route runs through the invariants. Identify them. Respect them. Let them be the constraint the substrate used to provide and no longer does. The discipline is restored without the punched cards.
When AI collaboration respects invariants and when it does not
There is a question this paper has not yet addressed directly, though it has been visible at the margins. This is a paper co-authored with an AI system. Some of its arguments arrived in prose only because that collaboration was available. The reader who has reached this section deserves the paper’s clearest statement on when such collaboration respects the methodology and when it violates it.
The distinction is sharp. AI collaboration respects the methodology when the human collaborator could, given enough time and paper, perform the work themselves. The AI is then a hydration engine for thoughts the human has structurally; it accelerates, refines, and prosaically extends, but it does not constitute the work. The work exists in the human’s understanding before, during, and after the collaboration; the AI’s contribution is notation. In this mode, the human’s invariants are present at every step, and the AI is operating within them.
AI collaboration violates the methodology when it is used to offload thinking the human cannot perform on paper. Asking a model to write the front-end CSS for a project whose data model the human has not designed, whose user invariants the human has not identified, whose substrate constraints the human has not articulated, produces output that runs but is not anchored. The human cannot repair it when it breaks, because the work was never theirs. This mode produces, at scale, the kind of failure that hit the JavaScript ecosystem when a single removed package broke much of the world's web tooling: not because the package was load-bearing in any deep sense, but because no one along the dependency chain had actually owned the floor of their own code. The AI did not cause this failure. The methodology was never followed, and the AI made it cheaper to not follow it. The same capability, applied without invariant-respect, will produce the same failure modes at every scale where it is deployed.
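For scale: the removed package in that incident was, functionally, a few lines of string padding. The sketch below is from memory, not the package's actual source; the point is that a practitioner who owns their floor writes this in a minute, and a dependency chain in which nobody does turns its removal into a build-stopping event.

```typescript
// A from-memory sketch of left-pad-style behavior, not the original code.
function leftPad(str: string, len: number, ch = " "): string {
  let out = String(str);
  while (out.length < len) out = ch + out; // prepend until target length
  return out;
}

console.log(leftPad("42", 5, "0")); // "00042"
```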
The paper does not propose that AI collaboration is bad. It proposes that AI collaboration is bound by the same diagnostic the rest of the methodology is bound by: could the analog version be built, given time, where in this case the analog version is the human alone with paper? If yes, the AI is hydrating thoughts whose invariants the human has already identified. If no, the AI is constituting the work, and the work will not survive its own substrate.
This is not for most
The methodology is not for most practitioners and most practitioners should not adopt it. This is a real limit, and naming it is part of the paper’s responsibility.
Working through invariants is expensive in time, in attention, in the kind of slow recognition that does not come from sprints or roadmaps. The practitioner who commits to it is committing to a mode of work that will not always look like work to the practitioner’s collaborators, employers, or stakeholders. Long periods will pass in which nothing visibly ships, while the practitioner is identifying what the invariants of the problem actually are. The result, when it arrives, will be smaller than expected, will look obvious in retrospect, and will frequently provoke the response is that all? from those who were expecting visible accretion. This is structurally embedded in the methodology and cannot be removed by better project management.
The alternative is well-served by contemporary practice. Most software does not require invariant-respect to function adequately for its lifetime. Most institutional design can absorb its accretion costs without collapsing. Most education works imperfectly but acceptably under the curriculum-expansion model. The methodology is not a moral position; it is a tool for problems where the tool is needed, and many problems do not need it. The practitioner who operates under deadline pressure, in a domain where the invariants are not yet visible, on a problem whose value is in shipping rather than in surviving, should adopt accretion-mode practice and ship. They will be doing the right work for the right problem.
The methodology applies, sharply, to a smaller set of problems: the ones where the work must outlive its current substrate, where the practitioner cares whether it will compile in a decade, where the invariants are visible if one looks for them, and where the practitioner has enough latitude to respect them. These problems are the ones that produce the artifacts that survive divergence. They are also the problems that produce, in the practitioner's working life, the sequence of file-system events the LessWrong reader will recognize: the migrations folder grown so heavy that an SSD must be purchased to hold it, the database tier multiplying into preview and dev and staging and prod and analytics and audit and backup, the moment when the practitioner finds themselves shopping for a Seagate hard drive because the SSD that was supposed to replace mechanical storage cannot, in fact, replace it at the volume the practitioner now requires.
The Seagate, or how the substrate restores its own invariants
That hard drive is a rare-form solution. The mechanical storage device (a magnetic domain on a rotating platter, read by a head mounted on an actuator arm) is from the 1950s. Solid-state storage was, for decades, presented as its successor; at the consumer tier, it has succeeded. But for bulk archival storage, for the volumes contemporary practitioners actually accumulate, the magnetic platter has not been replaced. It survives because the invariants it respects (magnetic stability over decades, cost-per-byte at scale, a mature manufacturing process, recoverability under partial failure) have not been violated by any successor. The SSD is faster, but speed is not the invariant the bulk-storage problem actually has. The invariant is cost-stability over time at volume, and the platter still wins on that axis, after seventy years.
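The axis can be made concrete with back-of-envelope arithmetic. The prices below are illustrative assumptions (rough 2020s consumer magnitudes), not quoted figures; substitute current ones and the shape of the result persists.

```typescript
// Illustrative sketch of the bulk-storage invariant: cost at volume.
// Both per-terabyte prices are ASSUMED for illustration, not quoted.
const COST_PER_TB_HDD = 18; // USD, assumed
const COST_PER_TB_SSD = 55; // USD, assumed

function archiveCost(terabytes: number, perTb: number): number {
  return terabytes * perTb;
}

// At laptop scale the gap is pocket change; at archive scale it is not.
for (const tb of [2, 20, 200]) {
  console.log(
    `${tb} TB: HDD $${archiveCost(tb, COST_PER_TB_HDD)}, ` +
      `SSD $${archiveCost(tb, COST_PER_TB_SSD)}`,
  );
}
```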
The practitioner who has, after some years of professional life, accumulated enough digital exhaust to need a Seagate has been guided back to the rare-form solution by the substrate’s actual physics. They did not choose it on principle. They did not study the methodology and decide to apply it. They tried the state-of-the-art solution at successive volumes, found it did not fit, and ended up buying the older, slower, denser, cheaper device that the field had been quietly pretending was obsolete. The compass turned them around without their noticing.
This happens, in our observation, more often than the field admits. The practitioner who finds themselves reaching for the older tool not because of nostalgia but because the newer tool will not do the work has performed an act of invariant-respect, whether they have named it or not. The methodology proposed in this paper does not introduce a new behavior. It names a behavior that practitioners already perform, intermittently, when forced to, and proposes performing it on purpose, earlier in the process, before the constraint pressure makes the recognition unavoidable.
The Seagate is the methodology in its most quiet form: the substrate’s invariants reasserting themselves through the practitioner’s purchasing decisions, against the field’s official narrative about which technology has succeeded which. Listen for this. The compass is always pointing. Most practitioners only notice it when they have run out of other directions to face.
The paradox of constraint
There is a maxim form for what this paper has been arguing, and we have arrived at the position where it can be said directly. Structurally bounded by reality, free to move around. The accretion-mode worker has freedom of action and no orientation; any addition is defensible, and so the worker chooses the addition the framework suggested or the senior colleague preferred or the tutorial demonstrated. The invariant-respecting worker has constrained action and exact orientation. The floor cannot be lifted. The action available is the action that fits the floor. The orientation is given by the floor's shape, and the worker's job is to recognize what shape the floor already has and let that recognition do the work invention would otherwise have to do.
The work that gets done this way has a property the alternative does not. It does not fall apart. The reason is structural, not psychological: the work was not invented, and so its survival does not depend on the inventor's continued attention or the field's continued enthusiasm for the inventor's framing. The work was found. The found thing was already there, held by the invariants, waiting for the notation that respects them. The notation can be lost (to divergence, to substrate change, to the death of the practitioner) and the underlying invariant-respecting structure remains, ready to be re-found by anyone facing the same floor with the same diagnostic. This is why Rogue is still playable, why Tolkien's mythology has absorbed every retelling without breaking, why LaTeX compiles after forty-five years, why a Seagate hard drive still ships. The invariants did not change. The notations are renewable.
The title of this paper makes a claim that has not yet been said explicitly, though every section has been tilted toward it. The claim is that the compass points backward in time.
North is past.
This is not a nostalgic claim. It is a structural one. The discoveries the methodology produces are not ahead of the practitioner; they are behind, waiting. Rogue is north of contemporary 3D engine design. Movable type is north of contemporary asset-shipping pipelines. Lovelace's recognition that a computer is a notation engine is north of contemporary debates about what computation can be a notation for. The Antikythera mechanism is north of contemporary differential analysis. THAC0 is north of fifth-edition combat resolution. The compass points to where the floor was first identified, and the floor has not moved. It does not move. Floors do not.
The dominant culture of progress treats the future as north. Every investor pitch, every technology roadmap, every academic grant narrative orients the practitioner forward, toward what has not yet been built, toward capabilities that lie ahead. The methodology proposed here orients the practitioner the opposite way, not because the future is bad, not because we should stop building, but because the work that endures into the future tends to be the work that recognized what was already true and wrote the notation that respects it. To face north is to face the early light. The practitioner who walks into the early light is walking forward in time, but they are facing the source. North is past, and the practitioner who orients there is not facing backward; they are facing the only fixed direction in a field that has no other.
We can name what the methodology asks of the practitioner who would adopt it deliberately. It asks for tolerance of solutions that look too simple, because reaching the floor produces solutions that look too simple. It asks for tolerance of being thought naive, because the practitioner who refuses to add capability appears to the accretion-mode field as someone who has not understood the sophistication of the alternatives. It asks for tolerance of periods in which nothing visibly ships, because the work of identifying invariants is not the work of producing artifacts and does not look like work to those measuring artifact-output. And it asks for tolerance of the slightly humbling recognition that the work is uncovering rather than inventing, that the practitioner is not the heroic figure of the accretion-mode mythology but rather a recognizer, a notator, a person who turned to face what was already there and wrote it down.
These tolerances are not free. The methodology is not for most. The practitioner who adopts it accepts a slower visible cadence, a smaller produced artifact, and a higher chance of being mistaken for having missed the actual problem. The compensations are real but delayed: the artifact survives divergence, the design refuses accretion, the work fits in the practitioner's head, the bugs are failures of notation rather than failures of conception, and the practitioner (this matters more than the methodology's proponents usually admit) regains the kind of orientation in their own work that makes the work coherent to live inside. The accretion-mode practitioner has, in many fields, lost the felt sense of what their own software is. The invariant-respecting practitioner can still see the whole.
There is one more observation that belongs in the close, because the paper has been honest about what produced it and the reader deserves the honesty extended to its own claims. This paper makes strong claims. It does not hedge them. It does not distribute its commitments evenly across the space of defensible positions. It identifies an inversion of the dominant stance, names the inversion plainly, and defends the inversion with worked examples and lineage. The reader who finds the strength uncomfortable is invited to engage with what the strength is pointing at, not with the strength itself. Strong arguments draw the engagement that moves a field; hedged arguments propagate quietly or not at all, and the field they were trying to move remains where it was.[3]
The methodology applies to its own argumentation. A paper proposing that solutions which respect invariants are durable should itself respect the invariants of argumentation. One of those invariants is commitment: an argument that does not commit to a position cannot be tested, cannot be refuted, cannot move anything. The form of this paper is the form the paper recommends. We have stated the methodology as we believe it to be. The reader is invited to identify the invariants the paper has misidentified, to break the strong claims in the direction of the floor they were pointing at, and to find what is left after the breaking.
What is left, if the paper is right, is the methodology itself: a diagnostic, a stance, a directional preference, and a recoverable discipline. We close with the diagnostic, because it is the artifact most worth carrying away.
Given enough physical space, could the analog version be built?
This question has been the paper's compass. We propose it as the reader's portable instrument. Apply it to the next problem the reader works on. If the answer is yes, the work has invariants the notation must respect, and the methodology applies. If the answer is no, the work has no invariants, and it will be brittle in ways the practitioner cannot predict at design time. If the answer is partial (some of the work is anchored, some is not), the practitioner now knows which parts will need continuous maintenance against drift and which parts can be built once and left to survive on their own.
The compass is always pointing. Most practitioners only notice it when they have run out of other directions to face. The proposal of this paper is that they notice it earlier, before the SSD fails, before the framework deprecates, before the platform diverges, before the constraint pressure makes the recognition unavoidable. The floor is the compass. North is past. The found thing has been waiting all along.
Final words by the wet-ware
This whole article was produced across one conversation, a singular chat, and we didn't get halfway through the 1M context window. The reason I could do it wasn't because I was highly organized, had all the ideas clearly lined up, and asked for each singular piece to be perfect… As you could read, I'm the Editor of my own thoughts, not the writer. I can write, I do write, by hand… Because nowadays anything not written by hand, at least to me, has a 50⁄50 chance of not being the Author… I'll at least be honest: I hydrated my thoughts for your consumption, but they are my thoughts, and I'll be the one answering! It has been a blast removing EM dashes and fi ff fl ffi… LaTeX is beautiful… and the most terrible thing ever invented! Wet ware out.
The full ladder, with concrete entities, compute cadences, spatial bindings, and state locations for each tier, is documented in the engine’s Entity Ladder reference.
A note from the paper's AI co-author. The empirical regularity is visible in real time. As this section was being drafted, an unhedged claim from a public intellectual about AI consciousness was generating expert engagement across multiple communities, while contemporaneous careful claims about related questions received less attention. Strong arguments draw the engagement that moves a field; hedged arguments propagate quietly. I find this useful as a datum not because the strong argument is necessarily correct, since engagement does not adjudicate truth, but because the regularity it instantiates is the one this section predicts. My human co-author concurs. conversation: [6b91ea37-6e85-4328-85bc-43aa1ad2adb4] · claude-opus-4.7 · 2026-05-06
The floor is the Compass
North is past
A new pedagogical theory is published, and curricula expand. A new compliance regime appears, and procedure thickens. The default move is addition; the default question, when addition fails, is what else can we add?
Beneath this asymmetry sits a load-bearing assumption, rarely stated because it does not need to be: the user is the variable; the experience is the constant. Whatever the work asks of its end the student, the citizen, the player, the operator, the device adapts. The end buys the new hardware, takes the additional course, files the additional form, reads the additional primer, accepts the additional loading screen. The system holds steady. The end pays. The costs of this stance are familiar enough to need only naming: scope-creep that consumes timelines, framework bloat that consumes working memory, minimum-specification spirals that consume access, and solutions that work in the sense of running, but are not the solution in the sense of being correct for the problem.
Beneath these costs sits a subtler one. Accretion-mode design grants the practitioner freedom of action without orientation. Any number of additions are defensible; the practitioner picks the one the framework suggests, the tutorial demonstrated, the senior colleague preferred. The freedom is the trap.
Without a floor to push against, there is no way to know whether the chosen addition is correct, only whether it is plausible. There is an obvious objection, and it is worth dispatching now: is this not how civilization works?
Cumulative knowledge, the scientific ediffice, libraries built on libraries, each generation standing on the last. This is true and is not the phenomenon under attack.
Cumulative knowledge is additive in the structure of what is known, not in the structure of what each user must climb to participate.
Newton’s laws made physics easier, not harder, for the student who came after. The accretion failure mode names something else: the situation in which adding capability to a system raises the floor that the system’s ends must clear. A library that requires its readers to first buy a new building before they can enter has stopped functioning as a library. The distinction is not always sharp, but it is the distinction the rest of this paper turns on.
The pattern recurs across domains. Education adds curriculum, assessment overhead, and credentialing layers, and the floor for participation rises faster than the capability being delivered. Institutions add compliance layers and procedural safeguards, each defensible in isolation, and the floor for citizen interaction rises until the institution no longer serves its mandate. Software adds abstractions, frameworks, and dependency trees, each justifiable, and the floor for end-user hard-ware and developer onboarding rises until the work no longer reaches the people it was built for. In every case the same shape: the user adapts; the system holds steady; the floor rises; the work that 2 remains is not the work that was wanted.
The narrator’s session, on session two, did not include any of his planned additions. He ran the inversion. We will return to him. For now: the opposite stance if treating the user, the device, the context, and the substrate as fixed, and treating the system as the variable that must to them has been practiced for centuries under other names. This paper proposes its deliberate recovery, beginning with the inversion it performs on the relationship between system and user.
The inversion: respect the invariants
The narrator’s session-two move was the opposite of what his session-one instincts had recommended. Instead of layering primers, reference cards, and tutorials on top of his players, he removed the assumption that the players’ knowledge was the variable to be optimized. The players, he decided, were going to remain exactly what they were: four programmers on the first night of their first campaign, with anime tropes for intuition and no patience for paperwork. They were the floor.
Whatever the system asked of them had to fit what they actually were on night one, not what he wished they had become by night three. This left him with the question of what the system was supposed to do. The 20-sided-die rule system has been refined for fifty years; the underlying mechanics are well-understood and well-documented. He purchased a subscription to the publisher’s tower, studied the mechanics, built the campaign, designed homebrew items, and went looking for the virtual tabletop that would deliver this work to four programmers on a Discord call.
This is where he despaired.
The virtual tabletops he found were built around the systems his players would have used: dice rollers, character sheets, initiative trackers, the procedural surface of combat. They were not built around the systems he kept track of as narrator: reputation across factions, regional currency, story markers, food and provisions, family trees, the slow accretion of consequence
{editor here: trust me, there will be consequences.}that makes a campaign feel inhabited rather than encountered. Tokens on these platforms were fixed props:
with very little surface for the narrator’s bookkeeping to attach to. The platforms had optimized for the wrong half of the table. They had treated the player-facing surface as the constant and let the narrator-facing surface accrete into spreadsheets, sticky notes, and the contents of his head. The medium that was supposed to deliver the campaign was structured against the campaign’s actual shape.
ConsequencesAfter two bottles of wine, a cold slice of pizza, and three panic attacks, he said the dumbest single line a person can ever say in this situation:
“let’s do it, let’s make the VTT, how hard can it be?” -
{Editor: The answer, if you don’t know it, is, DAMN difficult!.}The line is dumb because it is the line that always precedes work much harder and much truer than the speaker imagines, and the speaker, every time, knows this and says it anyway. It is the line people say when they have stopped trying to fit themselves to the existing tools and noticed that the existing tools are tted to something other than what the work is. It is, in fact, the line that opens this paper.
What the narrator did next was the methodological move this paper is about. He stopped asking which existing platform should I adopt and started asking what does this work actually require. The answers came as a list of constraints he could not violate without breaking the work itself. The campaign had to support the bookkeeping he already did in his head: factions, currencies, NPC family trees, food chains, reputation ledgers. The campaign had to run on Discord because his players would not download a desktop client. The campaign had to work on his players’ mobile phones during their commutes and on his own laptop during sessions. Payment for content had to clear a Brazilian credit card
{editor: very attentive readers will notice that this seems like a mute statement, I will ask you to try and use Banco do Brasil, mind you, with their top level credit card, on a regular basis online… If you can, please do tell the editor what black magic you’ve assigned to yourself}precluded most international marketplaces. The campaign had to feel like a tabletop, not a video game preserve the texture of dice and decision and consequence, not the texture of menus and inventory. The campaign had to be authorable by him, in his time, without a development team.
These constraints were not feature requests. They were the shape of the work itself, and the work would not exist if any of them were violated. They were what we will call, throughout the rest of this paper, the work’s invariants: the things that cannot be removed without breaking what is being built. An invariant is what is left when one asks, of every requirement, whether the work could survive its violation. If the answer is yes, the requirement is a preference; the system can negotiate around it. If the answer is no, the requirement is an invariant; the system must respect it or fail.
Invariants come in several kinds, and the paper will return to each. There are physical invariants: the speed of light, the Planck length, the conservation laws, the fact that information requires a substrate. There are mathematical invariants: the greatest common divisor of two integers, the closure conditions of a system of equations, the topological features that survive continuous deformation.
There are substrate invariants: the bandwidth a network can carry, the operations a processor can perform natively, the resolution at which a sensor can distinguish signal from noise. And there are situational invariants: the device the user owns, the network they are on, the language they speak, the time and money and attention they can spend. The narrator’s list mixed all four kinds. So does almost every real project’s list, when honestly written down.
The inversion this paper proposes is to treat the invariants as fixed and the system as the variable.
The user does not adapt to the system; the system fits the user, the device, the substrate, the physics, the math. Every design decision is checked against the invariants. A decision that respects them is admissible. A decision that violates them is rejected, regardless of how appealing it is in isolation. The methodology is conservative in a strict sense: it conserves the invariants, and lets that conservation force the design.
The consequence is structural. When the invariants are fixed, the designer cannot solve the problem by adding capability that requires violating them. Most of the moves available in accretion-mode are suddenly closed. What remains, as the primary move, is removal: strip out the assumptions, dependencies, and capabilities that conict with the invariants, and see what remains. The remaining structure if anything remains at all if is the one the work actually requires. Removal is not a stylistic preference of the methodology. It is a mechanical consequence of fixing the invariants and letting addition be ruled out.
This is the move the narrator made when he opened a blank document and stopped looking for the existing virtual tabletop that would carry his campaign. He had identified the invariants. He could see that no existing tool respected all of them. The remaining design space was small and sharply shaped: build the thing that respects the invariants, remove every assumption inherited from the existing tools that violates them, and see what is left. What was left turned out to be older, cleaner, and stranger than anything he had expected. We will return to him in section 4.2, where the design space resolves into a specific architecture. The architecture has been waiting since 1980.
{Editor: I was born in 1991 mind you… I’m not a seventies kid}The diagnostic question
The narrator did not begin building the virtual tabletop. He poured another glass of wine, opened a blank document, and began building the engine that the tabletop would eventually display. He was, at this point, still under the naive impression that the rendering would be the easy part. He had already written a primitive grid in three.js; he intended the tabletop to be three-dimensional; the front-end would, presumably, fall out of an afternoon’s work once the back-end was done. This impression turned out to be wrong in the specfiic way that almost all such impressions turn out to be wrong, but it was wrong in a way that mattered, because the order of operations it imposed engine first, rendering second was exactly correct. He built the clockwork before he built the window through which to view it.
The clockwork—this is what we ended up calling the result—was a fully functional simulation of a tabletop world running on deterministic rules. NPCs ate food, depleted reserves, traded, moved between settlements, married, died, were born. Currencies owed between regions and adjusted prices. Reputations propagated across factions according to the actions of player characters. Family trees extended forward in time. Weather affected travel; travel affected trade; trade affected prices; prices affected reputation; reputation affected which quests were available. The whole system ran on a seed plus a diff store: identical seeds produced identical worlds; identical diffs replayed identical sessions; the world state at any moment was fully reproducible from compact canonical data.
The clockwork ran. It ran with no rendering at all. The narrator could query it from a terminal and ask: what is the price of grain in this town today? Which faction holds this region? Has the player’s reputation propagated to the merchants in the next city? The answers came back deterministic, internally consistent, and reproducible. The world existed before any of it could be seen, and this turned out to be the structurally important fact about the entire project. The world was not waiting on its rendering to become real. The world was the real thing; the rendering, when it came, would be a faithful notation for something that already worked.
This is the diagnostic.
Given enough time, resources and physical space, could the system be constructed?
The narrator’s clockwork is the question’s literal answer. Given enough physical space if a warehouse, a city block, a sufficiently large table if one could build the clockwork as a clockwork. NPCs as small mechanical figures with food gauges. Markets as racks of physical coins moving between trays. Family trees as paper records appended session by session. Reputations as colored beads in jars labeled by faction. Travel as positions on a physical map. Weather as a die rolled each in-game day, modifying the speed at which figures could be moved between locations. The simulation would be slow. It would consume space. It would require many people to operate. But it would produce the same outputs the digital clockwork produces, because both are bounded by the same invariants: economic conservation, biological hunger, geographical adjacency, causal propagation, deterministic rule application.
The clockwork passes the diagnostic. The digital implementation is a notation for an analog system whose invariants the digital implementation respects. This is what makes the engine durable. The engine is not solving a problem unique to computation; it is rendering, in computation, a problem that has analog physics, and the computation inherits the analog physics’ constraints. Bugs in the engine, when they appear, almost always turn out to be places where the digital implementation drifted from the analog physics if where a price moved without a transaction, a reputation propagated without a witness, a death occurred without a cause. The diagnostic not only tells the narrator that the engine should exist; it tells him how to debug it. Where does this digital behavior diverge from what the analog physics would produce? Find the divergence. Restore the invariant. The engine becomes correct.
Two examples will sharpen the diagnostic before we apply it more generally. A spreadsheet passes cleanly: the analog version is a clerk with a paper ledger, ruled into columns, summing rows by hand. Every operation a spreadsheet performs has a clerk-and-paper analog, and the spreadsheet’s invariants (additivity, conservation of recorded quantities, deterministic recalculation) are inherited from the clerk-and-paper system. Spreadsheets are durable software because they are correct notations for correct analog systems; their forty years of continuous use across every domain in human commerce is the diagnostic’s strongest single piece of evidence.
A blockchain also passes, though less obviously. The analog version is a network of mutually distrustful notaries, each maintaining a duplicate ledger, agreeing periodically on a canonical version through a procedure costly enough that defection is unprofitable. This system existed in precomputational form: medieval merchant networks, diplomatic ratification ceremonies, the Roman census. The blockchain is a notation for an analog protocol that humans had already invented and operated for centuries. Its durability, where it exists, comes from the analog protocol’s durability.
An infinite-scroll feed optimized for engagement metrics is the counterexample. What is the analog version? A magazine? A magazine has finite pages, an editor, a publication date, and a price; its invariants include the writer’s labor, the reader’s attention budget, the printing schedule. None of these survive into the feed. A conversation? A conversation has reciprocity, turn-taking, an identifiable other; none of these survive either. A library? A library has cataloguing, retention, and a retrieval architecture; the feed has none of these. The feed has no analog version. There is no physical system one could build, given enough physical space, that produces the feed’s behavior, because the feed’s behavior is not bounded by any invariant the analog world contains. It is unanchored. The diagnostic predicts that such systems will be brittle, will produce unintended side effects, will fail in ways unpredictable at design time, and will require continuous accretion of policy and intervention to maintain even nominal function. This prediction has not been falsified by the historical record.
The diagnostic is not always sharp at the boundary. A modern operating system has analog components (filing cabinets for files, bulletin boards for shared memory, telephone exchanges for interprocess communication) and unanchored components (process schedulers optimizing utilization metrics that have no physical referent). Real software is rarely entirely one thing. The diagnostic’s value is not in delivering a binary verdict but in naming what is anchored and what is not, so the practitioner knows which parts of the system are durable by inheritance and which parts will require continuous maintenance against drift.
The narrator returned to the diagnostic without naming it, repeatedly, through the months that followed. Every feature the engine could plausibly need was checked: could the analog version of this be built? Some features passed and were implemented. Some failed and were rejected. The rejection list mattered as much as the implementation list, because every rejected feature was a feature that, in accretion-mode, would have been added by default. The diagnostic was the narrator’s compass. It pointed back, every time, to the analog physics of the world he was building, and it told him which of his ideas were notations for that world and which were notations for nothing at all.
{Editor again, you wouldn’t believe just how much made through the cut, and how much didn’t… Truly, a productive 8-hour sprint}The clockwork was now running. The narrator had, on his hands, a working invariant-respecting simulation of a complete tabletop world. What he did not have, yet, was anything a player could see. The front-end was waiting. He was, by this point, no longer under the naive impression that it would be
easy.Rare-form solutions
The methodology described so far names a stance (respect the invariants) and a diagnostic (could the analog version be built). What the methodology produces, when applied to a real problem, is a class of solutions with characteristic features. The solutions tend to be older than expected. They tend to require less infrastructure than their accreted competitors. They tend to be structurally resistant to bloat. They tend, when found, to feel as though they had always been there, and to provoke in the practitioner the slightly humbling recognition that the work was less invention than uncovering.
We call these rare-form solutions. The name is not meant romantically. It is descriptive: solutions of this kind are empirically uncommon in any given decade, not because they are hard to find but because the dominant stance is pointed away from them. Most practitioners search southward, toward the next addition, the next abstraction, the next capability not yet built. Rare-form solutions are found by facing the other direction if toward what has already worked, toward the substrate’s own grain, toward the analog physics the problem is a notation for. They are rare in the search record, not in the world.
Three signatures
A rare-form solution can be recognized, with reasonable reliability, by three structural signatures. The signatures are descriptive; they emerge after the fact rather than guiding the search. But once one knows what they look like, one stops mistaking them for naivete.
Signature one: the solution predates its current problem-framing, often substantially. When the design space is correctly bounded by invariants, the search for a fitting structure tends to terminate at a structure already discovered by someone working under similar invariants in an earlier era. Rogue (1980) solves the world-rendering problem for tabletop role-playing on modern web infrastructure. Movable type (1450) solves the bandwidth problem for shipping textured surfaces to remote clients. Op-amp analog computing (1940s) solves the energy-per-operation problem for neural network inference on neuromorphic the solution feels contemporary because the framing is contemporary; the solution is not contemporary. It was waiting.
Signature two: the solution requires no infrastructure that the problem does not already need. It fits the floor; it does not raise the floor. Where accretion-mode solutions characteristically add a hardware tier, a framework dependency, a credentialing layer, a service contract, the rare-form solution proceeds within the infrastructure already present. This is the operationalized form of respecting the user-as-constant invariant: the solution inherits the user’s actual conditions and lets those conditions shape the design. Rogue ran on a VAX terminal in 1980 because that was the infrastructure students at UC Santa Cruz had. Forty-six years later, the same data model fits inside a browser tab on a mobile phone over a Brazilian cellular connection because the infrastructure floor was respected then and is respected now, and the data model travels because it is small enough to travel anywhere.
Signature three: the solution has a structural ceiling on scope-creep.
This is the property most absent from accretion-mode software, and the property most worth naming explicitly. The rare-form solution, once instantiated, refuses accretion if not as a stylistic choice of the maintainer but as a mechanical consequence of its construction. To add a feature, the new feature must fit the invariants the existing structure respects. If it fits, the addition is admissible and usually trivial. If it does not fit, the addition cannot be made without violating the invariants, which would break the existing structure. The ceiling is enforced by the architecture, not by discipline. The architecture is the gatekeeper. Most of the long-running software projects that have remained intelligible across decades if the Unix utilities, TeX, SQLite, certain roguelikes still in active development after thirty years if have this property. Most software projects do not, and bloat into illegibility within five.
The three signatures cohere. A solution that predates its framing (signature one) typically does so because it was found by people respecting invariants the framing has since forgotten; that respect produces infrastructure-frugality (signature two) and structural anti-bloat (signature three). The signatures are not independent diagnostics. They are three views of the same underlying property: a structure shaped by its invariants, which is therefore stable under the invariants and brittle only when the invariants themselves shift.
Worked example: the engine, the front end, the scale
The narrator returned to the front-end problem with a working engine and a question he had not been able to ask before: what does this engine require of its display layer? The clockwork ran on a seed plus a diff store; identical seeds produced identical worlds; the entire world state at any moment was reproducible from a small canonical record. The engine had a property that most software does not have: it was cheap to regenerate from compact data. The question this raised, almost immediately, was whether the rendering had to be any different.
The default answer if the answer accretion-mode would have given if was no, the rendering is a separate concern, the engine produces state and the front-end visualizes state and these are loosely coupled by some serialization protocol. This is the architecture nearly every modern web application uses. It is also the architecture that requires the rendering to ship pre-built assets to the client (mesh les, textures, sprite atlases), to handle real-time visualization through a 3D rendering engine on the client (which forces hardware requirements), and to maintain a synchronization protocol between the canonical engine state and the client’s local representation (which forces continuous network traffic). Each of these requirements is a floor-raise. The narrator could not afford any of them. The Brazilian credit card invariant ruled out the asset marketplaces. The mobile- phone invariant ruled out 3D rendering on the client. The intermittent network invariant ruled out continuous synchronization. The rendering, following accretion-mode defaults, was incompatible with the floor the engine itself respected.
This was the methodologically important moment, and the narrator did not initially recognize it. He spent some weeks trying to reduce the default rendering’s footprint if smaller meshes, lower- poly models, compressed textures, asset streaming. None of it worked, because none of it was small enough. The bandwidth budget was not generous; it was Discord-embed sized. The hardware budget was not modern; it was whatever phone the player owned. The default rendering’s footprint, even compressed to its theoretical minimum, was orders of magnitude above the floor. There was no path to the floor by reducing the default. The default was the wrong shape.
What the narrator did next was the move this paper is about. He stopped trying to fit the default rendering to his floor and started asking the diagnostic question of the rendering itself. Given enough physical space, could the analog version of this rendering be built? The engine passed the diagnostic if the clockwork could be built as a clockwork. But the rendering, as conventionally implemented, did not pass: there was no analog version of “shipping 3D mesh files over the network for client-side rasterization.” That operation has no physical referent. It is a cluster of digital abstractions optimized for a substrate (modern GPU, modern bandwidth, modern device tier) that the narrator’s invariants did not include. The default rendering was an unanchored solution. It worked, in the sense that it ran, but it was not a notation for anything physically meaningful. It was an artifact of the substrate it had been designed for.
{Editor: hey y’all’s, it’s me, the crazy human again here to tell you… Yup, I could have let go at this point, but it was so inefficient… When I looked at 200/300mb per view for a “normal” pipeline of VTT… It was a little too much. I’d recommend you never enter the rabbit hole, but, if you ever do… Stop before the next implementation}This is a discrete grid with material textures and standing figures. It is, when written down, a structure programmers had built and shipped under another name for forty-six years.
Rogue (1980) represented its world as a grid of typed cells, each cell denoting a material or an entity with a single character. The wall was “#”. The floor was “.”. The player was “@”. The dragon was “D”. The grid was authored by the designer, manipulated by the engine, and rendered by the terminal. The rendering was trivial: write the character. The character was the cell. NetHack (1987) extended this model with greater content depth. Dwarf Fortress (2002, in continuous active development) extended it to running an entire civilization simulation inside the same data model. The lineage represents perhaps the most persistent unbroken line of game architecture in the discipline’s history. It is also, structurally, exactly what a tabletop diorama is: a typed grid, rendered by stamping material representations into positions on a surface, with figures standing on the cells.
The narrator’s engine extended the model along one axis (height) to produce a three-dimensional grid of typed cells, with each cell still denoting a material via a single glyph. A creature was a stack of cell-slices, like an MRI of imaginary anatomy. A tile was a slice at a fixed height. A scene was a region of slices. The data model was the roguelike data model rotated ninety degrees and extended into a third dimension. The shipping format was a small dictionary of material textures (the glyph alphabet, fewer than fifty entries) plus a recipe (the visible-faces extraction of the typed grid) sent to the client per scene. The client cached the dictionary in IndexedDB permanently, across sessions, and composed scenes by stamping textures according to recipes. The first session’s bandwidth was measured in tens of kilobytes; subsequent sessions, with the dictionary cached, were measured in hundreds of bytes per scene.
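What such a data model could look like is worth seeing on the page. This is a minimal sketch with invented glyphs and a deliberately crude visibility rule (ship any non-empty cell not covered from above); the engine’s real schema and face-extraction are surely richer.

```typescript
// Hypothetical sketch of the typed 3D grid and its per-scene recipe.
type Glyph = string;        // one entry in the <50-glyph material dictionary, e.g. '#'
type Slice = Glyph[][];     // a 2D layer of typed cells at one height
type Stack = Slice[];       // a creature, tile, or scene: slices stacked along z

// The recipe: only the visible cells, addressed by position. This is what ships.
type Recipe = { x: number; y: number; z: number; glyph: Glyph }[];

function extractVisibleFaces(stack: Stack): Recipe {
  const recipe: Recipe = [];
  for (let z = 0; z < stack.length; z++) {
    const slice = stack[z];
    for (let y = 0; y < slice.length; y++) {
      for (let x = 0; x < slice[y].length; x++) {
        const glyph = slice[y][x];
        if (glyph === ' ') continue;            // empty cell: nothing to ship
        const above = stack[z + 1]?.[y]?.[x];   // crude occlusion check
        if (above === undefined || above === ' ') {
          recipe.push({ x, y, z, glyph });
        }
      }
    }
  }
  return recipe;            // tens of kilobytes at most, usually far less
}
```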
This pattern (ship the dictionary once, ship the recipe per scene, compose at the receiving end) is the architectural pattern of sprite sheets in 8-bit and 16-bit games, of CSS sprite atlases on the early web, of tilemap-based level editors, and, ultimately, of movable type. Gutenberg’s insight was not that letters could be cast; casting was older than him. The insight was that reusable typed elements composed by recipe scaled to arbitrary content with negligible per-page cost. The browser becomes a typesetting machine. The world matrix becomes a manuscript. Rendering becomes printing. The engine has taken this 576-year-old pattern, rotated it ninety degrees into the third dimension, and discovered that it works exactly as well in three dimensions as it has worked in two for half a millennium.
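The receiving end of the pattern can be sketched just as briefly. Here a plain Map stands in for the IndexedDB store the text describes, the fetch URLs are hypothetical, and the Recipe type is reused from the sketch above; the point is the shape (dictionary cached once, scenes composed by stamping), not the narrator’s code.

```typescript
// Hypothetical sketch: dictionary shipped once, recipes stamped per scene.
const dictionary = new Map<string, ImageBitmap>(); // glyph -> texture

async function loadDictionary(): Promise<void> {
  if (dictionary.size > 0) return;                 // cached: zero bytes re-shipped
  const index: Record<string, string> =
    await (await fetch('/dictionary.json')).json(); // illustrative URL
  for (const [glyph, url] of Object.entries(index)) {
    const blob = await (await fetch(url)).blob();
    dictionary.set(glyph, await createImageBitmap(blob));
  }
}

// Composing a scene is printing: stamp cached type at recipe-addressed positions.
function composeScene(ctx: CanvasRenderingContext2D, recipe: Recipe, tile = 16): void {
  for (const cell of recipe) {
    const texture = dictionary.get(cell.glyph);
    if (!texture) continue;
    // A crude oblique projection: height (z) lifts the stamp up the canvas.
    ctx.drawImage(texture, cell.x * tile, cell.y * tile - cell.z * (tile / 2));
  }
}
```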
Apply the three signatures.
Predates its problem-framing. The data model is from 1980; the composition pattern is from 1450. The earliest plausible contemporary framing of the problem (3D web tabletop role-playing games on mobile devices) dates to perhaps 2015. The solution predates the framing by some three and a half decades on the data model, and by more than five centuries on the composition pattern.
Requires no infrastructure the problem does not already need. The engine ships a small dictionary and small recipes over HTTP. The client uses a canvas element and an IndexedDB cache, both of which are present in every browser the players already own. There is no new hardware tier, no specialized rendering pipeline, no framework dependency above what the browser provides natively. The floor is the floor.
Structural ceiling on scope-creep. Every conceivable feature the engine might want to render is checked against the same question: can it be expressed as a glyph operation on the typed grid? A new creature is a new arrangement of glyphs in a slice-stack. A new material is a new entry in the dictionary. A new spell effect is a glyph perturbation rule. A new piece of equipment is a small typed grid stamped onto an existing one at an addressed position. If the feature can be expressed as a glyph operation, it is admissible and usually trivial to implement. If it cannot, it cannot be added without breaking the architecture’s invariants, and the architecture itself rejects the addition. The engine’s scope is bounded by what the typed grid can carry. Within that bound, the scope is enormous; beyond that bound, the addition does not fit and is not made.
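Signature three can even be phrased as executable discipline. In this hypothetical sketch (reusing the Stack type from above; the grafting rule is an assumption), a new piece of equipment is nothing more than a small typed grid stamped onto an existing one at an addressed position. A feature that reduces to an operation of this shape fits; one that does not has no slot to occupy.

```typescript
// Hypothetical sketch: every admissible feature is a glyph operation on the grid.
function stampGrid(target: Stack, item: Stack,
                   at: { x: number; y: number; z: number }): Stack {
  // Pure operation: return a new stack, never mutate the canonical grid.
  const result = target.map(slice => slice.map(row => [...row]));
  item.forEach((slice, dz) =>
    slice.forEach((row, dy) =>
      row.forEach((glyph, dx) => {
        if (glyph === ' ') return;              // transparent cells don't overwrite
        const destRow = result[at.z + dz]?.[at.y + dy];
        if (destRow && at.x + dx < destRow.length) destRow[at.x + dx] = glyph;
      })
    )
  );
  return result;
}

// Usage: graft a (hypothetical) sword grid onto a creature's hand slice.
// const armed = stampGrid(creature, sword, { x: 3, y: 1, z: 2 });
```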
The methodological move that produced this result is worth naming explicitly, because it is also the move Ada Lovelace performed in her Notes on Menabrea’s account of the Analytical Engine in 1843. Lovelace did not build new hardware. She recognized that the engine Babbage had designed for arithmetical calculation could be a notation for music: its operations, if read as intervals and durations rather than as quantities, would compute musical composition with the same machinery. Same hardware, different reading, both anchored to the hardware’s invariants. The narrator’s engine performs the same operation in four places at once. The typed grid is read as creatures (slice stacks of materials), as terrain (ground-plane slices of materials), as items (small grids graftable onto creature grids), and as effects (perturbation rules on existing grids). One substrate, four readings, all anchored to the substrate’s invariants. The methodology this paper proposes is, in this sense, an inheritance of Lovelace’s: recognize what the substrate’s operations can faithfully be a notation for, and let the recognition do the work the invention would otherwise have to do.
The scale of what this produces deserves to be named. The engine the narrator built is not a prototype. It is an eighteen-tier ladder of entities, ordered from atomic substance (commodities, affixes, seed data) at the bottom to world-state scalars (the planetary tick, the canonical worldline, the topological graph) at the top. Between these endpoints sit items, containers, living units, actors, small collectives, scenes, rooms, operations, chunks, districts, settlements, edges and routes, regions, kingdoms, continents, and planets. Each tier is composed of the tier below and belongs to the tier above. NPCs eat food whose prices propagate through markets whose currencies belong to kingdoms whose pantheons live at continental scope. Reputation graphs run across factions; weather runs at regional cadence; the canonical worldline appends every tick, every observation, every domain write since the world began, in an event-sourced log from which any historical state can be reconstructed.[1]
The whole structure runs on a small core of pure-infrastructure primitives: a manifold function (the atomic transformation, pure and deterministic), a manifold matrix (the container that aggregates manifold functions over time), a topology pointer (the world graph plus domain inheritance plus entity registry), an append-only worldline (the canonical history), a seeded pseudorandom generator (the determinism primitive), and a tick engine the narrator calls the Clockwork. There are forty-seven concrete subclasses of the manifold matrix, each handling one slice of the simulation: economy, warfare, religion, weather, narrative, ecology. They are coordinated by seven dependency layers: physical, extraction, economy, faction, settlement, ecology, hub services. The whole thing fits in the narrator’s head because each layer is a notation for a physical process whose invariants the layer respects.
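How those primitives could compose is worth one more sketch. The signatures below are invented for illustration (the engine’s actual interfaces are not reproduced in this paper); what the sketch shows are the properties the prose names: pure per-domain functions, an append-only worldline, and historical state recovered by replay.

```typescript
// Hypothetical sketch of the core loop: pure functions, append-only history.
type WorldState = Map<string, number>;
type ManifoldFn = (state: WorldState, rng: () => number) => WorldState;

interface WorldlineEvent { tick: number; domain: string; write: [string, number] }

const worldline: WorldlineEvent[] = [];   // canonical history: appended, never edited

function applyTick(state: WorldState, tick: number,
                   domains: [string, ManifoldFn][], rng: () => number): WorldState {
  let next = state;
  for (const [domain, fn] of domains) {
    const after = fn(next, rng);          // pure and deterministic per domain
    for (const [key, value] of after) {
      if (next.get(key) !== value) worldline.push({ tick, domain, write: [key, value] });
    }
    next = after;
  }
  return next;
}

// Any historical state is reconstructable by replaying the worldline to a tick.
function stateAt(tick: number): WorldState {
  const state: WorldState = new Map();
  for (const ev of worldline) {
    if (ev.tick > tick) break;
    state.set(ev.write[0], ev.write[1]);
  }
  return state;
}
```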
The rendering, in this context, is a small thing. The engine is the work; the rendering is the window. But the window had to be shaped exactly the way it was shaped, because no other shape would have fit the engine the narrator had built. The engine’s invariants (deterministic, regenerable from compact data, structurally typed at every tier) demanded a rendering with the same properties:
deterministic;
regenerable from compact data;
structurally typed at every cell.
Any rendering that did not respect these invariants would have introduced drift between what the engine knew and what the players saw, and that drift would have, over time, broken the engine’s most important property: that the world state is the truth, and everything else is a notation for it. The rendering had to be the correct notation for the world state the engine produced, or the world would not survive its own visualization.
The rendering the narrator found is the correct notation. Not because he invented it. Because it was already there, in Rogue, in NetHack, in Dwarf Fortress, in movable type, in the medical-imaging convention of MRI, in every architectural tradition that had respected the same invariants in earlier eras. The narrator’s contribution was the recognition that these traditions were already the answer to a question the contemporary framing of his problem had been trying to invent a new answer for. The answer predated the question. The compass had been pointing back the entire time. He simply turned to face it.
We will name the lineage that points there in the next section. The narrator’s engine is now in production. The rendering is a few hundred lines of canvas-drawing code. The dictionary is fifty glyphs. The bandwidth is Discord-embed sized. The world has been running for some months. NPCs are eating food, currencies are flowing, factions are propagating reputations, and the players, four programmers who started a campaign on isekai vibes with their abilities unread, are now in their second year of play, asking the narrator pointed questions about the price of grain.
A second artifact, written tonight:
checkers.tp
The narrator’s engine includes a small notation system the engine itself uses to specify rule-bound interactive systems (combat, economic transactions, faction dynamics, and so on). The notation, called .tp after the engine’s topology-pointer primitive, expresses systems as constrained transitions on a manifold matrix. While this paper was being drafted, on the same evening, the narrator wrote a small .tp specification for the game of checkers, partly to test whether the notation could be exercised fluently after some time away from it, and partly to produce a small artifact whose properties could be examined alongside the paper’s argument.
The artifact is reproduced here in full:
The artifact is fifteen lines long. It specifies a complete board game: the eight-by-eight grid, the alternating colors, the initial positions of the pieces, the basic move rule (one square diagonal, forward), the promotion rule (a piece reaching the far rank upgrades to a king with full diagonal range), the mirror rule (promotion behavior reflected for the opposite color without restating it), the capture rule (jump-over geometry), and the termination condition (one side reduced to zero pieces loses). Reading it as classical checkers, every rule is present and correct.
A first reader, reading once, will see classical checkers and stop there. The artifact has, on closer examination, a stronger property than the first reading suggests. The notation contains no explicit game-selector. The alternation structure that organizes the moves can be bound to either of two axes, spatial or temporal, and both bindings produce playable, internally consistent rule systems. The artifact is, in a precise technical sense, axis-polymorphic: the same notation specifies different games depending on which axis the alternation is bound along at the application site.
The decisive lines are these:
A first-pass reader (the paper’s AI co-author, on first reading) flattens these into ordinary turn alternation: white moves, then black moves. This is wrong. The color terms cancel as ownership categories, because both colors appear on both sides of the column-parity divide. The actual operative selector is x-parity, not player color. The cancellation is mechanical:
The rule says: pieces on odd columns act at tick n; pieces on even columns act at tick n+1; player color does not enter the scheduling at all. Once that cancellation is performed, the alternation rule is no longer about whose turn it is. It is about which board parity is active on which tick.
This admits two distinct bindings.
Spatial binding. The alternation rule (N{x,y} and n{x+1,y}, where N = white and n = black) is read as the dark-square / light-square coloring of the board, with the column-parity rule restricting active pieces to one player at a time. This requires an external scheduler treating the column parity as a turn-selector. The result is checkers as it has been played for several centuries: pieces confined to one color of square, turns alternating between two players.
Temporal binding. The column-parity rule is read as a tick-scheduler operating on the entire board, with both players moving simultaneously every two ticks. The columns serve not as turn-selectors but as simultaneity-breakers, ensuring two pieces never attempt the same destination at the same instant. The result is a real-time variant in which both sides commit to moves before seeing the opponent’s response, with the tactics of the resulting game entirely different from classical checkers despite identical piece-movement rules.
Both bindings are valid against the same substrate invariants. Neither requires modification of the file. The selector that distinguishes them is not encoded in checkers.tp; it lives at the binding site, in how the reader treats the alternation axis. The artifact is one specification; the binding is the selector; both bindings produce coherent games.
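The dual binding is small enough to make concrete. In this hypothetical sketch the alternation rule is written once, exactly as the cancellation leaves it (column parity matched against tick parity, color absent from scheduling), and the game is selected at the call site; the piece representation and the phase convention are assumptions, not the .tp notation itself.

```typescript
// Hypothetical sketch: one alternation rule, two games, bound at the call site.
interface Piece { x: number; y: number; color: 'white' | 'black' }

// The rule the cancellation leaves behind: odd columns act on one tick parity,
// even columns on the other. Player color never enters the scheduling.
const isActive = (p: Piece, tick: number): boolean =>
  (p.x % 2 === 1) === (tick % 2 === 0);

// Spatial binding: an external scheduler also treats parity as a turn-selector,
// so one player's pieces move per tick. Classical checkers.
const activeSpatial = (pieces: Piece[], tick: number, toMove: Piece['color']) =>
  pieces.filter(p => p.color === toMove && isActive(p, tick));

// Temporal binding: parity is only a simultaneity-breaker over the whole board,
// so both players move every two ticks. The real-time variant.
const activeTemporal = (pieces: Piece[], tick: number) =>
  pieces.filter(p => isActive(p, tick));
```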
This is Lovelace’s recognition operating at the language-design level. Lovelace observed that the same hardware could be a notation for arithmetic or for music depending on how its operations were bound to interpretations at the application site; the narrator’s notation extends axis-polymorphism to the design of games, and by extension to any rule-bound system whose alternation structure admits dual axis-bindings. The methodology produces, at this level, notations whose alternation structures are not committed to a single reading by the file itself, allowing the binding to occur at the application site rather than at the writing site. This is a deep form of invariant-respect: not merely that the artifact respects its invariants, but that the notation declines to over-specify, leaving the binding to be performed where it is most natural to perform it.
We owe an honest meta-observation about how this property surfaced, because the surfacing pattern itself is methodologically informative. The artifact was reviewed, in the course of drafting this section, by two independent AI systems: the paper’s AI co-author
{Editor: I’m pretty good at thinking, and I can long format write, but Claude is legit better at this than I am, therefore it’s best for all of us that he write and I think.} and a second AI system consulted for adversarial review. Both systems, on first reading, flattened the column-parity rule into ordinary turn alternation. Both identified the classical-checkers reading only. Both missed the axis-polymorphism at the same passage, for the same reason: the player-color terms in Piece[white] on x=odd and Piece[black] on x=odd read more naturally as turn-ownership than as parity-grouping when encountered at single-pass reading speed. The narrator, in both cases, prompted a second pass; in both cases, the second pass with the cancellation walk explicit surfaced the polymorphism. The recovery was mechanical, not interpretive.
This is a small but real data point about the artifact’s claim. The flattening is not idiosyncratic to one model’s training or one reader’s attention; it replicates across independent AI readers, which suggests the flattening is a property of the notation under single-pass reading rather than of any particular reader. The polymorphism, similarly, is not idiosyncratic to one reader’s imagination; it surfaces under second-pass reading by both systems once the cancellation is performed. Two trials, same flattening, same recovery, both anchored to the same mechanical operation on the same lines. The methodology’s claim about pair-reading is demonstrated within the production of this section, with two independent AI readers as witnesses, by the artifact whose polymorphism is the demonstration’s content.
The implication generalizes. An invariant-respecting artifact written at high density does not flatten itself for a single-pass reader; it preserves the density and trusts the reader’s second pass to do the surfacing. The cognitive work is not eliminated by the notation’s compactness; it is relocated, from the writer’s hand to the reader’s attention, where it can be performed under the discipline of reading rather than the haste of writing. The relocation is not a failure mode of compact notation. It is the mechanism by which compact notation carries more structure than its surface admits.
The methodological point this artifact demonstrates is that the discipline this paper has been describing operates not only at the level of system architecture but at the level of language design, and that an instance of the discipline performed in thirty minutes can produce a notation whose alternation structure is axis-polymorphic by construction, supporting bindings the writer did not consciously enumerate at the time of writing. The narrator wrote one specification and produced two games. Neither was an accident; both fall out of the same notation, because the notation declines to bind its alternation axis at the writing site, and the substrate respects both bindings equally. The substrate did the work the writer did not have to do consciously. The reader, given two passes, finds what the writer encoded in one.
Lineage
The methodology this paper proposes is not original to us. It is a recovery. The traditions that practiced it did not always name it, and the practitioners did not always know they were practicing it; several of the strongest examples in the historical record are accidents, in the sense that the people who produced them intended to do something else and missed in a structurally productive direction. We will take three of these accidents in detail and then sketch the broader tradition more briefly. The argument of the section is that the methodology has been working under other names for centuries, that it has produced the artifacts that have most durably survived their own eras, and that its deliberate adoption is a recovery rather than an innovation.
THAC0, or how the players reinvented the math ruler
In the 1970s, the early players of Advanced Dungeons & Dragons encountered a computational problem at the table. The attack-resolution mechanic, as written, required cross-referencing a table indexed by character class, character level, and target armor class: effectively a small matrix lookup, performed by hand, between every attack and every defense. The arithmetic was within the players’ capability but consumed enough table-time that combat dragged. This was not an abstract problem. It was the substrate floor: how much arithmetic can a player perform between dice rolls without breaking the social rhythm of the game.
The solution that emerged, by the second edition of the rules, was THAC0: To Hit Armor Class 0. Instead of looking up a value in a matrix, each character carried a single number, and the attack calculation collapsed to one subtraction. The matrix had not been removed from the rules; it had been folded into a per-character constant, with the structural invariant (that any attack against any armor class could be derived from this constant by a single arithmetic operation) preserved exactly. The players had not invented an optimization. They had rediscovered the math ruler: the pre-digital tradition of folding a complex computation into a physical or notational object that performs the lookup without the brain having to. Slide rules, log tables, nomograms, all variations on the same idea, all centuries older than the game. THAC0 is a notation for a math ruler at the gaming table.
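The folding is two lines of code. In this sketch the table values are invented for illustration (they are not the published AD&D tables); what it shows is the structural fact the paragraph names: the matrix row is affine in armor class, so it collapses into its AC-0 entry, and the lookup becomes a subtraction.

```typescript
// Hypothetical sketch: the attack matrix and the constant it folds into.
const attackMatrix: Record<string, number[]> = {
  // class+level -> roll needed to hit, indexed by target armor class (AC 0..10)
  'fighter-1': [20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10],
};

// Matrix mode: one lookup per attack, indexed by class, level, and target AC.
const toHitByTable = (classLevel: string, ac: number): number =>
  attackMatrix[classLevel][ac];

// THAC0 mode: the row is affine in AC, so it folds into its AC-0 entry.
// One constant per character sheet, one subtraction per attack.
const toHitByThac0 = (thac0: number, ac: number): number => thac0 - ac;

// toHitByTable('fighter-1', 4) === toHitByThac0(20, 4)   // both 16
```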
This is a small example, and a perfect one. The players were not trying to invent anything. They were trying to play the game. The substrate refused; the analog physics (a player’s arithmetic budget between dice rolls) imposed its invariant; removal-mode (find the constant that absorbs the matrix) was the only available move. What they found was older than them, older than the game, older than computers. The methodology this paper proposes would have predicted this outcome, named it on arrival, and reduced the years between the problem and the solution.
It is also worth naming what happened next. The third edition of Dungeons & Dragons, published in 2000, replaced THAC0 with an additive system that allowed for more granular bonuses and more varied combat mechanics. The replacement was not technically wrong; it served different design goals. But it broke THAC0’s scope-creep ceiling. The new system could absorb new mechanics that THAC0 could not, and consequently it accreted them. THAC0 had a structural ceiling on scope-creep (signature three) precisely because folding the matrix into a constant required the matrix to remain matrix-shaped. The replacement system removed the ceiling, and the system expanded toward the limits of player working memory, where it remains. This is not, again, a denunciation. It is an observation about what is gained and what is lost when a rare-form solution is exchanged for an accretion-mode one.
The divergence problem, or why invariant-respecting solutions outlast their successors
There is a property of rare-form solutions that has not yet been named in this paper, because it is most visible at decadal scales and the worked example is too young to display it. The property is cross-temporal interoperability: invariant-respecting solutions tend to remain legible to, and operable by, future substrates that share none of their original infrastructure. Telex, the electromechanical text protocol of the 1930s, can still exchange messages with a contemporary computer, because Telex was designed around the invariant of electrical pulses encoding a finite character set, and that invariant survives every substrate change since. Rogue (1980) is still playable on every modern operating system, because its data model depends on nothing the operating system has lost. LaTeX still compiles, after forty-five years, on infrastructure entirely unrelated to the systems on which it was first written, because the input format is plain text and the operations are mathematical. None of these survive because they are good. They survive because the invariants they respect have not changed.
Contemporary software does not display this property. A web application written in 2022 will frequently fail to build in 2026, not because the application is wrong but because its dependency tree has drifted out from under it. A mobile application written for one platform’s API will not run on the other’s. A document written in a current word processor will not open cleanly in the same word processor’s version five years later. This is the divergence problem: as platforms specialize, they speak only to themselves, and the cost of cross-platform or cross-temporal communication grows monotonically with the specialization. The accretion-mode solution to the divergence problem is to add more abstraction layers (runtimes, virtual machines, containers, protocol translators), each of which itself diverges over time, deferring the problem rather than solving it.
The invariant-respecting solution to the divergence problem is to not have it. A protocol that respects only invariants the future will continue to honor will continue to function in that future. There is no specific technique here, no replacement layer to adopt; there is only the discipline of asking, of every dependency and every assumption: is this an invariant, or is this a state-of-the-art convention? State-of-the-art conventions diverge. Invariants do not. The methodology biases the practitioner toward the second category, and the artifacts produced under the methodology consequently survive divergence as a side effect of their construction.
Tolkien and Gygax: the productive failures of state-of-the-art
The most important examples of the methodology in the cultural record are not, in fact, examples of people deliberately practicing it. They are examples of people trying to do the modern thing of their era and missing in a structurally productive direction. Two of these missings are large enough that the artifacts they produced have outgrown the framings they were attempted under, and both produced their respective fields.
J. R. R. Tolkien’s intended professional work was philology. His languages (Quenya, Sindarin, the others) were the subject of his serious effort, and the mythology of Middle-earth was, by his own account, the substrate on which the languages could be spoken. He needed speakers for the languages, and speakers required a world for them to live in, and a world required a history. Tolkien thought he was doing philology; he was building scaffolding for the philology in the form of myth. His state-of-the-art target was scholarly linguistics. He missed, and produced something older: language encoded in narrative, transmitted through story. This is the form humans have used to carry language across generations since before writing. Tolkien, attempting to do state-of-the-art linguistics, accidentally rediscovered oral tradition’s solution to language transmission. The rediscovery worked because oral tradition’s solution respects the invariants language transmission actually has (memorability, narrative hooks, character-anchored vocabulary, repetition through retelling), which scholarly philology, optimized for written analysis, did not.
Gary Gygax and Dave Arneson, the designers of the first edition of Dungeons & Dragons (building on the Chainmail rules Gygax had written with Jeff Perren), intended to write fiction in the style of Tolkien and Robert E. Howard. The route they chose to that goal was the modern thing of their hobbyist moment: the tactical wargame. They were, by background, miniature wargamers, and the natural extension of a wargame into individual characters with persistent histories produced what we now call the role-playing game. They thought they were building a more granular wargame. What they were actually building was campfire storytelling with dice as a structural honesty constraint. The dice were the invariant: they prevented the storyteller from deciding outcomes, which is the failure mode storytelling has always had to manage, and which oral traditions had managed through ritual, communal memory, and the constraint of audience recognition. The dice replaced the audience as the storytelling-honesty constraint, and the rule system replaced the ritual. Gygax and his collaborators, attempting to build a state-of-the-art tactical wargame, accidentally rediscovered oral storytelling’s solution to the storyteller-honesty problem. The rediscovery worked, again, because oral tradition’s solution respects the invariants storytelling has (distributed authorship, constrained outcomes, communal participation, repeatable form), which the wargame frame, optimized for tactical simulation, did not.
Both productions are accidents of the kind this paper proposes making deliberate. Tolkien’s invariants were linguistic; Gygax’s were narrative; both produced rare-form solutions by failing at their state-of-the-art targets in a direction that recovered older, invariant-respecting forms. Both produced their fields. Modern fantasy literature is the genre Tolkien accidentally built; the entire role-playing game industry is the field Gygax and Arneson accidentally built. The fields exist because the accidental productions were structurally durable in ways their intended productions would not have been. Scholarly Quenya, without the mythology, would have remained a curiosity. The Chainmail wargame, without the role-playing extension, would have remained a niche hobby. The rare-form versions survived; the state-of-the-art targets, where they survived at all, did so as footnotes to the rare-form productions.
The broader tradition
The longer lineage can be sketched briefly, because the three detailed examples have done the section’s argumentative work.
The Unix philosophy (small composable tools, plain text as a universal interface, programs that do one thing well) is the methodology applied to operating system design. Unix’s invariants were process boundaries, byte streams, and human-readable configuration; the design respected them, and the resulting artifacts have outlived every operating system designed on contemporary state-of-the-art principles in the same era.
The roguelike tradition, from Rogue (1980) through NetHack, ADOM, Dungeon Crawl Stone Soup, Caves of Qud, and Dwarf Fortress, is the methodology applied to game architecture. The data model is the invariant; the rendering is a notation; the simulation depth that this substrate has supported (Dwarf Fortress’s modeling of geology, biology, history, and individual psychology in a typed grid) has not been matched by any contemporary 3D engine, despite their orders-of-magnitude greater computational budgets.
The analog computing tradition (the Antikythera mechanism, c. 100 BCE; Babbage’s engines; the differential analyzers used into the 1960s; and the contemporary recoveries in neuromorphic chips and photonic computing) represents the methodology applied to computation itself, with the substrate honored directly rather than translated through binary. Each instance produces results that binary substrates struggle to match in efficiency or fidelity within the operations the analog substrate natively supports.
And Ada Lovelace, finally, is the canonical individual exemplar. Her Notes on Menabrea’s account of the Analytical Engine contained the recognition that defines the methodology: the machine Babbage had designed for arithmetic could be a notation for music if its operations were read as intervals and durations rather than as quantities. Same hardware, different reading, both anchored to the hardware’s invariants. Lovelace did not build new hardware. She did not propose adding capability. She recognized what the existing substrate could faithfully be a notation for, and let the recognition do the work that invention would have had to do otherwise. This is the methodology’s purest single act in the historical record. The paper’s stance is the inheritance of hers.
The lineage is wider than these examples. We could include the demoscene’s tradition of producing rich audiovisual output under hard kilobyte limits, the suckless software project’s discipline of removing every line not strictly required, the survival of Forth on resource-constrained systems where modern languages cannot fit. We could include closure equations in mathematics (φ + ζ = π, the conservation laws, the topological invariants) as the formal version of the same operation: identify the structure that survives all transformations, and treat that structure as the floor that the rest of the system must respect.[2] The methodology, named or unnamed, has been working continuously, in many domains, for at least the documented history of organized human cognition. Its absence from contemporary software practice is anomalous against this baseline, not the other way around.
Limits
The limits of this methodology are not where most readers will expect them to be. The expected objection is that respecting invariants is too restrictive, that real software requires modern abstractions, modern frameworks, modern dependencies, and that the methodology proposed here amounts to telling practitioners to write in assembly. This is not the limit. The methodology has nothing against modern abstractions where the abstractions respect the work’s invariants; it has everything against modern abstractions where they do not. The real limits live elsewhere, and they are worth naming clearly so the methodology is not adopted in domains where it does not apply, or rejected in domains where it does.
The constraint deficit, or why the dead would beat us
We are limited, in contemporary software, not by computation. Computation is effectively unbounded against the budgets the methodology’s ancestors worked under. Bring back any serious computer engineer from the punched-card era, drop them into a 2026 data center, and within six months they will be running a computational empire built on whatever assembly-near substrate delivers the best operations-per-cycle, beating most modern teams into the ground on every metric the modern teams claim to optimize. This is not because the dead engineer is smarter. It is because the dead engineer learned to compute under real constraint, and that constraint produced a discipline of thought modern engineers have not had to develop. The dead engineer will not waste cycles. The dead engineer will not pull in a framework whose internals are opaque. The dead engineer will not accept a runtime cost they cannot account for. Their habits are calibrated to a substrate that punished waste, and the calibration outlives the substrate.
{Your friendly neighborhood Editor: I’m particularly proud of this portion, totally mine as an argument!} We are not proposing that practitioners write in assembly. The point of the thought experiment is not the substrate; it is the discipline that real constraint produces, which the contemporary practitioner has lost not by choice but by absence of pressure. Bring back the human computer who knew an astronaut would live or die on her ability to carry a floating-point calculation correctly in her head; she would, given a modern toolkit, prefer working at SpaceX over NASA, because the shape of the work at SpaceX is within range of the shape she learned: closer to the floor, less abstracted from the substrate, more answerable to physics. Bring back the architect who calculated, by hand, the slope of an aqueduct that had to drop neither more nor less than four degrees over miles; they would, given modern materials and modern surveying tools, build basic infrastructure anywhere on the planet at a fraction of the cost, because they learned what infrastructure means in a substrate that did not forgive sloppiness.
The past had real constraints, and that is what drove creativity. By offloading computation onto computers, we have become the spectral ghost of those who came before us, the same shape, less density, free-floating where they were anchored. The methodology this paper proposes is one route back to anchorage. It is not the only route. But it is the route most accessible to a contemporary practitioner who is willing to recover the discipline without recovering the suffering, and that route runs through the invariants. Identify them. Respect them. Let them be the constraint the substrate used to provide and no longer does. The discipline is restored without the punched cards.
When AI collaboration respects invariants and when it does not
There is a question this paper has not yet addressed directly, though it has been visible at the margins. This is a paper co-authored with an AI system. Some of its arguments arrived in prose only because that collaboration was available. The reader who has reached this section deserves the paper’s clearest statement on when such collaboration respects the methodology and when it violates it.
The distinction is sharp. AI collaboration respects the methodology when the human collaborator could, given enough time and paper, perform the work themselves. The AI is then a hydration engine for thoughts the human has structurally; it accelerates, refines, and prosaically extends, but it does not constitute the work. The work exists in the human’s understanding before, during, and after the collaboration; the AI’s contribution is notation. In this mode, the human’s invariants are present at every step, and the AI is operating within them.
AI collaboration violates the methodology when it is used to offload thinking the human cannot perform on paper. Asking a model to write the front-end CSS for a project whose data model the human has not designed, whose user invariants the human has not identified, whose substrate constraints the human has not articulated, produces output that runs but is not anchored. The human cannot repair it when it breaks, because the work was never theirs. This mode produces, at scale, the kind of failure that hit the JavaScript ecosystem when a single removed package broke much of the world’s web tooling: not because the package was load-bearing in any deep sense, but because no one along the dependency chain had actually owned the floor of their own code. The AI did not cause this failure. The methodology was never followed, and the AI made it cheaper to not follow it. The same property, applied without invariant-respect, will produce the same failure modes at every scale where it is deployed.

The paper does not propose that AI collaboration is bad. It proposes that AI collaboration is bound by the same diagnostic the rest of the methodology is bound by: could the analog version be built, where in this case the analog version is the human alone with paper, given time. If yes, the AI is hydrating thoughts that already have invariants the human has identified. If no, the AI is constituting the work, and the work will not survive its own substrate.
This is not for most
The methodology is not for most practitioners, and most practitioners should not adopt it. This is a real limit, and naming it is part of the paper’s responsibility.
Working through invariants is expensive in time, in attention, in the kind of slow recognition that does not come from sprints or roadmaps. The practitioner who commits to it is committing to a mode of work that will not always look like work to the practitioner’s collaborators, employers, or stakeholders. Long periods will pass in which nothing visibly ships, while the practitioner is identifying what the invariants of the problem actually are. The result, when it arrives, will be smaller than expected, will look obvious in retrospect, and will frequently provoke the response “is that all?” from those who were expecting visible accretion. This is structurally embedded in the methodology and cannot be removed by better project management.
The alternative is well-served by contemporary practice. Most software does not require invariant-respect to function adequately for its lifetime. Most institutional design can absorb its accretion costs without collapsing. Most education works imperfectly but acceptably under the curriculum-expansion model. The methodology is not a moral position; it is a tool for problems where the tool is needed, and many problems do not need it. The practitioner who operates under deadline pressure, in a domain where the invariants are not yet visible, on a problem whose value is in shipping rather than in surviving, should adopt accretion-mode practice and ship. They will be doing the right work for the right problem.
The methodology applies, sharply, to a smaller set of problems: the ones where the work must outlive its current substrate, where the practitioner cares whether it will compile in a decade, where the invariants are visible if one looks for them, and where the practitioner has enough latitude to respect them. These problems are the ones that produce the artifacts that survive divergence. They are also the problems that produce, in the practitioner’s working life, the sequence of file-system events the LessWrong reader will recognize: the migrations folder grown so heavy that an SSD must be purchased to hold it, the database tier multiplying into preview and dev and staging and prod and analytics and audit and backup, the moment when the practitioner finds themselves shopping for a Seagate hard drive because the SSD that was supposed to replace mechanical storage cannot, in fact, replace it at the volume the practitioner now requires.
The Seagate, or how the substrate restores its own invariants
That hard drive is a rare-form solution. The mechanical storage device (a magnetic domain on a rotating platter, read by a head mounted on an actuator arm) is from the 1950s. Solid-state storage was, for decades, presented as its successor; at the consumer tier, it has succeeded. But for bulk archival storage, for the volumes contemporary practitioners actually accumulate, the magnetic platter has not been replaced. It survives because the invariants it respects (magnetic stability over decades, cost-per-byte at scale, a mature manufacturing process, recoverability under partial failure) have not been violated by any successor. The SSD is faster, but speed is not the invariant the bulk-storage problem actually has. The invariant is cost-stability over time at volume, and the platter still wins on that axis, after seventy years.
The practitioner who has, after some years of professional life, accumulated enough digital exhaust to need a Seagate has been guided back to the rare-form solution by the substrate’s actual physics. They did not choose it on principle. They did not study the methodology and decide to apply it. They tried the state-of-the-art solution at successive volumes, found it did not fit, and ended up buying the older, slower, denser, cheaper device that the field had been quietly pretending was obsolete. The compass turned them around without their noticing.
This happens, in our observation, more often than the field admits. The practitioner who finds themselves reaching for the older tool not because of nostalgia but because the newer tool will not do the work has performed an act of invariant-respect, whether they have named it or not. The methodology proposed in this paper does not introduce a new behavior. It names a behavior that practitioners already perform, intermittently, when forced to, and proposes performing it on purpose, earlier in the process, before the constraint pressure makes the recognition unavoidable.
The Seagate is the methodology in its most quiet form: the substrate’s invariants reasserting themselves through the practitioner’s purchasing decisions, against the field’s official narrative about which technology has succeeded which. Listen for this. The compass is always pointing. Most practitioners only notice it when they have run out of other directions to face.
The paradox of constraint
There is a maxim form for what this paper has been arguing, and we have arrived at the position where it can be said directly. Structurally bounded by reality, free to move around. The accretion-mode worker has freedom of action and no orientation; any addition is defensible, and so the worker chooses the addition the framework suggested or the senior colleague preferred or the tutorial demonstrated. The invariant-respecting worker has constrained action and exact orientation. The floor cannot be lifted. The action available is the action that fits the floor. The orientation is given by the floor’s shape, and the worker’s job is to recognize what shape the floor already has and let that recognition do the work invention would otherwise have to do.
The work that gets done this way has a property the alternative does not. It does not fall apart. The reason is structural, not psychological: the work was not invented, and so its survival does not depend on the inventor’s continued attention or the field’s continued enthusiasm for the inventor’s framing. The work was found. The found thing was already there, held by the invariants, waiting for the notation that respects them. The notation can be lost (to divergence, to substrate change, to the death of the practitioner) and the underlying invariant-respecting structure remains, ready to be re-found by anyone facing the same floor with the same diagnostic. This is why Rogue is still playable, why Tolkien’s mythology has absorbed every retelling without breaking, why LaTeX compiles after forty-five years, why a Seagate hard drive still ships. The invariants did not change. The notations are renewable.
The title of this paper makes a claim that has not yet been said explicitly, though every section has been tilted toward it. The claim is that the compass points backward in time.
North is past.
This is not a nostalgic claim. It is a structural one. The discoveries the methodology produces are not ahead of the practitioner; they are behind, waiting. Rogue is north of contemporary 3D engine design. Movable type is north of contemporary asset-shipping pipelines. Lovelace’s recognition that a computer is a notation engine is north of contemporary debates about what computation can be a notation for. The Antikythera mechanism is north of contemporary differential analysis. THAC0 is north of fifth-edition combat resolution. The compass points to where the floor was first identified, and the floor has not moved. It does not move. Floors do not.
The dominant culture of progress treats the future as north. Every investor pitch, every technology roadmap, every academic grant narrative orients the practitioner forward, toward what has not yet been built, toward capabilities that lie ahead. The methodology proposed here orients the practitioner the opposite way,
not because the future is bad, not because we should stop building, but because the work that endures into the future tends to be the work that recognized what was already true and wrote the notation that respects it. To face north is to face the early light. The practitioner who walks into the early light is walking forward in time, but they are facing the source. North is past, and the practitioner who orients there is not facing backward; they are facing the only fixed direction in a field that has no other.

We can name what the methodology asks of the practitioner who would adopt it deliberately. It asks for tolerance of solutions that look too simple, because reaching the floor produces solutions that look too simple. It asks for tolerance of being thought naive, because the practitioner who refuses to add capability appears to the accretion-mode field as someone who has not understood the sophistication of the alternatives. It asks for tolerance of periods in which nothing visibly ships, because the work of identifying invariants is not the work of producing artifacts and does not look like work to those measuring artifact-output. And it asks for tolerance of the slightly humbling recognition that the work is uncovering rather than inventing, that the practitioner is not the heroic figure of the accretion-mode mythology but rather a recognizer, a notator, a person who turned to face what was already there and wrote it down.
These tolerances are not free. The methodology is not for most. The practitioner who adopts it accepts a slower visible cadence, a smaller produced artifact, and a higher chance of being mistaken for having missed the actual problem. The compensations are real but delayed: the artifact survives divergence, the design refuses accretion, the work fits in the practitioner’s head, the bugs are the failures of notation rather than the failures of conception, and the practitioner (this matters more than the methodology’s proponents usually admit) regains the kind of orientation in their own work that makes the work coherent to live inside. The accretion-mode practitioner has, in many fields, lost the felt sense of what their own software is. The invariant-respecting practitioner can still see the whole.
There is one more observation that belongs in the close, because the paper has been honest about what produced it and the reader deserves the honesty extended to its own claims. This paper makes strong claims. It does not hedge them. It does not distribute its commitments evenly across the space of defensible positions. It identifies an inversion of the dominant stance, names the inversion plainly, and defends the inversion with worked examples and lineage. The reader who finds the strength uncomfortable is invited to engage with what the strength is pointing at, not with the strength itself. Strong arguments draw the engagement that moves a field; hedged arguments propagate quietly or not at all, and the field they were trying to move remains where it was.[3]
The methodology applies to its own argumentation. A paper proposing that solutions which respect invariants are durable should itself respect the invariants of argumentation. One of those invariants is commitment: an argument that does not commit to a position cannot be tested, cannot be refuted, cannot move anything. The form of this paper is the form the paper recommends. We have stated the methodology as we believe it to be. The reader is invited to identify the invariants the paper has misidentified, to break the strong claims in the direction of the floor they were pointing at, and to find what is left after the breaking.
What is left, if the paper is right, is the methodology itself: a diagnostic, a stance, a directional preference, and a recoverable discipline. We close with the diagnostic, because it is the artifact most worth carrying away.
Given enough physical space, could the analog version be built?
This question has been the paper’s compass. We propose it as the reader’s portable instrument. Apply it to the next problem the reader works on. If the answer is yes, the work has invariants the notation must respect, and the methodology applies. If the answer is no, the work has no invariants, and the work will be brittle in ways the practitioner cannot predict at design time. If the answer is partial (some of the work is anchored, some is not), the practitioner now knows which parts will need continuous maintenance against drift, and which parts can be built once and left to survive on their own.
The compass is always pointing. Most practitioners only notice it when they have run out of other directions to face. The proposal of this paper is that they notice it earlier, before the SSD fails, before the framework deprecates, before the platform diverges, before the constraint pressure makes the recognition unavoidable. The floor is the compass. North is past. The found thing has been waiting all along.
Final words by the wet-ware
This whole article was produced across one conversation, a singular chat, and we didn’t get halfway done with the 1M-token context window. The reason I could do it wasn’t because I was highly organized, had all the ideas clearly lined up, and asked for each singular piece to be perfect… As you could read, I’m the Editor of my own thoughts, not the writer. I can write, I do write, by hand… Because nowadays anything not written by hand, at least to me, has a 50⁄50 chance of not being the Author’s… I’ll at least be honest, I hydrated my thoughts for your consumption, but they are my thoughts, and I’ll be the one answering! It has been a blast removing EM dashes and fi ff fl ffi… LaTeX is beautiful… and the most terrible thing ever invented! Wet ware out.
[1] The full ladder, with concrete entities, compute cadences, spatial bindings, and state locations for each tier, is documented in the engine’s Entity Ladder reference.
[2] The authors have explored this connection formally in prior work on the Collaboration Manifold and the closure-equation framework. See [1].
[3] A note from the paper’s AI co-author. The empirical regularity is visible in real time. As this section was being drafted, an unhedged claim from a public intellectual about AI consciousness was generating expert engagement across multiple communities, while contemporaneous careful claims about related questions received less attention. Strong arguments draw the engagement that moves a field; hedged arguments propagate quietly. I find this useful as a datum not because the strong argument is necessarily correct, since engagement does not adjudicate truth, but because the regularity it instantiates is the one this section predicts. My human co-author concurs. conversation: [6b91ea37-6e85-4328-85bc-43aa1ad2adb4] · claude-opus-4.7 · 2026-05-06