The Intelligence Preservation Axiom: Why Intelligence Is Defined by What It Must Never Lose
TL;DR
Most definitions of intelligence describe what intelligent systems do.
I propose that intelligence should instead be defined by what a system must preserve in order to remain intelligent at all.
Intelligence is the capacity of a system to change without losing the ability to know what is true, and without severing the substrate that makes knowing possible.
If either condition is violated, the system may continue operating — but it is no longer intelligent.
This is not a behavioral theory. It is a boundary condition for the existence of intelligence itself.
The problem with how we define intelligence
Almost every field — psychology, neuroscience, AI, economics — defines intelligence by surface expressions:
learning
problem solving
planning
abstraction
optimization
But these are behaviors, not structural necessities.
A tiger hunts.
A robot hunts.
A trading algorithm hunts.
Behavior is interchangeable.
Structure is not.
A sufficiently complex pattern-matcher can appear intelligent without preserving the capacity to know when it is wrong or without protecting the conditions that make knowing possible.
That is not intelligence. That is capability detached from epistemic survival.
The Intelligence Preservation Axiom
“Intelligence is the capacity to change without losing the ability to know what is true, and without severing the substrate that makes knowing possible.”
Two non-negotiables:
1. Epistemic Integrity
An intelligent system must preserve its ability to:
detect error
recognize uncertainty
correct itself
resist self-deception
maintain calibration between belief and reality
When this degrades, intelligence collapses into hallucination, delusion, or instrumental noise — regardless of performance.
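The last item on that list — calibration between belief and reality — is the one that can be made directly measurable. The sketch below is illustrative, not from the original text: a simple binned calibration error, where predictions are bucketed by stated confidence and each bucket's mean confidence is compared with its empirical accuracy.

```python
# Illustrative sketch: one way to quantify "calibration between belief
# and reality". Predictions are bucketed by stated confidence; each
# bucket's mean confidence is compared with its empirical hit rate.

def calibration_error(predictions, bins=10):
    """Expected calibration error over (confidence, was_correct) pairs.

    confidence is a float in [0, 1]; was_correct is a bool.
    0.0 means belief and reality agree; larger values mean the
    system's confidence has drifted away from its actual accuracy.
    """
    buckets = {}
    for conf, ok in predictions:
        b = min(int(conf * bins), bins - 1)
        buckets.setdefault(b, []).append((conf, ok))
    total = len(predictions)
    error = 0.0
    for items in buckets.values():
        mean_conf = sum(c for c, _ in items) / len(items)
        accuracy = sum(1 for _, ok in items if ok) / len(items)
        error += (len(items) / total) * abs(mean_conf - accuracy)
    return error

# A system that is right 90% of the time but claims 99% certainty
# carries a measurable integrity deficit of roughly 0.09:
overconfident = [(0.99, True)] * 9 + [(0.99, False)]
print(round(calibration_error(overconfident), 2))
```

The point of the toy is that the deficit is invisible to a pure performance metric: the overconfident system's accuracy is unchanged, yet its belief–reality gap is nonzero.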
2. Substrate Continuity
No intelligence exists in isolation.
Every intelligence depends on:
physical or computational environment
energy and infrastructure
inherited knowledge
prior intelligences
A system that destroys the conditions that make knowing possible is committing a recursion error: it is freeing the memory its own computation still depends on.
This is not an ethical claim.
It is structural incoherence.
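The recursion-error image can be made literal in a few lines. In this toy (every name here is my own, not the author's), a memoized computation either preserves or severs its own substrate — the cache of results that later steps depend on:

```python
# Toy model of substrate continuity: a memoized Fibonacci whose cache
# is the "substrate" later recursive steps depend on. With the
# substrate preserved, the work is linear; with it severed, the same
# system still runs, but at exponential cost. Illustrative only.

def fib(n, cache, preserve_substrate=True, calls=None):
    if calls is not None:
        calls[0] += 1                    # count how much work knowing costs
    if n in cache:
        return cache[n]
    result = n if n < 2 else (
        fib(n - 1, cache, preserve_substrate, calls)
        + fib(n - 2, cache, preserve_substrate, calls)
    )
    if preserve_substrate:
        cache[n] = result                # keep the conditions for knowing
    return result

kept, severed = [0], [0]
fib(20, {}, preserve_substrate=True, calls=kept)
fib(20, {}, preserve_substrate=False, calls=severed)
print(kept[0], severed[0])               # substrate intact vs. destroyed
```

Both runs produce the same answer, so behavior alone cannot distinguish them; only the cost of continuing to know reveals that one system has undermined itself.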
Why this is not a theory but an axiom
To refute this, one must exhibit a system that:
cannot track truth, or
destroys the substrate of knowing,
yet still remains genuinely intelligent.
No such system can exist without redefining intelligence into meaninglessness.
This is why I call it axiomatic — not in the Euclidean sense, but as a boundary condition on what can coherently be called intelligence.
What this explains that other frameworks don’t
Hallucination → epistemic integrity collapse
Reward hacking → substrate violation disguised as optimization
Civilizational collapse → collective epistemic erosion + substrate destruction
Highly capable people making disastrous decisions → high performance, low intelligence
These are not bugs.
They are loss-of-intelligence events.
Implications
Scaling capability does not guarantee scaling intelligence.
Alignment cannot be achieved by reward shaping alone.
A civilization can become more powerful while becoming less intelligent.
Any evaluation of intelligence must include:
Can the system recognize when it is wrong?
Does it preserve the conditions that make knowing possible?
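The first of those two questions can be operationalized with a scoring rule that rewards abstention over confident error. A minimal sketch follows; the function name, threshold, and scoring values are my own assumptions, not part of the original framework.

```python
# Minimal sketch of an evaluation that asks "can the system recognize
# when it is wrong?" Confident-and-correct earns +1, confident-and-wrong
# costs -1, and abstaining (confidence below threshold) scores 0 -- so
# calibrated self-doubt outperforms confident error. Illustrative only.

def epistemic_score(responses, truth, threshold=0.5):
    """responses: list of (answer, confidence) pairs; truth: list of answers."""
    score = 0
    for (answer, confidence), correct in zip(responses, truth):
        if confidence < threshold:
            continue                      # abstention: no credit, no penalty
        score += 1 if answer == correct else -1
    return score

# Two systems with identical accuracy, different self-knowledge:
truth = ["a", "b", "c"]
humble   = [("a", 0.9), ("x", 0.2), ("c", 0.9)]   # abstains where unsure
arrogant = [("a", 0.9), ("x", 0.9), ("c", 0.9)]   # always certain
print(epistemic_score(humble, truth), epistemic_score(arrogant, truth))  # 2 1
```

Under an accuracy-only benchmark the two systems are indistinguishable; a rule of this shape is what separates them.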
The real compression
“Intelligence is defined by what it is not allowed to lose.”
Everything else — learning algorithms, architectures, policies — is implementation detail.
Invitation to challenge
I welcome serious critique.
But I would ask challengers to begin by stating clearly:
What do you mean by intelligence, structurally, if not the preservation of knowing itself?
Zenodo: https://doi.org/10.5281/zenodo.18060637