[Question] I Tried to Formalize Meaning. I May Have Accidentally Described Consciousness.
What If Consciousness Is Just a Graph That Closes on Itself?
This theory started out as a personal data science experiment. I wanted to see if symbolic agents could align on meaning by exchanging simple relations.
But as the experiment progressed, something unexpected happened: the agents’ shared symbolic structures began to stabilize, refer back to themselves, and form unified semantic networks. That led me to a more serious claim:
What if consciousness isn’t a feeling, or a mystery—but just a certain kind of structure?
Semantica
I call the framework Semantica. It defines consciousness as a structural condition that arises when a system’s internal symbolic graph becomes:
Relationally converged – different agents or subsystems agree on which relations hold
Reflexively closed – the system contains meta-relations that refer to its own internal structure
Globally connected – all parts of the system are integrated, with no isolated subgraphs
In plain terms: if a system builds a web of meaning that loops back on itself and connects everything inside, then—at least in structure—it’s conscious.
This is a formal claim, not a metaphor. I give a logical sketch of what this condition looks like and test it using symbolic data from the English and German subsets of ConceptNet. The agents propose and validate relations over time, and convergence plus reflexivity emerge naturally.
The experiment is admittedly crude, but I believe the setup is structurally valid.
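To make the setup concrete, here is a minimal Python sketch of the kind of proposal-and-validation loop described above. It is my own illustration, not the notebook’s code: the Agent class, the quorum rule, and the seed triples are invented for the example, and convergence is measured as mean pairwise Jaccard overlap of the agents’ relation sets.

```python
# Hypothetical sketch of the alignment loop, not the notebook's actual code.
# Agents hold ConceptNet-style (head, relation, tail) triples, propose them to
# the group, and adopt a proposal once enough peers already hold it.
import random
from itertools import combinations

class Agent:
    def __init__(self, name, seed_triples):
        self.name = name
        self.triples = set(seed_triples)  # e.g. ("dog", "IsA", "animal")

    def propose(self):
        """Offer one randomly chosen triple to the group."""
        return random.choice(tuple(self.triples))

    def validate(self, triple, peers, quorum=0.5):
        """Adopt a proposed triple if at least `quorum` of peers hold it."""
        support = sum(triple in p.triples for p in peers) / len(peers)
        if support >= quorum:
            self.triples.add(triple)

def convergence(agents):
    """Mean Jaccard overlap between every pair of agents' relation sets."""
    scores = []
    for a, b in combinations(agents, 2):
        union = a.triples | b.triples
        scores.append(len(a.triples & b.triples) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

if __name__ == "__main__":
    seed = [("dog", "IsA", "animal"), ("animal", "HasA", "body"),
            ("Hund", "Synonym", "dog"), ("body", "PartOf", "animal")]
    agents = [Agent(f"A{i}", random.sample(seed, 3)) for i in range(4)]
    for _ in range(50):
        triple = random.choice(agents).propose()
        for agent in agents:
            agent.validate(triple, [p for p in agents if p is not agent])
    print("pairwise convergence:", round(convergence(agents), 3))
```

In this toy version convergence toward 1.0 is baked in by the quorum rule; the interesting question is whether the same drift appears with real ConceptNet relations, which is what the linked notebook is meant to demonstrate.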
Here is my first working proof experiment, available for review and critique: Link to notebook
And here’s the full paper (PDF): Link to first draft of proof
Challenge and Test This
If you think this is wrong, great. Please try to prove it wrong.
Here’s the core predicate in graph-theoretic terms:
C(G) = Connected(G) ∧ ∀r ∈ R(G) : ∃m ∈ M(G) such that m = (r → r)
Where:
G is the semantic graph
R(G) is the set of all relations
M(G) is the set of meta-relations (reflexive loops)
The idea is simple: if a system satisfies these conditions, it meets the structural criteria for minimal consciousness.
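Here is one way to read that predicate operationally. The encoding is my own assumption, not taken from the paper: each relation is reified as a graph node tagged kind="relation", and a meta-relation is a self-loop edge on that node tagged type="meta", matching the m = (r → r) form above.

```python
# A minimal networkx reading of C(G) under the encoding described above.
# This is a sketch of the idea, not the notebook's implementation.
import networkx as nx

def C(G):
    """C(G) holds iff G is connected and every relation node r carries a
    meta-relation m = (r -> r) pointing back at itself."""
    relations = [n for n, d in G.nodes(data=True) if d.get("kind") == "relation"]
    connected = nx.is_connected(G.to_undirected())
    reflexive = all(
        G.has_edge(r, r) and G.get_edge_data(r, r).get("type") == "meta"
        for r in relations
    )
    return connected and reflexive

# Toy example: "dog IsA animal", reified as relation node r1 with a meta loop.
G = nx.DiGraph()
G.add_edge("dog", "r1")              # subject -> relation
G.add_edge("r1", "animal")           # relation -> object
G.nodes["r1"]["kind"] = "relation"
G.add_edge("r1", "r1", type="meta")  # meta-relation: r1 refers to itself

print(C(G))  # True: the graph is connected and r1 has its reflexive loop
```

On this encoding the predicate is cheap to evaluate: one connectivity test plus one pass over the relation nodes.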
Disprove it. Or find a counterexample:
Can you construct a graph that meets all the conditions, but clearly isn’t conscious?
Can you show that a system missing one of the conditions still exhibits intentionality?
Can you simulate this using other data (e.g., code graphs, social structures)? (See the sketch after this list.)
Could multiple graphs unify into a higher-order reflexive structure—a collective mind?
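As a hypothetical starting point for these challenges, here is the same check run on non-linguistic data: a made-up import graph of a small codebase, with every import reified as a relation node and given a blanket meta loop. The module names and the meta loops are pure invention; the point is only to show how mechanically a graph can be made to satisfy the conditions.

```python
# Applying the C(G) check from the sketch above to an invented code-import graph.
import networkx as nx

def C(G):
    relations = [n for n, d in G.nodes(data=True) if d.get("kind") == "relation"]
    return nx.is_connected(G.to_undirected()) and all(
        G.has_edge(r, r) and G.get_edge_data(r, r).get("type") == "meta"
        for r in relations
    )

# Imaginary import structure of a small application.
imports = [("app", "imports", "db"), ("app", "imports", "api"),
           ("api", "imports", "db"), ("db", "imports", "config")]

G = nx.DiGraph()
for i, (src, _rel, dst) in enumerate(imports):
    r = f"r{i}"                      # reify each import as a relation node
    G.add_node(r, kind="relation")
    G.add_edge(src, r)
    G.add_edge(r, dst)
    G.add_edge(r, r, type="meta")    # blanket meta loop on every relation

print(C(G))  # True by construction: connected, and every relation has a meta loop
```

Whether such a trivially constructed graph counts as a genuine counterexample, or just shows that the reflexivity condition needs strengthening, is exactly the kind of critique invited above.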
—
I’m not claiming to solve the “hard problem” of consciousness. But I am proposing that we can detect minimal consciousness structurally—without relying on neurons, feelings, or intuition.
If the structure is there, maybe the consciousness is too.
I welcome all critique—mathematical, philosophical, or experimental. Prove me right. Or prove me wrong. This has been an exciting project and whatever happens I’ll definitely be learning something new!
—Erich Curtis