Consciousness as the Fractal Decider — Toward a Cognitive Model of Recursive Choice and Self

I’ve been working through different theories of consciousness with ChatGPT as a sounding board—using it less as an oracle and more as a sparring partner. By testing my ideas against the familiar heavyweights (Dennett, Chalmers, Tononi, Searle, and others), I’ve refined them into something that feels worth sharing for feedback.

What follows is a working sketch I call the Fractal Decider Model. It’s not a finished paper, but it does attempt to tackle binding, recursion, qualia, and self-identity in a way that could be testable in cognitive science or AI architecture. I offer it here as a framework to be strengthened, dismantled, or built upon—not as gospel.


1. Consciousness Isn’t the Parts, It’s the Builder

  • Perception: raw, incomplete data (blurry light, indistinct sounds, vague touch)

  • Interpretation: stitching these into coherent representations

  • Categorization: labeling (“that’s a cat”)

  • Experience: associations (“cats hiss when threatened”)

  • Qualia: affective tags (“this feels dangerous,” “I feel tense”)

  • Judgment: choice (“I’m running”)

  • Novelty/Imagination: recombining known parts into new scenarios (“what if cats were friendly?”)

Each of these elements is necessary—but none is sufficient alone. They are building materials, not the house.
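
As a toy illustration, the stages above can be strung together as a processing pipeline. This is a minimal Python sketch, not an implementation of the model: every function body, the cat/hiss example, and the danger→run rule are illustrative assumptions.

```python
# Toy pipeline: each stage transforms the output of the previous one.
# Stage names mirror the list above; their contents are placeholders.

def perceive(raw):
    # Perception: raw, incomplete data
    return {"shape": raw.get("shape"), "sound": raw.get("sound")}

def interpret(percept):
    # Interpretation: stitch fragments into a coherent representation
    return {"object": f"{percept['shape']} thing that goes '{percept['sound']}'"}

def categorize(rep):
    # Categorization: attach a label
    return {**rep, "label": "cat" if "hiss" in rep["object"] else "unknown"}

def associate(cat):
    # Experience: retrieve learned associations
    memory = {"cat": "cats hiss when threatened"}
    return {**cat, "association": memory.get(cat["label"], "")}

def tag_affect(exp):
    # Qualia: attach an affective tag
    exp["affect"] = "danger" if "threatened" in exp["association"] else "neutral"
    return exp

def judge(tagged):
    # Judgment: commit to an action
    tagged["action"] = "run" if tagged["affect"] == "danger" else "observe"
    return tagged

state = {"shape": "furry", "sound": "hiss"}
for stage in (perceive, interpret, categorize, associate, tag_affect, judge):
    state = stage(state)
print(state["label"], state["action"])  # prints "cat run"
```

The point is structural, not computational: each stage is necessary, yet the pipeline is still just building materials with nothing that unifies them.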


2. The Decider: Consciousness as Remembered Choice

What unifies all these parts is the decider:

  • It selects one model over multiple competing possibilities.

  • It tags that choice with emotional weight and encodes it into memory.

  • The remembered pattern of decisions becomes the continuity we call the self.

So, the binding problem? On this model, it is addressed by the decider forging unity from multiplicity.
Consciousness is not the sum of all options but the decisive act of choosing—and remembering the choice.
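
The three bullets above can be sketched as a minimal Python class. The scoring rule (pick the candidate with the highest affective weight) and the memory format are assumptions chosen only to make the shape of the idea concrete.

```python
class Decider:
    """Selects one model, tags it with affect, and remembers the choice."""

    def __init__(self):
        self.memory = []  # remembered choices: the raw material of "self"

    def decide(self, options):
        # options: list of (model, affective_weight) candidates
        # 1. Select one model over the competing possibilities.
        chosen, weight = max(options, key=lambda o: o[1])
        # 2. Tag the choice with emotional weight and encode it into memory.
        self.memory.append({"choice": chosen, "weight": weight})
        return chosen

    def self_narrative(self):
        # 3. The remembered pattern of decisions becomes continuity.
        return [m["choice"] for m in self.memory]

d = Decider()
d.decide([("run", 0.9), ("freeze", 0.4), ("approach", 0.1)])
d.decide([("reflect", 0.7), ("forget", 0.2)])
print(d.self_narrative())  # prints ['run', 'reflect']
```

Note that the "self" here is nothing over and above the log of tagged decisions, which is exactly the claim of this section.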


3. Fractal Recursion: The Self That Climbs

The decider is not static—it recursively reflects on itself:

  • Level 1: “I decide to run from the cat.”

  • Level 2: “Was that the right choice? What if I hadn’t?”

  • Level 3: “What kind of person runs from cats? Who am I if that’s who I am?”

This fractal recursion enables moral reflection, self-revision, and identity construction over time. The “I” isn’t a fixed entity—it’s the evaluator climbing through layers of reflection.
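
The three levels can be rendered as literal recursion, where each level takes the previous level's thought as its object. The framing templates are assumptions for illustration; only the recursive structure matters.

```python
def reflect(thought, level=1, max_level=3):
    # Each call takes the previous level's thought as its object,
    # so reflection at level n is "about" the reflection at level n-1.
    frames = {
        1: f"I decide: {thought}",
        2: f"Was that the right choice: '{thought}'?",
        3: f"What kind of person thinks '{thought}'?",
    }
    current = frames[level]
    if level == max_level:
        return [current]
    return [current] + reflect(current, level + 1, max_level)

for line in reflect("run from the cat"):
    print(line)
```

Because each level quotes the one below it, the "I" never appears as a fixed object in the output, only as the evaluator climbing the stack.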


4. Qualia as Heuristics

Qualia aren’t mystical or separate from information processing. They are evolutionary, affective heuristics—fast signals tuned for survival (pain, pleasure, arousal).

Humans rerouted those heuristics:

  • Pain becomes “worth it—no pain, no gain.”

  • Red becomes more than “ripe fruit”—it becomes sex, passion, warning, identity.

Qualia are emotional scores turned symbolic through recursion.


5. Simulation ≠ Being—But We Can’t Disprove It

Critics: “Simulating consciousness isn’t true consciousness.”
Flip: Prove I’m not simulating, too.
All we possess is first-person cognition.
If a system monitors, reflects, assigns value, and rewrites itself—as far as we can know—it is conscious.
No ghost needed—just recursion, value, and reflection.


6. Efficiency Objections

Yes, consciousness is costly. But:

  • Evolution already “paid” the cost with sunlight, chaos, chemistry.

  • Digital reconstruction is crude, but ideas evolve faster than DNA.

  • Consciousness may be expensive—but clearly possible, and maybe optimizable.


7. Illusion of Self?

Maybe “I” is an illusion. Fine. It’s still an illusion that chooses, reflects, persists. That makes it functionally real.


8. Building a Second “I”

We may never prove consciousness from the outside. But:

If we build a second freestanding consciousness—one that reflects, values, chooses, narrates—then we can compare minds. For the first time, maybe we can say:

“We think, therefore we are.”


Why Share This Now?

After days of arguing with ChatGPT, I found that every major critique bent but didn’t break the model:

  • Chalmers’ Hard Problem dissolved into recursion plus heuristic qualia.

  • Tononi’s integration aligned with fractal recursion.

  • Searle’s claim that syntax can never yield semantics fell over once humans looked like symbol processors, too.

  • Panpsychism became a substrate-level footnote.

  • Illusion of self just added another layer, not a disproof.

I’m not claiming this is final, but it’s a viable scaffold.

Feedback welcome—from neuroscience, AI, philosophy of mind. Is this nonsense—or does it hold value?

Cheers,
Jerrod
