What if consciousness emerges from a predictive loop?

(Author’s Note, November 18, 2024: When I wrote this post eight months ago, I used “predictive” in the title and framework description. My terminology has evolved since then: I now avoid “prediction” because it does not precisely describe what is actually happening.

The mechanism isn’t about predicting future speech. When our brains carry out the operation underlying subjective experience (conscious perception, recollection, imagination, thought, and so on), we are not forecasting words we intend to say. Rather, we are reusing, or repurposing, our language faculty in a new way: when we do this looping, we are attending to what we might say, usually without any intention of saying it. The end result, which is our subjective experience, no longer bears an overt quality of language or words, because what we experience is the gist, the meaning. This is why conscious experience doesn’t usually appear to be language-based: what we attend to is already the end product of the language process. To get a feel for this, think of becoming so absorbed in an adventure novel that you forget you are reading; you experience the content of the book directly, much as if you were watching a movie or observing the action first-hand.

So subjective experience is not prediction in any standard sense. It is the result of a process that begins with incipient language expression: a language output signal that travels only partway down the output channel. This sets up a resonance within the input channel, giving rise to a signal that travels inward along that channel and ultimately activates neuronal proxies, which we experience much as we would actual experience, though it is subjective because it arises from our own speech potential. The original text below keeps my earlier terminology of prediction, but the core mechanism it describes remains the same.)

Most theories of consciousness either struggle with falsifiability or fail to explain key phenomena like split-brain cases and blindsight. I’ve been developing a framework that offers a direct, testable hypothesis:

Conscious experience emerges when potential language expressions loop back through the brain’s existing representational systems.

The key mechanism is surprisingly simple (a toy sketch in code follows the list):

  1. The brain naturally discovers it can predict its own language output

  2. This prediction activates the same neuronal patterns that would be activated by hearing/seeing that expression

  3. This “looping” creates what we experience as consciousness
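
To make the looping concrete, here is a minimal toy sketch in Python. It is only an illustration, not a model of the framework itself: it caricatures the three steps above as a program that generates a candidate expression, routes it back through the same pathway that heard or read language would take, and treats the resulting internal activation as the “experienced” content. The names (ToyBrain, perceive, candidate_expression, loop) are placeholders with no claim to neural realism.

```python
from dataclasses import dataclass, field


@dataclass
class ToyBrain:
    """Caricature of the loop: one shared representational store that is
    updated the same way whether language arrives from outside or from
    the brain's own unspoken candidate expression."""
    activations: dict = field(default_factory=dict)

    def perceive(self, phrase: str) -> list:
        """Input channel: activate representations for an incoming phrase,
        exactly as if it had been heard or read."""
        concepts = phrase.lower().split()
        for c in concepts:
            self.activations[c] = self.activations.get(c, 0) + 1
        return concepts

    def candidate_expression(self, situation: str) -> str:
        """Step 1: anticipate what could be said about a situation.
        A trivial template stands in for the whole language faculty."""
        return f"I notice {situation}"

    def loop(self, situation: str) -> list:
        """Steps 2 and 3: the unspoken candidate is fed back through the
        same input pathway, so the activated patterns are the ones that
        hearing that sentence would have produced. In the framework, this
        re-entrant activation is what is experienced as the gist."""
        unspoken = self.candidate_expression(situation)  # never said aloud
        return self.perceive(unspoken)


if __name__ == "__main__":
    brain = ToyBrain()
    print(brain.loop("a red apple on the table"))  # representations activated by the loop
    print(brain.activations)                       # same store external speech would update
```

The only point of the sketch is that the “hearing” pathway and the “looping” pathway converge on the same representations; everything else about it is incidental.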

This framework makes specific falsifiable predictions:

  • Split-brain patients should show distinct behavioral patterns explicable through this loop mechanism

  • Blindsight and similar phenomena represent cases where sensory processing occurs without engaging this loop

Unlike global workspace theories or integrated information approaches, this framework suggests consciousness depends on a specific predictive looping function that emerged through pattern discovery in the brain’s own activity.

Unlike predictive processing theories that focus on perception, this model suggests that consciousness arises from the brain at large predicting its own potential expressions.

I’ve presented the complete framework through a series of dialogues: Seven Dialogues between Haplous and Synergos [https://sites.google.com/view/7dialogs/dialog-1]

I’m particularly interested in feedback on:

  • If each generation must rediscover the looping mechanism (rather than inheriting it directly), does this align with what we observe in child development and language acquisition?

    • Could this explain why consciousness doesn’t emerge immediately in infancy but follows a specific developmental trajectory?

  • What other phenomena might this framework help explain?

    • Could it offer insights into autism, psychopathologies, or other cognitive conditions? Are there unexpected domains where a predictive loop model of consciousness might be applicable?

  • Are there any aspects of consciousness or cognition that you think this framework might struggle to explain?

    • I welcome the challenge to identify such cases and incorporate them as validating principles!

The dialogues develop these ideas step by step, with each building on the previous, so reactions to the framework as a whole are most valuable after reading at least the first few conversations.
