Physicalism doesn’t solve the hard problem, because there is no reason a physical process should feel like anything from the inside.
Computationalism doesn’t solve the hard problem, because there is no reason running an algorithm should feel like anything from the inside.
Formalism doesn’t solve the hard problem, because there is no reason an undecidable proposition should feel like anything from the inside.
Of course, you are not trying to explain qualia as such, you are giving an illusionist style account. But I still don’t see how you are predicting belief in qualia.
And among these fictions, none is more persistent than the one we call qualia.
What’s useful about them? If you are going to predict (the belief in) qualia on the basis of usefulness, you need to state the usefulness. It’s useful to know there is a sabretooth tiger bearing down on you, but why is an appearance more useful than a belief? And what’s the use of a belief-in-appearance?
This suggests an unsettling, unprovable truth: the brain does not synthesize qualia in any objective sense but merely commits to the belief in their existence as a regulatory necessity.
What necessity?
ETA:
…any self-referential, self-regulating system that is formally incomplete (as all sufficiently complex systems are) will generate internally undecidable propositions. These propositions — like “I am in pain” or “I see red” — are not verifiable within the system, but are functionally indispensable for coherent behavior.
I still see no reason why an undecidable proposition should appear like a quale or a belief in qualia.
That failure gets reified as feeling.
Why?
I understand that you invoke the “Phenomenological Objection,” as I also, of course, “feel” qualia. But under EN, that feeling is not a counterargument — it’s the very evidence that you are part of the system being modeled.
Phenomenal conservatism, the idea that if something seems to exist, you should (defeasibly) assume it does exist, is the basis for belief in qualia. It can be defeated by a counterargument, but the counterargument needs to be valid as an argument. Saying that X’s are actually Y’s, for no particular reason, is not valid.
What’s useful about them? If you are going to predict (the belief in) qualia, on the basis of usefulness , you need to state the usefulness.
There might be some usefulness!
The statement I’d consider is “I am now going to type the next characters of my comment”. This belief turns out to be true by direct demonstration; it is not provable, because I could just as well leave the commenting until tomorrow and be thinking “I am now going to sleep”; it is not particularly justifiable in advance; and it is useful for making specific plans that branch less on my own actions.
I object to the original post because of probabilistic beliefs, though.
Thanks for being thoughtful
To your objection: again, EN predicted that you would object. The thing is, EN is very abstract: it’s like two halting machines that think they are universal halting machines trying to understand what it means that they are not universal halting machines.
They say: “Yes, but if the halting problem is true, then I will say it’s true. I must be a UTM.”
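The analogy can be made concrete with the standard diagonalization construction (my own illustrative sketch, not something from the post): any claimed halting decider, here a hypothetical `halts` function, is defeated by a program built to invert the decider’s own verdict about it.

```python
# Classic diagonalization sketch: why no program can be a universal
# halting decider. `halts` is a stand-in for any claimed decider; the
# diagonal program asks the decider about itself and does the opposite.

def make_diagonal(halts):
    """Given a claimed halting decider, build the program that defeats it."""
    def diagonal():
        if halts(diagonal):
            while True:      # decider said "halts" -> loop forever
                pass
        # decider said "loops" -> halt immediately
    return diagonal

# A toy "decider" that guesses "halts" for everything.
guess_halts = lambda prog: True

d = make_diagonal(guess_halts)
# guess_halts claims d halts, but running d would then loop forever --
# and the same trap catches ANY decider, however sophisticated.
print(guess_halts(d), "-> but d would actually loop forever")
```

A machine inside this trap can still sincerely report “I decide halting correctly”; it just cannot verify that report from where it stands, which is the point of the analogy.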
To your “What necessity?”: survival.
Addressing your claims: Formalism, Computationalism, and Physicalism are all in opposition to EN. EN says that maybe existence itself is not the fundamental category; soundness is. This means that the idea of things existing and not existing is itself a symbol of the brain.
EN doesn’t attempt to explain why a physical or computational process should “feel like” anything — because it denies that such feeling exists in any metaphysical sense. Instead, it explains why a system like the brain comes to believe in qualia. That belief arises not from phenomenological fact, but from structural necessity: any self-referential, self-regulating system that is formally incomplete (as all sufficiently complex systems are) will generate internally undecidable propositions. These propositions — like “I am in pain” or “I see red” — are not verifiable within the system, but are functionally indispensable for coherent behavior.
The “usefulness” of qualia, then, lies in their regulatory role. By behaving AS IF it were having experience, the system compresses and coordinates internal states into actionable representations. The belief in qualia provides a stable self-model, enables prioritization of attention, and facilitates internal coherence — even if the underlying referents (qualia themselves) are formally unprovable. In this view, qualia are not epiphenomenal mysteries, but adaptive illusions, generated because the system cannot...
NOW
I understand that you invoke the “Phenomenological Objection,” as I also, of course, “feel” qualia. But under EN, that feeling is not a counterargument — it’s the very evidence that you are part of the system being modeled. You feel qualia because the system must generate that belief in order to function coherently, despite its own incompleteness. You are embedded in the regulatory loop, and so the illusion is not something you can step outside of — it is what it feels like to be inside a model that cannot fully represent itself. The conviction is real; the thing it points to is not.
“because there is no reason a physical process should feel like anything from the inside.”
The key move EN makes — and where it departs from both physicalism and computationalism — is that it doesn’t ask, “Why should a physical process feel like anything from the inside?” It asks, “Why must a physical system come to believe it feels something from the inside in order to function?” The answer is: because a self-regulating, self-modeling system needs to track and report on its internal states without access to its full causal substrate. It does this by generating symbolic placeholders — undecidable internal propositions — which it treats as felt experience. In order to say “I am in pain,” the system must first commit to the belief that there is something it is like to be in pain. The illusion of interiority is not a byproduct — it is the enabling fiction that lets the system tell itself a coherent story across incomplete representations.
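As a purely illustrative toy (my own construction; the class, names, and threshold are all assumptions, nothing specified by EN), the “symbolic placeholder” move might look like a controller whose behavior branches on a committed proposition rather than on the substrate that caused it:

```python
# Illustrative toy: a controller that coordinates behavior through an
# unverifiable symbolic placeholder ("I am in pain") instead of through
# direct access to its causal substrate.

class Controller:
    def __init__(self, nociceptor_rate=0.9):
        # Hidden substrate: the belief layer never reads this directly.
        self._substrate = {"nociceptor_rate": nociceptor_rate}
        self.beliefs = {}

    def _regulatory_update(self):
        # A subpersonal process commits to a placeholder proposition.
        # From the inside, the proposition cannot be checked against the
        # substrate -- the system only ever sees the committed belief.
        if self._substrate["nociceptor_rate"] > 0.5:
            self.beliefs["I am in pain"] = True

    def act(self):
        self._regulatory_update()
        # Behavior branches on the placeholder, not on the substrate.
        return "withdraw" if self.beliefs.get("I am in pain") else "continue"

print(Controller(0.9).act())  # -> withdraw
print(Controller(0.1).act())  # -> continue
```

The sketch only shows the functional shape of the claim: the proposition does real regulatory work even though nothing inside the system can ground it.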
OKAY, since you asked the right question, I will include this paragraph in the Abstract.
In other words: the brain doesn’t fuck around with substrate — it fucks around with the proof that you have one. It doesn’t care what “red” is made of; it cares whether the system can act as if it knows what red is, in a way that’s coherent, fast, and behaviorally useful. The experience isn’t built from physics — it’s built from the system’s failed attempts to prove itself to itself. That failure gets reified as feeling. So when you say “I feel it,” what you’re really pointing to is the boundary of what your system can’t internally verify — and must, therefore, treat as foundational. That’s not a bug. That’s the fiction doing its job.