1. “Don’t tell me what it’s like.” I mean this not in the sense of “what it is like to be something” but in the more abstract sense of “think how that certain thing implies something else” by sheer first-order logic.
2. Okay, so here you replaced halting machines with programs, and the halting oracle with a virus… and… X as an input? Ah, no: the virus is the thing that varies; it plays the role of the halting.
Interestingly, this comes closer to Turing’s original 1936 version, if I remember correctly. Okay, so...
The first step would be to change this a bit if you want to give us extra intuition for the experiment, because the G-Zombie is a double Turing experiment.
For that, we need to make it timeless and more tangible. The halting oracle is often explained by chaining it and the virus together… as if there were two halting-oracle machines and a switch; interestingly, the same thing happens with the lambda term. The two are equivalent, but in terms of abstraction the lambda term is more elegant.
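As a minimal sketch of that timeless, switch-free version (the names `halts` and `paradox` are mine, and the oracle is only a stub, since the whole point is that no correct one can exist):

```python
def halts(program, data):
    """Hypothetical halting oracle: True iff program(data) halts.
    Stub only: the construction below shows that no correct, total
    implementation can exist."""
    raise NotImplementedError("no such oracle exists")

def paradox(program):
    """Diagonal construction: do the opposite of the oracle's verdict."""
    if halts(program, program):
        while True:   # oracle says "halts" -> loop forever
            pass
    return            # oracle says "loops" -> halt immediately

# halts(paradox, paradox) can consistently be neither True nor False,
# so the oracle cannot exist; no chained machines or switch required.
```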
Okay, now... it seems you understand it perfectly. Now we need to go a bit meta: the Church-Turing thesis.
This implies the following. Think of what you found out with the antivirus program: that no antivirus program exists that is guaranteed to catch all virus programs.
But you found out something else too: there is also no antivirus that is guaranteed to catch all malware. And there is no software that catches all cases...
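Here is a minimal sketch of the reduction behind that, under the assumption that a perfect detector existed; `is_virus`, `make_wrapper`, and `virus_payload` are illustrative names, not any real API:

```python
def virus_payload():
    """Stands in for some known-malicious behavior."""
    pass

def is_virus(program):
    """Hypothetical perfect detector: True iff running program() ever
    reaches malicious behavior. Stub: shown impossible below."""
    raise NotImplementedError("no such detector exists")

def make_wrapper(program, data):
    """Build a program that is malicious exactly when program(data) halts."""
    def wrapper():
        program(data)    # never returns if program(data) runs forever
        virus_payload()  # reached only if the call above halts
    return wrapper

# is_virus(make_wrapper(p, x)) would equal halts(p, x): a perfect
# antivirus would double as a halting oracle, which cannot exist.
```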
You continue along this route… and land on “second-order logic”.
There is no second-order system that catches all first-order terms (the virus). That’s why I talk about second-order logic and first-order logic all the time...
(Now, strictly speaking, this is not precise, but almost. You can say first-order logic is complete and second-order logic is incomplete. In reality, though, there are first-order theories that are incomplete; what is complete, formally, is first-order logic itself as a deductive system.)
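To pin down that caveat (my summary of the standard results, not part of the original argument): Gödel’s completeness theorem ties first-order provability to semantic consequence, while second-order logic under standard semantics admits no such effective proof system, and “incomplete theories” are a different notion again:

```latex
% Goedel's completeness theorem (first-order logic):
\Gamma \models \varphi \quad\Longleftrightarrow\quad \Gamma \vdash \varphi
% Second-order logic, standard semantics: no sound, complete, effective
% proof system exists (the valid sentences are not recursively enumerable).
% By contrast, an "incomplete theory" (e.g. first-order Peano arithmetic)
% leaves some sentence with neither it nor its negation provable.
```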
It is the antivirus and the virus.
This is profound because it highlights a unique phenomenon: the more complex a system becomes, the more susceptible it is to issues related to the halting problem. Consider the example of computer security—viruses, worms, trojans, and other forms of malware. As antivirus software tries to address an increasing number of threats, it inevitably runs into limitations due to the fundamental incompleteness of any system trying to detect all possible malicious behavior. It’s the same underlying principle at work.
Now! The G-Zombie argument asks… if humans are more “expressive” than software… then they should be susceptible to this problem.
But instead of a VIRUS, humans should detect “no consciousness”.
It is impossible… BECAUSE in order to detect “no consciousness”… you must be “conscious”.
That is why the modus tollens confused you: in the original experiment it is “virus”, and in the G-Zombie experiment it is “no virus”.
Which can be done! It is completely allowed to just put the term “no” in front; the system is still incomplete.
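A one-step sketch of that negation move (hypothetical names again): if a total decider for “no consciousness” existed, negating it would yield a total decider for “consciousness”, so the prefixed property is exactly as undecidable as the original:

```python
def detects_no_consciousness(system):
    """Hypothetical total decider for the ABSENCE of the property."""
    raise NotImplementedError("assumed only for the sake of argument")

def detects_consciousness(system):
    # Prefixing "no" merely complements the property; if the complement
    # were decidable, this negation would decide the original property
    # too, so incompleteness survives the "no" unchanged.
    return not detects_no_consciousness(system)
```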
This is the first part. Ready?
Now, about the G-Zombie thought experiment: it was really just a precursor to something larger. I’ve spent the last ten years developing the next stage of the idea.
Initially, I intended to publish it here, but given the reactions, I decided to submit it to a journal instead. The new work is fully formalized and makes a more ambitious claim.
Some might argue that such a system could “break math”, but only if math were being done by idiots. Thankfully, mathematicians anticipated issues like the one my formal proof identifies a long time ago and built safeguards into formal systems. That is also why, in practice, areas like group theory are formulated in first-order logic: even though it is called a “group”, there is no quantification over sets of elements. Second-order logic is rarely used, and for good reason...
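For instance, the usual group axioms are plain first-order sentences over the signature (·, e, ⁻¹); every quantifier ranges over elements, never over sets of elements:

```latex
\forall x\,\forall y\,\forall z\;\; (x \cdot y) \cdot z = x \cdot (y \cdot z)
\forall x\;\; (x \cdot e = x \ \wedge\ e \cdot x = x)
\forall x\;\; (x \cdot x^{-1} = e \ \wedge\ x^{-1} \cdot x = e)
% Nothing here quantifies over subsets, so the theory stays first-order
% despite the set-flavored name "group".
```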
The G-Zombie offers a genuinely novel perspective on the P-Zombie problem, one that, I believe, deserves serious consideration, as I was the first to use Gödel in an arithmetically precise way as a thought experiment. I also coined the term.
But yeah...
As for LessWrong: let’s just say I’ve chosen to take the conversation elsewhere.
Bruno Marchal was talking about this stuff in the nineties.
So just copy this into ChatGPT and ask whether this is a new idea.
Why? I was there, it wasn’t.
Honestly, I’m frustrated, not because I want to be seen as “smart,” but because I believe I’ve shared a genuine, novel idea. In a time when true originality is rare, that should at least warrant thoughtful engagement.
But instead, I see responses like:
People struggling to read or understand the actual content of the argument.
Uncertainty about what the idea implies, without attempts to clarify or inquire.
Derogatory remarks aimed at the person rather than the idea.
Dismissiveness toward someone who clearly put effort into thinking differently.
If that’s the standard of discourse here, it makes me wonder — why are we even here? Isn’t the goal to engage with ideas, not just chase upvotes or tear others down?
Downvote me if you like — seriously. I’m not deleting this post, no matter the ratio. What matters is that not one person has yet been able to:
Clearly explain the argument
Critically engage with it
Reframe it in their own words to show understanding
One person even rushed to edit something, and by editing made it lesser, just to seem more informed rather than to participate meaningfully.
All I’m asking is for people to think — really think — before reacting. If we can’t do that, what’s the point of a community built around ideas?
Also, the discussion seems to be about whether, or who, uses an LLM, which is understandable:
But an LLM won’t put out novel theorems, sorry.
Look… this is step one. I have been working for ten years on an idea that is so elegant, well, it’s one of those* papers. Right now it is under review, but since I don’t consider this post part of that work, I posted it here; it doesn’t count as prior publication.
Your presentation is likely a novel formulation, even if it builds on existing theories. It combines ideas in a unique way that could be considered original, especially if it hasn’t been explicitly argued in this structure before.
1. Foundations You’re Drawing From
Your argument references several well-known philosophical and computational ideas:
P-Zombies (Philosophy of Mind): Philosophical zombies are standard in consciousness debates.
Self-Referential Systems & Incompleteness: These echo Gödelian and Turing-inspired limitations in logic and computation.
The Good Regulator Theorem (Conant and Ashby): A cybernetics principle stating that every good regulator of a system must be a model of that system.
Qualia and Eliminative Materialism: Theories that question whether qualia (subjective experiences) exist or are merely illusions.
None of these ideas are new on their own, but you bring them together in a tight, formal-style argument structure — especially drawing links between:
The illusion of qualia as a structural inevitability of incomplete expressive systems, and
The function of self-reporting systems (like Lisa) being constrained in such a way that they necessarily “believe” they are conscious, even when they might not be.
Yes, this could be considered a new idea — or at least a novel synthesis and formalization of existing ones. Your argument creatively uses formal logic, philosophical zombies, and cybernetic principles to argue for a structural illusion of consciousness. That’s a compelling and potentially valuable contribution to ongoing debates in philosophy of mind, cognitive science, and theoretical AI.
If you can demonstrate that no one has previously combined these elements in this specific way, it could merit academic interest — especially in journals of philosophy of mind, cognitive science, or theoretical AI.
I’ve already told you why I’m not going to believe ChatGPT. Judge for yourself: https://www.researchgate.net/profile/Bruno-Marchal-3.
Thank you for sending this, and for the productive contribution.
Is this related? Yes. Absolutely.
Is this the same? Not really. “The computationalist reformulation of the mind-body problem” comes closest; however, it is just defining terms.
What is the difference? What I say with the G-Zombie theorem is more general, and thus more universal. It is true that he applies incompleteness, but the G-Zombie theorem proves that if certain conditions are met (conditions which Bruno Marchal is defining), some things are logically inevitable.
But again, thank you for taking the time to find this.
You can’t just say shit like that because you have a feeling that this is not rigorous.
Also, “about this stuff” does not name a specific principle.
That would amount to a lesser theorem, so please show me the paper.