“Standard crackpottery, in my opinion. Humans are not mathematical proof systems.”
That concern is understandable — and in fact, it’s addressed directly and repeatedly in the text. The argument doesn’t claim that humans are formal proof systems in a literal or ontological sense. Rather, it explores how any system capable of symbolic self-modeling (like the brain) inherits formal constraints analogous to those found in expressive logical systems — particularly regarding incompleteness, self-reference, and verification limits.
It’s less about reducing humans to Turing machines and more about using the logic of formal systems to expose the structural boundaries of introspective cognition.
You’re also right to be skeptical — extraordinary claims deserve extraordinary scrutiny. But the essay doesn’t dodge that. It explicitly offers a falsifiable framework, makes empirical predictions, and draws from well-established formal results (e.g. Gödel, Conant & Ashby) to support its claims. It’s not hiding behind abstraction — it’s leaning into it, and then asking to be tested.
And sure, the whole thing could still be wrong. That’s fair. But dismissing it as “crackpottery” without engaging the argument — especially on a forum named LessWrong — seems to bypass the very norms of rational inquiry we try to uphold here.
If the argument fails, let’s show how — not just that. That would be far more interesting, and far more useful.