I will say that your rationale holds up in many ways, though in some ways it doesn’t. I grant that you won the argument. You are mostly right.
“Well, I’m not making any claims about an average LessWronger here, but between the two of us, it’s me who has written an explicit logical proof of a theorem and you who is shouting “Turing proof!”, “Halting machine!” “Godel incompletness!” without going into the substance of them.”
Absolutely correct. You won this argument too.
Considering the antivirus argument, you failed miserably, but that’s okay: An antivirus cannot fully analyze itself or other running antivirus programs, because doing so would require decompiling the executable code back into its original source form. Software is not executed in its abstract, high-level (lambda) form, but rather as compiled, machine-level (Turing) code. Meaning, one part of the software is placed inside the Turing machine as a convention. Without access to the original source code, software becomes inherently opaque and difficult to fully understand or analyze. Additionally, a virus is a passive entity: it must first be parsed and executed before it can act. This further complicates detection and analysis, since inactive code does not reveal its behavior until it runs.
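To make the “inactive code does not reveal its behavior until it runs” point concrete, here is a minimal toy sketch (all names are invented for illustration, not any real scanner): a purely static signature check misses a payload whose malicious content only comes into existence at run time.

```python
# Toy illustration (hypothetical names): a static signature scanner that
# inspects source text cannot see behavior that only appears at run time.

def signature_scan(source: str) -> bool:
    """Naive static detector: flags code containing a known bad marker."""
    return "DELETE_ALL" in source

# Payload 1: the malicious marker is written out literally -> detected.
overt = 'action = "DELETE_ALL"'

# Payload 2: the same marker is assembled only when the code executes,
# so a purely static inspection of the text never sees it.
covert = 'action = "DELETE" + "_" + "ALL"'

assert signature_scan(overt) is True
assert signature_scan(covert) is False   # the scanner misses it...

# ...yet executing the covert payload produces the very same behavior:
ns: dict = {}
exec(covert, ns)
assert ns["action"] == "DELETE_ALL"
```

This is of course far weaker than the full halting-problem argument, but it shows the asymmetry in miniature: the behavior exists only in the running (Turing) form, not in the passive text the scanner reads.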
This is where it gets interesting.
“Maybe there is an actual gear-level model inside your mind how all this things together build up to your conclusion but you are not doing a good job at communicating it. You present metaphors, saying that thinking that we are conscious, while not actually being conscious is like being a merely halting machine, thinking that it’s a universal halting machine. But it’s not clear how this is applicable.”
You know what? You are totally right.
So here is what I really say: If the brain is something like a computer… it has to obey the rules of incompleteness. So “incompleteness” must be hidden somewhere in the setup. We have a map:

Tarski’s undefinability theorem: In order to understand “incompleteness”, we are not allowed to use CONCEPTS. Why? Because CONCEPTS are incomplete. They are self-referential. Define a pet: An animal… Define an animal: A life form…

And so on. So this problem is hard… the hard problem of consciousness. BUT there is a chance we can do something. A silver lining.
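The pet/animal/life-form loop can be made concrete with a toy sketch (the definition data below is invented for illustration): if you follow dictionary-style definitions mechanically, the chain eventually revisits a term it already used.

```python
# Toy sketch (hypothetical data): definitions form a graph in which
# "define X in terms of Y" eventually loops back on itself.

definitions = {
    "pet": "animal",
    "animal": "life form",
    "life form": "organism",
    "organism": "living thing",
    "living thing": "animal",   # the chain closes on itself
}

def definition_chain(term: str) -> list[str]:
    """Follow definitions until a term repeats, returning the chain walked."""
    seen, chain = set(), []
    while term in definitions and term not in seen:
        seen.add(term)
        chain.append(term)
        term = definitions[term]
    chain.append(term)  # the repeated (or undefined) term that stops us
    return chain

print(definition_chain("pet"))
# → ['pet', 'animal', 'life form', 'organism', 'living thing', 'animal']
```

The chain never bottoms out in something defined from outside the system; it terminates only by circling back, which is the self-reference the paragraph above is pointing at.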
Tarski’s undefinability theorem IS A MAP. It shows us how to “find” the incompleteness in ourselves. What is our vehicle? First-order logic.

If we use both, follow the results blindly, and (this is important) IGNORE OUR INTUITIONS, we arrive at the SOUND (first-order logic) but not the TRUE (second-order logic) answer.
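As a loose, hedged analogy for sound-but-not-true (a toy setup, not a formalization of Tarski or Gödel): a checker that can only verify finitely many instances is sound, since everything it certifies really is true, yet the universal statement, which is true in the intended model, stays forever beyond its reach.

```python
# Toy contrast (hypothetical setup) between a limited checker that is
# SOUND and the TRUTH of a universal claim in the intended model.

def instance_true(n: int) -> bool:
    """The intended truth: n + 0 == n holds for every natural number n."""
    return n + 0 == n

def provable_up_to(bound: int) -> bool:
    """What the limited system can do: verify instances below a bound.
    Sound (it never certifies a falsehood), but no finite bound ever
    amounts to a proof of the universal statement 'for ALL n'."""
    return all(instance_true(n) for n in range(bound))

assert provable_up_to(10_000)   # every checked instance is true
# Yet the universal claim itself is never certified by any finite check:
# it remains true in the model while staying out of the checker's reach.
```

Again, only an analogy: the real gap between first-order soundness and second-order truth is a theorem, not a resource limit, but the shape of the asymmetry is the same.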