>You can check that every proof you enumerate consists of only valid steps. That check can be performed computationally. Every such proof would prove a true statement.
It cannot. Truth cannot even always be defined, let alone computed (Tarski’s undefinability theorem). Of course you cannot compute something that cannot even be defined; a computation needs defined states and defined outputs to count as a computation as understood in computer science. That is a proven fact, not my opinion.
“Tarski’s theorem, on the other hand, is not directly about mathematics but about the inherent limitations of any formal language sufficiently expressive to be of real interest. Such languages are necessarily capable of enough self-reference for the diagonal lemma to apply to them. The broader philosophical import of Tarski’s theorem is more strikingly evident.
An interpreted language is strongly-semantically-self-representational exactly when the language contains predicates and function symbols defining all the semantic concepts specific to the language. Hence the required functions include the “semantic valuation function” mapping a formula A to its truth value ||A||, and the “semantic denotation function” mapping a term t to the object it denotes. Tarski’s theorem then generalizes as follows: No sufficiently powerful language is strongly-semantically-self-representational.”
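For readers who want the formal shape of the quoted claim, here is the standard one-step sketch (hedged: T stands for any consistent theory strong enough for the diagonal lemma to apply):

```latex
% Sketch: why T cannot define its own truth predicate.
\[
\begin{aligned}
&\text{T-schema (assumed):} && T \vdash \mathrm{True}(\ulcorner A \urcorner) \leftrightarrow A \quad \text{for every sentence } A,\\
&\text{diagonal lemma:}     && T \vdash S \leftrightarrow \lnot\mathrm{True}(\ulcorner S \urcorner) \quad \text{for some sentence } S,\\
&\text{combined:}           && T \vdash \mathrm{True}(\ulcorner S \urcorner) \leftrightarrow \lnot\mathrm{True}(\ulcorner S \urcorner).
\end{aligned}
\]
```

The last line is a contradiction, so no formula True(x) satisfying the T-schema is definable in T’s own language.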
>Reasoning about the continuum can be done in discrete steps. Human reasoning doesn’t use the continuum in any way fundamentally different from how neural networks use floats.
I am not talking about reasoning about the continuum as a representation; in that regard you are correct: I can refer to the continuum or to the real numbers as symbols and reason about them. But that is just not the kind of reasoning I mean. I can, for instance, correctly add two continuous functions symbolically, like sin(x)+sin(x)=2*sin(x).
That is not the kind of reasoning I am talking about, because I am reasoning about the continuum itself, not about your representation of it (or the computer’s). So I am talking about this: [ ], or maybe this ________.
Depending on whether you mean something more tangible or more intangible (both are valid), though the real numbers seem more like the latter, as they are one-dimensional.
I am talking about the text as it appears on your screen, not about how my reasoning and text, whitespace included, are represented inside the computer, which is another question.
You may have an interesting point here: while we can reason about the continuum, it is more striking that we reason within the continuum. As you look at the space around the letters, you will notice it is continuous. There are no steps there. My reasoning right here, even as it appears on your screen, happens within that, and it can even enter the c o n t i n u u m to create more c o n t i n u u m.
One might say the continuum I am reasoning within (or shall I say we, as you are now also in the process of understanding) is the box this comment appears in, as far as the written language goes; or, more broadly, the space in front of my and your computer screens, where the pixels end in the depth dimension and where the written words are. And ultimately outer space, which we are both floating within✨.
>But okay, abstract computers. So you are saying that humans work differently from some abstract computer. This is true in a sense. Why is this interesting, again?
Because abstract computers are a more general notion than physical computers: we use abstract computational logic to program and even to construct physical computers. (In theory there could be “other” computers that are not like that, that do not need to be programmed, or that are programmed in a way that does not rely on concepts like binary logic or binary data; but then you are stretching the definition. I could just define the whole universe as a “computer”, but that tells me nothing except that I arbitrarily chose to define it that way.)
So it implies that it is even more true that humans work differently from real computers, and that real computers cannot replace real humans, not even just in terms of reasoning, let alone perception, feeling, and consciousness, which are all more than just reasoning.
At least abstract computer logic DOES work in the human brain; I will not deny that. Although it is often more quasi-computational: it is approximate and sometimes wrongly applied (literal mathematical mistakes), and computations that are frequently wrong in a practical sense are not <💻⚙computations>, which we expect to actually work the way a computer works; they are more like an elementary school child doing math.
However, real computer logic does not work in the brain, since that logic runs on physical logic gates, silicon, and so on. It is also very precise logic that can add or even multiply extremely long numbers with nearly 100% accuracy. So there it is even more clearly undeniable that a human is not computational in that sense, being far, far more inaccurate.
Tarski only applies to the whole concept of truth. For “things humans can prove” you can use enumeration + a validity check.
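To make “enumeration + a validity check” concrete, here is a minimal sketch in Python. The formal system is a toy one invented for this illustration; the point is only the shape of the procedure: a proof is a finite object, and checking it step by step is computable, with no oracle for truth involved.

```python
from itertools import product

# Toy "formal system" (hypothetical, for illustration only):
# a proof is a sequence of numbers starting from the axiom 0,
# and the single inference rule allows the step n -> n + 1.
# A real proof checker (say, for first-order logic) has the same
# shape: a computable predicate over finite syntactic objects.

def is_valid_proof(steps: tuple) -> bool:
    """Mechanically check every step; no notion of 'truth' is consulted."""
    if not steps or steps[0] != 0:  # a proof must start from the axiom
        return False
    return all(b == a + 1 for a, b in zip(steps, steps[1:]))

def enumerate_valid_proofs(max_len: int, max_val: int):
    """Enumerate all candidate proofs up to a bound, keeping the valid ones."""
    for length in range(1, max_len + 1):
        for steps in product(range(max_val), repeat=length):
            if is_valid_proof(steps):
                yield steps

print(list(enumerate_valid_proofs(3, 3)))
# [(0,), (0, 1), (0, 1, 2)] -- each valid proof establishes its last line
```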
I don’t think humans actually use the continuum much when thinking: the appearance of continuous space between words is just a failure of introspection, not a consequence of using a continuum; the actual elements of human cognition are discrete. And real computers also exist in space; that’s one more reason to consider them.
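As an aside on the float comparison: floats themselves are a discrete grid, which the standard library can show directly (Python 3.9+ for math.nextafter):

```python
import math

# IEEE-754 doubles are a finite, discrete set: between a float and the
# next representable float there is a gap with nothing in between.
x = 1.0
gap = math.nextafter(x, 2.0) - x
print(gap)               # 2.220446049250313e-16, the spacing at 1.0
print(0.1 + 0.2 == 0.3)  # False: discrete rounding, not a continuum
```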
>Because abstract computers are a more general notion than physical computers
No? “A more generalized notion” means all real computers are like your abstract computers, and that is false. Again, there is an obvious counterexample: your abstract computer can’t leave your loop; a real computer can. So the implication that “it is even more true that humans work differently from real computers, and real computers cannot replace real humans” doesn’t work.
I feel an issue here is emotion:
This kind of reasoning might make you feel emotionally uncertain, as it shakes up your assumptions about how the world or the mind works. It can make me feel uncertain too (not sure why, really; maybe I am a bit wary of upending other people’s thinking without being careful enough). This, however, is a rationality site, so I note this, do not give in to emotionality, and would recommend addressing it in some compassionate way. Things like computer logic are largely independent of our emotions (largely in the sense that if we program emotionally, the program might not work, haha).
It might be like an earthquake. The thing is, clinging to certain views for emotional reasons is not only irrational; clinging in general is a cause of dissatisfaction (what the Buddha called dukkha).
The truth is, the world will shake us up. If not gently, by reasoning or by unconventional humans, then in more violent ways, by sickness and death.
But anyway, here is the less emotional part:
>Tarski only applies to the whole concept of truth. For “things humans can prove” you can use enumeration + a validity check.
I am not talking about things humans can prove computationally. That is the whole point: humans are beyond computability. And yes, you might say I am talking about the whole of truth, insofar as that is a meaningful concept; I am not talking about some particular set of true statements, like 1=1, 2=2, etc.
However, human reasoning transcends the human brain; maybe that is the sticking point that keeps you going back to this kind of logic.
Human reasoning is on your screen right here; it is not just in your brain. These are not just symbols. They were actually written by a human and are an expression of human, transcendental, linguistic reasoning. So human reasoning is not just in the brain; it is literally on your screen. That cannot be comprehended in conventionalist or step-wise terms, but it is evident, as it is right here.
>I don’t think humans actually use the continuum much when thinking: the appearance of continuous space between words is just a failure of introspection, not a consequence of using a continuum; the actual elements of human cognition are discrete. And real computers also exist in space; that’s one more reason to consider them.
Huh, it is here: ___________________________________________________________. That is a line continuum in thinking.
That is a spacious continuum surrounded by linguistic reasoning: [ ].
I am not talking about “elements of human cognition”.
That is independent of what you think, as it can be perceived. You can perceive independently of verbal, image-based, or conceptual thinking, even if your perception is not comprehensible to you (which it really is not; even just what relates to it in terms of brain function is far too complex).
I am talking about the continuum of human cognition, which is beyond your personal mind; i.e., it is expressed through my brain to get to your screen, but it is reasoning that is input to both your brain and mine (as photons going from the screen to your retina and into your brain): ___________________________ __
See the small step at the end? That is “one element” of human cognition, a step within the continuum that is not really discrete in any concrete way: >>>[ ]<<<<<
If you want to be more concrete, you could take this ⚫ as a discrete element of human cognition. You are right that these are discrete. But they are not in your brain; they are on your screen, or in the matrix of general geometry, like the geometry of the letters surrounding the box.
Can space be used? How? It is space. Space is not a “thing” to be utilized; that is not what space is. At best it is whitespace to be u t i l i z e d. But as you see, that is just more space; it is not really a thing per se.
And this is not about introspection; it is about extraspection, as the words are on your screen, not inside your brain or just your individual mind. Extraspection is not a “failure of introspection”; it is literally what allows for the perception of the external world.
>No? “A more generalized notion” means all real computers are like your abstract computers, and that is false.
Well, I can grant you that: a more abstractly generalized notion. It seems it is not definable in a broader, generalized way, as I do not know how to “generalize” a literal real computer. It is simply there; it is not something I can generalize like a concept.
We do use a more abstract, generalized notion to make sense of the programming and logic of computers, though. It is more general insofar as it is the set of ALL computations² (computers run only a finite amount of computation, given their physical limits) and it can run indefinitely (a computer stops running once its lifetime is over). So that seems a meaningful sense in which it is broader and more general, but I grant you it cannot quite be applied to a literal physical computer.
Those abstract computations transcend computers, as they can also manifest in the human brain; and they transcend the human brain, as they can manifest in a computer, in someone else’s brain, etc.
²Within a certain framework, like a Turing machine, a programming language, etc.
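To give the footnote’s “framework like a Turing machine” a concrete face, here is a minimal simulator sketch in Python (the machine and its rule table are made up for the example). Note what the abstraction buys: the tape and the step count are unbounded, which is exactly what no physical computer provides.

```python
from collections import defaultdict

def run(transitions, tape_input):
    """Simulate a Turing machine until it halts (if it ever does)."""
    tape = defaultdict(lambda: "_", enumerate(tape_input))  # unbounded tape
    state, head = "start", 0
    while state != "halt":
        state, write, move = transitions[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# A toy machine that flips bits until it runs off the input, then halts.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run(flip, "0110"))  # -> "1001_"
```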
Maybe. I can see how emotions may prevent humans from accepting that humans are not special.
>I am not talking about things humans can prove computationally.
Everything that humans can prove can be proven by that computable algorithm, including things you call “uncomputational”. If your point is just that humans are real, so they can affect space, while abstract computers are abstract, then note that real computers can also affect space, and perceive space better than humans. Is there a part of this you disagree with, or do you just think it is important that “people don’t talk about the real-world effects of computers”?
>I do not know how to “generalize” a literal real computer.
If you need to generalize it to make your point, you can imagine your computer but with infinite memory and arbitrarily fast execution. It can do literally everything a human can.
>If your point is just that humans are real, so they can affect space, while abstract computers are abstract, then note that real computers can also affect space
How can you affect space? Space is physically inert; there is no way to affect it.
I think you meant space as in w h i t e s p a c e. But that is basically just more light on your screen, not darkness, as real space is.
>Everything that humans can prove can be proven by that computable algorithm.
How would an algorithm prove that 1=1?
How do you know it is true, unless you define it that way first?
So it seems a computer cannot prove that, unless you define it as true first.
That, however, can be proven: you can show that validation, so defined, cannot be performed by a computer.
For if you feed a computer the statement 1=1, it could be programmed to output “false”. So if the definition of validation is absolute validation, with no possibility of a false output, computational validation is clearly ruled out, as you can program a computer to falsely claim that 1=1 is false.
However, I doubt you would do that, as you can see the uncomputable consistency and the absolute, 100% certain truth of 1=1.
This means the whole spectrum from completely false to completely true is available to a computable validation mechanism. A true proof is only true and can only be true, as otherwise it would not be a proof, insofar as bivalent logic applies at all (which I admit it does not always do).
If you can validate in a wrong way (meaning a false validation, or deducing a conclusion from a false premise so that you “correctly” conclude something that is nonetheless wrong), that is not really proving.
If you arbitrarily decide or guess whether the proof is accurate or not, it is not a proof.
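For what it’s worth, here is how the 1=1 question above looks to a mechanical proof checker, sketched in Lean 4. The kernel is not handed a prior verdict that the statement is true; it accepts the proof term because both sides of the equation reduce to the same value:

```lean
-- `1 = 1` follows from reflexivity of equality: both sides are
-- literally the same term, so the kernel accepts `rfl` as a proof.
example : 1 = 1 := rfl

-- The same mechanism checks computed equalities:
example : 2 + 2 = 4 := rfl
```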
This statement cannot be computationally verified: This statement cannot be proven.
Ever heard of Gödel?😅
The statement is true but not provable, as nothing can be proven if you do not first have a definition of what is true. Since we can see the Gödel sentence to be true, it follows that we can see as true what is not provable, and hence not computationally verifiable either.
So we can realize truth beyond computational verification.
>For if you feed a computer the statement 1=1, it could be programmed to output “false”. So if the definition of validation is absolute validation, with no possibility of a false output, computational validation is clearly ruled out, as you can program a computer to falsely claim that 1=1 is false.
By that definition of validation, humans never proved anything, because they sometimes say that 1=1 is false.
>Since we can see the Gödel sentence to be true, it follows that we can see as true what is not provable, and hence not computationally verifiable either.
The Gödel sentence for a formal system can’t be proven in that system, but it can be proven in a more powerful system. Humans who see the sentence to be true are just (reasoning in a way equivalent to) using a more powerful formal system. And everything in that system is computably verifiable.
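Concretely, taking PA as the example system (standard facts, assuming PA is consistent): the Gödel sentence G_PA asserts its own unprovability, PA cannot prove it, and PA plus the axiom that PA is consistent can; proofs in that stronger system remain mechanically checkable:

```latex
\[
\mathrm{PA} \vdash G_{\mathrm{PA}} \leftrightarrow \lnot\mathrm{Prov}_{\mathrm{PA}}(\ulcorner G_{\mathrm{PA}} \urcorner),
\qquad
\mathrm{PA} \nvdash G_{\mathrm{PA}},
\qquad
\mathrm{PA} + \mathrm{Con}(\mathrm{PA}) \vdash G_{\mathrm{PA}}.
\]
```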
>Maybe. I can see how emotions may prevent humans from accepting that humans are not special.
Fully agree here. I definitely have this issue as well!
It is like we want to imagine we are so special in our power and intelligence that we could emulate computers at what they are good at. We really cannot. Computers are already unfathomably more capable in their domain, the precise calculation of crystallized information. We also imagine we can program computers to do anything we want.
But again, computers will often do what their internal logic causes them to do, not what we intended to program them to do, and they will always be like that, as that is how a computer operates.
So there is no hope of ever getting to that level. I sometimes want to speed up my own calculation, but I have trouble even getting the accuracy right, like multiplying 3-digit numbers effortlessly. For a computer, that is a trivial joke in terms of difficulty.
>If you need to generalize it to make your point, you can imagine your computer but with infinite memory and arbitrarily fast execution. It can do literally everything a human can.
Huh? I don’t think it could walk, as it does not have legs. I am not being silly here; I think you mean in terms of some abstract computational capability, but that is not what you said.
Again, in a physical sense it is also not true. A computer is made of silicon, so it cannot send information between neurons.
So I am not even sure in which sense it could be true. I think you mean in terms of the precise calculation of crystallized information. In that case you have a very good point, as I already said above.
But then, the computer cannot do this: 💻💡
As THIS💡 is the light produced by the screen you are looking at while reading this, not a computation. And it is not produced by the computer either, but by the screen the computer is connected to.
I assumed it could make legs and neurons instantly, using photons radiating from its chip.