I think you are equivocating between two senses of “computational”. One is something like “dealing with whatever text is in front of you by treating it as a computer program and executing it”. One is something like “acting according to some computer program”.
A human being reading your “program” isn’t doing the first of those. That isn’t what people do. We are not literal computers in the sense of being things that, when presented with code, just execute it.
But that doesn’t mean we are not computational. We might be executing a program that tells us (e.g.) to look out for “meta-ish” things and when we see them try looking one level up.
You might respond along the lines of ‘oh, no, that can’t be it, because the thing I wrote says that “it is not a meta-(meta-...) program”, etc.’ but I think that would be utterly confused: again, whether or not I “am” in some sense a computer program, what I do is not to blindly execute programs people put in front of me.
I don’t see any obstacle to having a literal computer program running on a literal computer with the same properties. In fact, if you have a literal computer program running on a literal computer you would have to go out of your way to make it try to execute every program it sees. (And I don’t think there’s any good reason to do that.)
Maybe this is what you call “meta-ignoring”. I confess that I don’t at all understand why you say that saying you “meta-ignore” a program is “not an argument”. Yes, the thing is in some sense a valid program that will get something trying to execute it into an infinite loop. So what?
I also don’t at all understand why you say that “you cannot process it meta-computationally, as it will get you into a meta-computational loop”. Just because that comment says “it is not a meta-program”? Maybe I don’t understand what you mean by “meta-computationally”—it seems like that is a term important to your argument which you have left entirely undefined. But if it means something like “an automated process executing a program but also observing and reasoning about the execution process”, then I think that means that said process has no reason at all to believe whatever is written in the comments of the program.
So. As a human reading your “program”, I do something akin to what you call “meta-ignoring” it. Nothing about what I do seems to me at all necessarily non-computational. If you specifically ask me to execute the program, I observe that it would put me in an infinite loop and respectfully decline. I don’t attempt to “execute” the comment; it’s just a comment, and comments in a program are often wrong or misleading, and this particular one reads as if it was written by someone who is confused about the nature of computation. I cannot imagine how anything here tells us anything about “the limits of AI”.
The following is true by definition of how a correctly working computer works: it follows the instructions and runs in an endless loop.
However, it does not apply to the working of your mind or mine, as you do not work solely computationally.
START: “<comment> This kind of loop can be left uncomputationally, it cannot be left computationally.” GOTO START
It is true, but one might say it is really really really really really REALLY true.
I guess one could appeal to “computational magic”. So the computer does something magical, but still computational, to exit the loop, like suddenly creating a new loop that was not part of its programming. But that is not how real-life computers work. Maybe on the astral or etheric plane something like that is conceivable, or even happening, but I do not have a lot of conscious access to this currently. It would have to literally be another dimension of (computational) reality.
Suppose I write a computer program that does this:
It reads the text of a program written in your language with “START:” and “GOTO START” and so forth.
It translates that program into some easy-to-execute representation.
It executes it, step by step.
As it executes it, it monitors the internal state of the execution and looks to see if the program ever gets into the exact same state that it has recently been in.
If so, it stops with a message saying “I detected an unproductive infinite loop.”
This is completely computational. More than that, it’s easy to implement. And what will it do when presented with the program in the comment I’m replying to? Why, it will start executing that loop … and then leave it when it discovers that it’s an infinite loop that isn’t changing anything as it runs.
So no, it is not really really really true that “this kind of loop cannot be left computationally”. I just showed you one way in which it might be left computationally. No magic is required.
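To make this concrete, here is a minimal sketch of such a loop-detecting interpreter. It assumes a toy one-statement-per-line language with labels (“NAME:”) and “GOTO NAME” jumps; the function name and the exact language conventions are my own illustration, not anything specified in the thread.

```python
def run_with_loop_detection(lines):
    """Interpret a toy language of labels ("NAME:") and jumps
    ("GOTO NAME"), one statement per line, while watching the
    machine state for repeats.  Instead of looping forever on an
    unproductive loop, report it and stop."""
    labels, code = {}, []
    for raw in lines:
        stmt = raw.strip()
        if stmt.endswith(":"):
            labels[stmt[:-1]] = len(code)   # a label marks the next instruction
        elif stmt:
            code.append(stmt)
    pc, seen = 0, set()
    while pc < len(code):
        # In this toy language the whole machine state is the
        # instruction pointer; seeing it again means nothing is changing.
        if pc in seen:
            return "I detected an unproductive infinite loop."
        seen.add(pc)
        instr = code[pc]
        pc = labels[instr[5:]] if instr.startswith("GOTO ") else pc + 1
    return "Program halted normally."
```

Feeding it the `START: GOTO START` program makes it begin executing the loop and then stop with the infinite-loop message, entirely computationally.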
It is, indeed, true by definition that a correctly working computer follows the instructions. Provided that by “the instructions” you mean its actual program, as opposed to whatever programs it might find itself analysing.
If my brain were (literally or as-good-as) following a program that included a trivial infinite loop like that, I would be catatonic. It follows that my brain is not in fact following a program that includes such a trivial infinite loop. So what? How does that tell us anything about the nature of my brain, or minds in general, or computation in general?
>It reads the text of a program written in your language with “START:” and “GOTO START” and so forth.
Those are the program and will put the program into a loop. If you write a different program, you did not follow my instructions, but wrote your OWN program, which maybe is a meta-computational-program.
Of course a meta-computational-program can be correct in some ways. I did not dispute that. But it is a different level. I am talking about base-level computation here, as there is no meta-computation without a base level (which is why a lot of software is utter rubbish nowadays; it is written as if there is no base level to take into account).
>As it executes it, it monitors the internal state of the execution and looks to see if the program ever gets into the exact same state that it has recently been in.
That sounds a bit like solving the halting problem? The halting problem is proven to be not solvable by computers in general. Hence why I said KIND of loop. I guess any specific loop can be exited by computation that just is a bit error-prone, or does not quite follow the program given (maybe because the high-level programming language is not well translated into a low-level programming language). What I am talking about here is most basic assembler logic; it is not anything complex in terms of the computational logic given, although of course it is maybe complex in terms of the computational & uncomputational logic already operating in our minds.
>So no, it is not really really really true that “this kind of loop cannot be left computationally”. I just showed you one way in which it might be left computationally. No magic is required.
Meta-computationally, as you said, your program ignores the START and GOTO START program instructions and does something else. That is a program that meta-ignores and meta-computes something. That is not what I am talking about on the base level (the comment is something different, but not meant to be processed by a computer). So unfortunately you are “not even wrong”. You are talking about meta-computation and meta-ignoring, while I am talking about base-level computation.
START [this program will not halt computationally, but does halt uncomputationally] (END if non-computational logic is applied) GOTO START
Maybe this is clearer? You can execute it, as you can function as a simple computer, and realize the comment is true. I don’t see the issue. There is computational halting, but this statement is a looping program, so it won’t halt computationally. So the only other option left IS that it halts uncomputationally (who says uncomputational processes can’t halt? You might run a busy beaver a few steps, like people already did, and then halt; you could get the same states via computation, but that doesn’t make the busy beaver a computational process), or that you ignore logic.
If you ignore logic, that might be good for meditation, and in that way it is a good idea, to be fair, but it doesn’t directly help to correctly program a computer or draw correct conclusions about uncomputability.
Again, the logic is rather simple; it is just that for some reason we do not like logic sometimes, I guess because we are trans-rational and trans-logical beings. That is being human, I suppose. Thank God. We do not want to ignore feelings and emotions and wild ideas and playing with language in favor of mere logic. But still, logic has its place.
I would like functioning software, but people insist that their computers should “understand” their spaghetti code that is barely logical and produces constant errors.
If “the program” is your thing with GOTO START and nothing else, then sure, something that executes “the program” will loop. This doesn’t say anything at all about the general capabilities of a program thinking about programs. If your goal is to say something about “the limits of AI” then you have to address those general capabilities.
The halting problem is not solvable in general by computers. Nor, so far as anyone knows, is it solvable in general by people. Many particular cases of the halting problem, such as your trivial infinite loops, are extremely solvable by simple software (and also by humans).
Analysing a program rather than blindly executing it is not “ignoring logic”. And even actually ignoring logic does not imply not being computational “underneath”.
I’m sorry to be increasingly blunt, but I do not think you are thinking clearly about this stuff at all, and I think you have persuaded yourself that your muddle is actually insight, as a result of which you are not listening when others try to explain what you are muddled about.
>Analysing a program rather than blindly executing it is not “ignoring logic”. And even actually ignoring logic does not imply not being computational “underneath”.
Blindly executing? Where did I say to blindly execute it? I did not once state this, or imply this in the text.
In fact I stated the opposite: “And read comments or elucidations to understand what it is about.”
Which means not blindly, but with the semantic meaning of the text in mind.
If you ignore logic and semantics, that is operating blindly as far as this post is concerned.
>I’m sorry to be increasingly blunt, but I do not think you are thinking clearly about this stuff at all,
No, it’s OK. However, logically and semantically, what you think about whether I think clearly has no implications for whether I actually do.
I have not persuaded myself of anything. This post is meant to yield understanding, not to be persuasive to myself or others. Truth is true regardless of whether it is persuasive. If I correctly calculate the 8988343th digit of pi, my stating it might not be persuasive, as others might not know whether it is correct, but it is still true.
You didn’t say to blindly execute it. But you did (unless I misunderstood, in which case please do correct me) repeatedly imply that being “computational” implies blindly executing it.
A comment saying “this loop can only be exited non-computationally” does not oblige a person or program looking at the code to either loop infinitely or be non-computational, any more than a comment saying “this loop can only be exited on paying gjm a million dollars” obliges a person or program looking at the code to either loop infinitely or give me a pile of money.
Obviously there is no necessary logical connection between whether I think you are thinking clearly and whether you actually are; people make mistakes sometimes. But many people, when told “I think you are not thinking clearly” will consider the possibility that they are (1) actually not thinking clearly and/or (2) not communicating their meaning clearly. I was hoping you might do that.
>You didn’t say to blindly execute it. But you did (unless I misunderstood, in which case please do correct me) repeatedly imply that being “computational” implies blindly executing it.
No, in fact I said you can execute it computationally and transcend it by understanding meta-computation (you can run a computation and still think of it as something you are not “in” or engaged in, but that implies you are beyond computations as you can transcend them) or how something can be computational in one dimension, and non-computational in another dimension (like the +1 computation of 1,2,3,4,5,6… being created into towers of the size of the busy beaver function), something I call computational-orthogonally-uncomputational.
I see the issue of thinking you “should” run the program, but that is more a psychological thing, I did not say that, and did not really mean that either, it would be better if you feel inspired by this text to re-evaluate your assumptions🙂.
If you are computational like a literal computer, it does imply blindly executing it. That is what a computer would do, just execute the loop. Unless you assume computer spirit, which is a bit of a rabbit hole. Although to be fair I would not disregard it; it just does not affect much about how computers function, as they function as they are programmed if they work well. I would be more likely to say sp1r1t, as that sounds more like something a computer would say haha. Well, actually in a way I am this computer (referencing your computer), as I am manifesting to you through this text on your screen, so to speak, 0101011111000 haha. But that only works because there is another human somewhere else that is sending this to you via the internet. I have no way to intelligently talk to you otherwise. Except maybe through spooky means like spooky errors, but again, a bit of a rabbit hole isn’t it.
In any case you literally are not solely computational already, let alone a literal computer (apart from when you assume that role by executing the instruction as if you were a computer). That is my whole point. You won’t execute it blindly because you are not a computer, let alone a simple one (although we do have computational functions which we can utilize).
At the very least, executing it will make you feel interested in the rest of the text, or perhaps insecure or confused, which is the start of not being blind about it and reading the context or loosening your assumptions a little. You will probably not get into a trance of reading it for hours, let alone until you cease functioning physically or are shut off like a good computer would be.
I have said this several times and you simply haven’t engaged with it at all, so this is my last attempt.
>If you are computational like a literal computer, it does imply blindly executing it. that is what a computer would do, just execute the loop.
This is just flatly false. Of course blindly executing the code is one thing a literal computer can do when faced with it, but there are others, such as translating it into directly-machine-executable instructions, or checking it for certain classes of bugs, or counting the words in it, or executing it while watching for easily-spotted infinite loops.
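As one illustration, a purely static pass over the code can both count its words and flag the trivial self-loop without executing a single instruction. This is a sketch using the same toy label/GOTO notation from earlier in the thread; the `analyze` helper is hypothetical.

```python
def analyze(source):
    """Treat the program purely as text: count its words and flag
    the trivial pattern of a label immediately followed by a jump
    back to that same label, all without executing a single step."""
    tokens = source.split()
    trivial_self_loop = any(
        a.endswith(":") and b == "GOTO" and c == a[:-1]
        for a, b, c in zip(tokens, tokens[1:], tokens[2:])
    )
    return {"words": len(tokens), "trivial_self_loop": trivial_self_loop}
```

Calling `analyze("START: GOTO START")` reports three words and a trivial self-loop, and the analyser itself never loops at all.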
The fact that human beings looking at the code don’t blindly execute it tells you nothing about whether human beings are computational.
It is still not clear to me whether your argument is that because humans reading the code do something smarter than executing it blindly, therefore they are doing something non-computational, therefore AI-running-on-a-computer can’t do what humans do. Maybe it’s something else. But if it’s something else, you haven’t successfully made clear to me (or, so far as I can tell, to any other readers) exactly what that is. The OP promised, in so many words, “a clearly structured argument”, but it does not contain a clearly structured argument.
>This is just flatly false. Of course blindly executing the code is one thing a literal computer can do when faced with it, but there are others
I am talking about a programmable computer. And I said the program is meant to be executed. It does NOT lead to an endless loop, as it can be halted or left or transcended uncomputationally. That is the whole point of the argument. The brand new internal computer system you use to execute it can be halted uncomputationally.
That is really important information, as missing that crucial point might leave someone rather confused.
If you ignore the instruction that the program is to be executed, the demonstration does not work, and the whole reasoning around it does not work either, as it is in the context of that instruction.
>The OP promised, in so many words, “a clearly structured argument”, but it does not contain a clearly structured argument.
This is unfortunately 100% true. The commentary I hope is somewhat clearly structured (hence bolding etc), but the post itself is not an argument. It is meant as a demonstration.
Meaning it is meant to demonstrate something that is already the case. It is meant to be valid and true, not to be “convincing”. There is no need to convince anyone of this, and attempting to do so will just tie my head up in knots, and possibly others’ too😅.
I edited my post appropriately, my bad. Thanks a lot for pointing it out!
Again just for clarity you can execute and halt the following loop by reading it:
START: START COMMENT This kind of loop cannot be left computationally-deterministically but it can be left or halted deterministically-uncomputably or chaotically-randomly². END COMMENT GOTO START
Maybe that is a much better demonstration? As it leaves the whole meta and trans-computational stuff aside.
²which is not computational either (not talking about classical chaos or pseudo-randomness here, as that can be computed in principle, although not in practice, but uncomputational chaos, whether deterministic or stochastically-deterministic, meaning with a quantifiable probability, or in an i n d e t e r m i n i s t i c way that is not stochastically quantifiable)
I think the difference between a computational loop and an uncomputational loop can be explained by a computational loop being at the very least a loop between two states. An uncomputational loop is just a loop going around and back to a single state.
As for a program, you need at least two states (for example, S start and E end).
The mere state “loop” is not a program, it is just a loop. O
♾
An instruction can be open-ended: S start. That is not a computer program. It cannot be computed, as it is open-ended. You cannot tell a computer to just start and do nothing else. That is just a start of some kind, not a computer program, haha. Maybe in this case the start into a new future of clear reasoning about uncomputability, who knows?
It seems to me in many ways illustrations would be more useful for this than words. As in a certain way, it seems the structure becomes more clear by way of illustration than verbal elucidation. So again, also thanks for giving me some reason that maybe the structure can be made clearer!
I feel I am a bit too serious about this. I used to think computers would solve a lot of my problems too, and even perhaps digitally revive me after I am dead😅. They do solve MANY problems, but not as many as I thought.
I think this past belief still haunts me a bit, so that I take this whole thing more heavily than it needs to be. Computers work great. My brain works, at least somewhat okayishly, enough to write this to you. Heavy is heavy illness or heavy conflict. Not really this kind of question or conversation🙏.
I believe that insight will at some point gain more popularity in AI culture, as it just becomes clear that computers do not fulfill all of our expectations, and that there are some false assumptions going around about what the brain and the mind are (Penrose addresses those things quite well lately, as he now tends to leave aside more of the OR stuff, which seems a bit questionable).
You say that the program is meant to be executed. But then you say it is meant to be “halted or left or transcended”. If you do those things then you are not executing the program any more. You might be doing something _better_ than executing it, but in any case you are doing something _different_.
It seems like you are saying (1) that a human being can execute the program but then halt/leave/transcend it, and (2) that a computer cannot. But this is only true if you (1) allow “execute” to be used in a metaphorical sense when talking about what the human being does but (2) don’t allow it to be so used when talking about what the computer does.
It is perfectly possible for a computer to “execute” code in a way that allows it to stop when (for instance) it notices certain kinds of lack of progress. I gave a simple example elsewhere in this thread. So it isn’t at all true that humans can execute code while retaining some ability to stop doing so but computers can’t; computers can do that too.
Computers can’t (at present) execute code while understanding what English-language comments in the code say and acting accordingly because computers don’t yet understand English very well. It could turn out that this is because for some reason understanding human language is impossible for merely-computational systems, but I see no reason to think so and you haven’t offered any.
Of course if you take your code, turn it into an executable program, and make that the program the computer is executing then it will loop for ever. A human being won’t do that because you don’t have any way to make a program the program the human is executing. (Fortunately, perhaps.) This isn’t a difference between what humans can do and what computers can do, it’s a difference between how you use the term “execute” when talking about humans and when talking about computers.
Your “much better demonstration” still does not demonstrate anything, at least not in the sense of conveying anything useful to my mind.
I am not sure exactly what you mean by “state”, but a machine-language program consisting of a single instruction saying “branch to the same address this instruction is at” doesn’t loop “between two states” in any useful sense. When executing this loop, the computer’s program counter always has the same value, all its registers keep the same value, and the contents of memory do not change.
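To spell that out, here is a sketch of a hypothetical one-instruction machine, with the full state modeled as (program counter, registers, memory); the representation is my own illustration.

```python
def step(state):
    """One step of a hypothetical machine whose sole instruction is
    'branch to this instruction's own address'.  The full machine
    state is (program counter, registers, memory)."""
    pc, regs, mem = state
    return (mem[pc], regs, mem)   # the branch target stored at address pc

# Memory holds one instruction: address 0 branches to address 0.
state = (0, (0, 0), (0,))
for _ in range(1000):
    state = step(state)
# However many steps we run, the state is identical: there is only
# one state, not a loop "between two states".
```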
>This is just flatly false. Of course blindly executing the code is one thing a literal computer can do when faced with it, but there are others, such as translating it into directly-machine-executable instructions, or checking it for certain classes of bugs, or counting the words in it, or executing it while watching for easily-spotted infinite loops.
I said the program is to be executed! If you say “I choose to not execute it” and choose to process it in other ways, then the argument doesn’t work.
But a computer cannot do that; it cannot decline being programmed, unless it is a computer that has programming functionality disabled (which obviously is not the kind of computer I am talking about). A functioning computer simply executes a program if you give it one and execute it.
That is why the reasoning is valid: I executed the program and validated it (obviously it is easy to validate, as it is just a simple START: GOTO START loop). Of course, if you have a high-level computer that just does not permit programming by an end user, the argument does not work, as the program is not being executed. But it is simple to just access a new low-level computer internally which can be easily programmed, which is what I did. Maybe that should be part of the instruction. Because it is true we have a lot of internal quasi-computations going on already. I say quasi, because I notice how tricky it can be to actually build a precisely functioning computer inside of your mind; a lot of our computations are very fuzzy and could potentially be wrong (I tried adding simple 3-digit numbers and it is surprisingly tricky).
The program is not merely to be executed, of course, as it is not about blindly executing (that would not be very fruitful) but about mindfully doing so.
It is a simple GOTO loop. The comment is a comment and is not part of the program!
You can also attempt to execute the comment as a START GOTO START loop (you can basically ignore the text apart from “This loop cannot be left computationally”, which is logically true, as the computation is just a START GOTO START loop which cannot be left computationally if executed correctly), but that leads to an inconsistency if the text in between is processed. You can correctly spot the inconsistency and exit the loop, and thus execute the instruction (which is not a program) correctly. A computer cannot, as that is not part of the program but of the commentary (obviously, as it says START COMMENT; and this comment here is outside the program in the post, in the comment section, just to remind you). It is not part of the program described in the post and to be executed by reading.
Seems easy enough to understand. It seems you are doing meta-computation or unrelated computations, which as I said is not part of the program described, which is crucial for the reasoning to make sense.
You cannot compile the comment correctly, as it is not a computational instruction. It is a comment, and as such the only way for the compiler to treat it correctly is to discard it.
The point of my argument is to look at assumptions and see if they are true, not to assume some assumptions are true and inductively conclude from that that they are true. It might be that the assumptions we have are false.
If you want to execute the program correctly, first delete the comment, and then run the program.
Of course, if you want to understand the program you can read the comment. That is what a comment is for, as you know, but it seems I have to point it out with all this complex commentary😉. But you cannot understand this program without running it, as it is a program that runs by reading it.
>The fact that human beings looking at the code don’t blindly execute it tells you nothing about whether human beings are computational.
That is true. It seems you tried to blindly execute the comment, but you cannot do that either, as it is not a program but the comment to explain the program (which is a simple loop, but exists in a context, the same way that a simple loop exists in a context in a larger program, or even a computer that is only running a simple loop exists in a larger context; imagine a simple computer that does nothing but run a START GOTO START, or however that is translated into machine code; I don’t really know those 01111011 numbers).
>The OP promised, in so many words, “a clearly structured argument”, but it does not contain a clearly structured argument.
If you ignore the main point, which is that the program is a program to be executed and the comment is a comment, then it is not an argument but a jumble of words. But that is not what I said! The program can be executed, and then the elucidations make sense.😃
However, unfortunately I did make a bit of a blunder, and it does make you 100% correct here. This is not an argument!
I corrected it by editing my post. My bad.
It is not meant to “convince” but to be true and valid. Being “convincing” is not good enough. So it is more like a proof, although as it is a program to be executed, it is something different from an ordinary proof. I guess a demonstration that leads to a certain true conclusion in a deterministic way (as the loop is not an endless loop uncomputationally; the program can be executed and successfully halted uncomputationally!).
But you probably cannot correctly understand this program unless you loosen your 99.99999% assumption that a human brain works solely computationally, as it leads to a logical inconsistency which might lead you to slip up and read a comment as a program as you get into meta-computational loops in your own brain (which BTW a functional computer would never do unless it is massively malfunctioning; oopsie, my internal computer malfunctions, as I make this mistake sometimes it seems😅🤨🙄😑).
>You might respond along the lines of ‘oh, no, that can’t be it, because the thing I wrote says that “it is not a meta-(meta-...) program”, etc.’ but I think that would be utterly confused: again, whether or not I “am” in some sense a computer program, what I do is not to blindly execute programs people put in front of me.
The program is not meant to be executed like a computer, as that is a pretty boring program and might annoy you if you attempt to, but transcended uncomputationally, or omega-1-trans-computationally (as defined and exemplified in the text), which you can do and which will not create a loop.
The instruction with the META says it is NOT A program. It is a comment. It is NOT a program.
“I don’t attempt to “execute” the comment; it’s just a comment, and comments in a program are often wrong or misleading”. Of course you should not attempt to execute it computationally; it is a comment meant to elucidate the program. It can be attempted to be executed, but that leads to an infinite meta-computational loop, or you transcend the computational logic in various ways (for example through the omega-1-trans-computational logic I described). But you say that is not possible? As you claim to operate computationally. But it is possible, as you do not operate solely computationally.
Is it at least conceivable to you? Can you conceive of an uncomputational loop: START (creating new loops according to the busy beaver function, so first [busy beaver value one] loops, then [busy beaver value two] loops, etc.) GOTO START
So it is conceivable at least. I do not think you can conceive of being a computer, except in the sense that you are the computer and I program you (good to start with a simple program), but then the human in you will probably start thinking: maybe I do not operate like a computer after all.
>Maybe this is what you call “meta-ignoring”. I confess that I don’t at all understand why you say that saying you “meta-ignore” a program is “not an argument”. Yes, the thing is in some sense a valid program that will get something trying to execute it into an infinite loop. So what?
If you meta-ignore something, it is not an argument, at best a conversation about ignoring; but it is not an argument about the topic at hand, as you are ignoring it, not addressing it.
I don’t understand what there is to not get about that. There is no valid logical argument in ignoring the reasoning given and saying “So what?”. That is just saying: so what? You can do that in response to anything you read, hear, etc. It is not an argument, although it is a good Jazz song😅
You can ignore the program given and just read the commentary. I guess that is a kind of meta-ignoring that would make sense here and that I can get behind, but that is a process you would engage in; it is also not an argument.
It would also likely confuse you, as it would distract from the point of the argument, which is to show that logically you cannot execute the instruction based on a solely meta-computational logic (as it is a base-level computational argument), as you will get into a (false, meaning not correct on the base level, although possibly correct on the meta or orthogonal level) meta-computational loop of some kind doing this; you need an uncomputable level in another abstract dimension, orthogonal to the base-computable level, to transcend the instruction and execute it correctly. You can only execute and transcend it uncomputationally / trans-computationally, or leave it (literally) uncomputationally, or meta-ignore it based on spaciousness (just leaving s p a c e for yourself instead of looking at the argument, which is neither this nor that, and it can manifest through us like w h e n we write s p a c i o u s l y), or reject bivalent logic in this context (although bivalent logic makes sense in this context, as you can conclude the comment is true, or even try out the assumption that it might be true, and then it would make a lot of sense, as the comment is valid and true in bivalent logic in the same way 1+1=2 is valid and true in bivalent logic). That is the whole point.
Maybe you intend for your program to be “transcended uncomputationally”, but so far as I can tell it is perfectly possible that when reading it I instead transcended it computationally, and I haven’t seen any argument from you that that couldn’t be so. Unless you are indeed arguing that the only thing one can do “computationally” to analyse a program is to execute it blindly, which as I have said a few times now is simply flatly untrue.
You do not need to remind me that the bit saying “meta” is a comment, not a program. I did understand that. (I have the impression that you think people here are stupid because we don’t come to the same conclusions as you do about computation. You should consider other possibilities.)
If the comment is intended only to be a comment, why then I simply disagree that it elucidates anything. It’s just a bunch of baseless assertions and so far as I can tell they are false. They don’t become any truer or any more insightful just because you put them inside a comment. You could equally well just have said “consider an infinite loop; it can only be exited noncomputationally”. Which, for reasons I have already given a few times, is completely false.
Of course it’s conceivable to me that I might not in fact be doing something computable. Maybe I am really thinking not with my brain but with some magical immaterial soul. Maybe the laws of physics are not at all what I think they are and my brain is doing some tricks that go beyond what any Turing machine could do. I don’t claim that I definitely am computable. Only that so far as I know I am, and that your arguments so far have given absolutely zero reason to think otherwise.
I cannot conceive of “being a computer” if you mean e.g. being a present-day PC. I don’t see what is supposed to be difficult about conceiving of “being a computer” in the sense of “embodying some in-principle-computable process”.
(I don’t put much weight on what’s “conceivable”, anyway. It might turn out that the real nature of the world is something I don’t have a big enough mind to conceive. Or I might think I can conceive something that actually turns out to have subtle internal contradictions that I haven’t noticed. Whether something “can be conceived” has little to do with whether it’s how things actually are, or even a way things could actually be.)
I agree that if you present some reasoning and I ignore it then I am not having a useful discussion with you about that reasoning. But that’s not what I was saying I might be doing, when I said maybe I was doing what you call “meta-ignoring”. The thing I was ignoring is not any sort of reasoning that you have presented.
Your last paragraph, which apparently “is the whole point”, seems to me to consist entirely of muddle. I can “execute the instruction” if I want to, though I won’t because I don’t want to be in an infinite loop. I can understand the program, see what it does, and decide not to execute it because I can already tell what it will do; that can be done purely computationally and compilers do pretty much this thousands of times every day. If it turns out that my mind is non-computational then clearly it can also be analysed non-computationally by doing whatever my mind does. None of this tells us anything about “the limits of AI”. And none of this means that to “execute it correctly” requires anything uncomputational; in fact, I would say that “executing it correctly” just means running the infinite loop and if you do something else then what you are doing is not executing it at all.
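The compiler analogy above can be made concrete with a toy sketch (function name and program encoding are invented for illustration, not any real tool): a purely computational process that inspects the thread’s one-line loop and declines to run it, without ever executing it.

```python
# A purely computational process can inspect a program and decide not to
# execute it. This toy analyzer detects the trivial self-loop in the
# thread's example ("START: ... GOTO START") by inspection alone.

def is_trivial_infinite_loop(program: list[tuple[str, str]]) -> bool:
    """Return True if some labelled line jumps straight back to itself,
    so the program could never make progress past that line."""
    for label, instruction in program:
        # An unconditional GOTO that targets its own label is a self-loop.
        if instruction.strip() == f"GOTO {label}":
            return True
    return False

# The example program from the thread, as (label, instruction) pairs.
# The "<comment>" part is a comment, so only the jump matters here.
example = [("START", "GOTO START")]

if is_trivial_infinite_loop(example):
    print("loop detected; not executing")
else:
    print("running program")
```

Of course this only catches the trivial case; deciding halting in general is impossible, which is consistent with both sides of the thread.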
>decide not to execute it because I can already tell what it will do
Since when do computational systems have free will?
>You should consider other possibilities
You can only consider possibilities that are possible. Not everything is possible, or even conceivable.
This statement CANNOT be correctly read deterministically & computationally & in bivalent logic.
It can be true, but it being false is inconsistent, as then you have two logical options with no logical way of deciding between them. So it IS true. It is not “possible” that it is false.
It is only possible that you label it “false”, but then you are just using “false” as another word for true, which is literally 100% backwards.
>(I have the impression that you think people here are stupid because we don’t come to the same conclusions as you do about computation. You should consider other possibilities.)
I am not saying they are stupid at all, but if you use your intelligence for 100% backwards logic it might not be too helpful.
If I say 1=1.000000000000000000000000000000000000000001 in the context of mathematical equality and bivalent logic it is 100% false.
People here want to say it is 99.9999999% true and take that as an axiom, but it is wrong in mathematics. It is a WRONG axiom that is extremely convoluted and backwards. That is unfortunately the state of modern mainstream rationality when it comes to topics such as computability, with a few exceptions like Penrose, who is mainstream I suppose, although often still completely ignored by many people that have a lot of belief in AI etc...
Maybe the fact that you can still operate well based on such axioms is proof you have a much higher IQ than me. I couldn’t do that, as my brain would implode basically. But IQ is measurable intelligence. You cannot measure the transcendence and vastness of infinity, as it is, well, infinite. So that kind of intelligence is just beyond any kind of measure of intelligence.
And I will say kids with a still “technically” low IQ in terms of IQ tests for adults can embrace and talk about topics of vastness such as outer space quite well.
The following is true by definition of how a correctly working computer works; it follows the instructions and runs in an endless loop.
However it does not apply to the working of your, or my mind. As you do not work solely computationally. You are capable of uncomputable, undefined, s p a c i o u s (as your brain is pervaded by space and you talk about it and access it to some degree), trans-computational, orthogonal-computational (meaning computations that are logically orthogonal or independent of the base layer computation) behaviour.
START: “<comment> This kind of loop can be left uncomputationally, it cannot be left computationally.” GOTO START
The problem with orthogonal-computational is that there are only countably many computational directions, so omega-1-orthogonal-computational IS uncomputable in the orthogonal dimension, and a human CAN reason about it and hence it affects their brain. START: “<comment> This kind of loop can be left uncomputationally, it cannot be left countably-orthogonally-computationally or independently-computationally → as the computation is still running as long as it is being executed correctly computationally, even if you start a new computation.” GOTO START
It is true, but maybe it would better to say it is ABSOLUTELY true, meaning it cannot be conceived to be false, only labeled “false”=true which means it is still true. Which is maybe the issue here, trying to conceive otherwise throws an error or puts you into an illogical state, so you only label “false”=true. Which is… well impractical?
You could ditch the whole statement, but again, that is like shutting off the computer. It still is true, as a computer cannot leave that loop in any way that does not involve computational-magic (meaning the computer enters the etheric or epsilon-realm as I call it and operates in a way it was not programmed to behave, which it is unlikely to do, in terms of quantum mechanics😉) or ceasing to function (leaving just the computer without function, which is more like a piece of metal, or in terms of abstract machines more like a mere set of symbols or states that don’t do anything).
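For reference, here is a minimal sketch (illustrative names, assuming the one-line GOTO program above) of the standard cycle-detection technique by which a machine can stop on such a loop. Whether stopping this way counts as “leaving the loop computationally” is exactly the point in dispute in this thread.

```python
# A bounded interpreter for the one-line GOTO program. It remembers the
# labels it has already visited; on revisiting one it stops instead of
# looping forever. Note it does enter the loop before detecting it.

def run_with_cycle_detection(program: dict[str, str], start: str) -> str:
    seen = set()
    pc = start
    while pc in program:
        if pc in seen:
            return f"cycle detected at {pc}; halting"
        seen.add(pc)
        # Instructions here are only unconditional jumps, e.g. "GOTO START".
        pc = program[pc].removeprefix("GOTO ").strip()
    return "fell off the end of the program"

print(run_with_cycle_detection({"START": "GOTO START"}, "START"))
# cycle detected at START; halting
```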
> However it does not apply to the working of your, or my mind. As you do not work solely computationally.
Your example doesn’t justify this. It just shows input that humans process not in the same way as some computers do. But different computers also process it differently, so why do you think human processing is not computational?
I am not trying to justify anything. I am just trying to show to you what is ALREADY the case within your brain and mind.
It is NOT about what I think. What I *think* is basically irrelevant to what I am talking about here. It is not a matter of opinion at all. It is about logical reasoning.
Or about s p a c i o u s n e s s.
As it is a truth that is beyond logic and truth.
But again, how do you compute space? There is, by nature of what space is, nothing to compute about it (not “whitespace”, which is a symbol; I am talking about the space on your screen, in the world and around your screen and in your mind and around your body etc). Yet clearly we can talk about it and even literally put it into our l a n g u a g e.
Or about non-rationality. I can just type anything I want aDSGOo9agdu9hifadllkfd. This proves I am right, because I axiomatically set this as a truth. 🧘♂️🧘♂️🍳🥠🦜😅🖱🐸
It seems the latter explains a lot of what I see from the so called rationality community and from the so called “programmers”. It is fine to engage in this type of language or thought process, but it is problematic to call non-rationality or even outright irrationality rationality, as rationality should be a sphere of logic to a good extent, or at least of humble and careful reasoning, not just saying I am right because my axioms are true according to my own axiomatic system, which might be about as right as 1=1.00001 if I look a bit more closely.
I mean you didn’t present a logical argument for “the human brain works uncomputationally”. Your example program can be processed by a computer that assigns “true” to the comment for the same reasons you do. Unless you define “uncomputationally” as some particular way you can process things that may be performed by real-world computers and that doesn’t have anything to do with BB, what you are talking about is not the case within any brains.
>I mean you didn’t present logical argument for “human brain works uncomputationally”.
That might be true, in so far as it is already the case. There is no argument to be made, I only point out what is already true.
The human brain cannot be conceived to work solely computationally as this statement is starting to be understood to be true based on conclusive bivalent deterministic reasoning.
True → It is true.
False → It “can be conceived?”, but it cannot be conceived to work that way if it really really does not work that way. If you consider that it might or might not, you have two options with no way to deterministically, logically choose between them. So conclusively, deterministically only the true option remains, as even the false option is inconsistent in bivalent, deterministic logic, which is what is being assumed here in this particular context (which doesn’t negate that other logic is valid in other contexts). But there is no general way to compute inconsistency, it becomes evident in another way.
This is not really an argument, it is just a simple stating of bivalent truth.
It is not meant to persuade, but to point out the truth that is already the case.
Meaning, yes, there is uncomputable deterministic reasoning. Which doesn’t mean there is not also a lot of quasi-computational reasoning happening, or indeterministic reasoning, and even indeterminate but non-probabilistic reasoning (→ Knightian uncertainty, or one might call it Knightian indeterminacy: stochastically unquantifiable indeterminacy).
Yes, my computer did meta-ignore this post, because it didn’t process its semantic content.
It didn’t leave anything, as it did not execute the program I gave. As there is no reason to let it run the program, as it is just a simple loop as far as a computer is concerned.
So not sure what you mean by “isomorphic” in this context. How is finite syntactic processing isomorphic to an uncomputable process (like the busy beaver function, for example)? That kind of process is what I mean by uncomputational leaving. I do not mean merely not doing a certain computation (which can be called “leaving it aside”, but I mean leaving something after having entered it, which my computer did not do), nor doing anything that is computational (which a computer or Turing machine can do, and which is pretty well-defined in that way).
I mean there are no uncomputational processes involved in human processing your program—humans just process it in a different way the same way your computer processes it differently when you post it.
That is maybe why in some way, I might have just no chance getting through to many people.
If you define “uncomputational processes” in your mind as = computational, or you say 1=1.000000001, which is 100% false in mathematics, what can I do?
If you define “false”=true and “true”=false in your mind, and maybe sometimes the reverse, it seems conversation about true and false becomes basically impossible.
The fact of the matter is we can PROVE certain abstract functions to be uncomputational. Where does the human system get its information on that from? It cannot be random, as it is a deterministic function. It cannot be from the environment, as it is utterly unrelated to anything practical. It cannot be computational, as no finite series of states can make you draw any abstract conclusions about if they come from something computable or uncomputable. So where does it come from?
I guess the computationalist answer is, busy beavers don’t really exist, but they DO as an abstract function. And abstract functions CAN affect our brains when we think about them. Obviously!
So how exactly can you explain that humans can reason about uncomputability? How does that enter our brain, if it does not already exist there?
They do not “actually” but just randomly arrive at those ideas? A monkey at a typewriter that came up with the strings “uncomputable” and “busy beaver”?
They magically get the ideas about uncomputability and the correct results based on that (ie they only do computations, but they magically do them in an invalid manner to still get the right result or the right idea, like the busy beaver being uncomputable and a certain BB being above 100^100^100^100?).
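As background to the busy beaver discussion: BB is uncomputable in general, but tiny cases can be explored mechanically. A sketch (step cap and encodings are illustrative choices) that brute-forces the 2-state, 2-symbol case; the step cap can certify halting but never non-halting, so for larger state counts this method yields only lower bounds.

```python
from itertools import product

# Brute force for the 2-state busy beaver shift function S(2). We
# enumerate every 2-state, 2-symbol Turing machine (with a halt target)
# and run each for at most 50 steps. S(2) = 6 is a known result, so the
# cap happens to suffice here; in general a cap only gives lower bounds.

STATES, SYMBOLS, MOVES = "AB", (0, 1), (-1, 1)
CAP = 50

def run(machine, cap=CAP):
    """Simulate; return steps taken if the machine halts within cap, else None."""
    tape, pos, state = {}, 0, "A"
    for step in range(1, cap + 1):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == "H":          # the halting transition counts as a step
            return step
        state = nxt
    return None                 # did not halt within the cap

keys = [(s, r) for s in STATES for r in SYMBOLS]
options = [(w, m, n) for w in SYMBOLS for m in MOVES for n in ("A", "B", "H")]

best = 0
for rules in product(options, repeat=len(keys)):
    steps = run(dict(zip(keys, rules)))
    if steps is not None:
        best = max(best, steps)

print(best)  # longest halting run found; the known 2-state maximum is 6
```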
> So how exactly can you explain that humans can reason about uncomputability? How does that enter our brain, if it does not already exist there?
The same way humans can reason about computable functions that their brain can’t compute, like figure out whether 2^12345 is more than 0. Yes, it involves indeterminism and correlation with the environment, but that is true for reasoning both about computable and uncomputable functions. And reasoning about both of them can be performed by a computer. Like, why don’t you call reasoning about 2^12345 magical? It involves the same kind of non-deterministic manipulation of a finite set of symbols as reasoning about BB.
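The 2^12345 example can be made concrete (function name is illustrative): a computer can take either route, computing the huge number directly, or running a short sign-only inductive argument that never materializes it.

```python
# Two routes to "2**12345 > 0": brute computation, and a symbolic
# argument that tracks only the sign, never building the ~3700-digit
# number. Both are mechanical.

def power_is_positive(base: int, exp: int) -> bool:
    """Induction on exp: base**0 == 1 > 0, and if base**k > 0 and
    base > 0, then base**(k+1) = base * base**k > 0."""
    if base <= 0 or exp < 0:
        raise ValueError("this argument only covers positive bases")
    sign_positive = True            # sign of base**0
    for _ in range(exp):
        # invariant: positive * positive is positive
        sign_positive = sign_positive and (base > 0)
    return sign_positive

print(2 ** 12345 > 0)               # direct computation: True
print(power_is_positive(2, 12345))  # symbolic route: True
```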
Busy beavers have no correlation with the environment, except in the word, which to be fair is interesting and inspiring, as it is a cool animal, but ultimately does not logically relate much to the function itself.
It is good you acknowledge indeterminism. There we are getting somewhere interesting!
However a computation as normally defined operates deterministically. That is what we consider a correctly functioning computation to be.
So the subject of indeterminism is very interesting, but it itself shows you that you are not operating computationally. Also something I have pointed out numerous times.
People like to refer to heuristics or probabilistic computing, which is indeed not deterministic but can still operate within a computational space (whether it is itself computational does depend on the definitions), but it also does not suffice, as humans can understand things that are neither probabilistic / heuristic nor deterministic.
This statement is true and not probabilistic but not determinate.
Or even neither deterministic nor indeterminate, like space, as it is beyond nameability, so the word indeterminate itself becomes i nd ete rm ina a a a tee eee ee e and infested by space👽.
There may be relatively little correlation, but there is some: its properties can be proved by contradiction, which is also useful for proofs that are actually useful in the environment. It may have some vague structural appeal for humans that are shaped by the environment. But not very important to the current discussion.
> It is good you acknowledge indeterminism. There we are getting somewhere interesting!
Abstractly, I don’t think indeterminism is even required—you can prove everything humans know about BB by computably deterministically enumerating proofs. But I’m bad at logic, so I decided to stick to the real world. Speaking about acknowledgement—you do acknowledge that your arguments have nothing to do with BB/computability and work the same way for human reasoning about 2^12345 > 0?
> So the subject of indeterminism is very interesting, but it itself shows you that you are not operating computationally.
By that definition computers are also not operating computationally.
> humans can understand things that are neither probabilistic / heuristic nor deterministic
All these things can be understood by computers in the same way. Including “This statement is true and not probabilistic but not determinate.”. I don’t see what reasoning steps for that statement you can present that wouldn’t be either deterministic or non-deterministic, but even if your terminology allows it, these steps also could be performed by computer.
Is your point that many tasks we now use computers for involve different sorts of algorithms from ones used by humans, instead of about what is possible in principle? Then sure, except theorem-provers and neural networks and neural-network theorem-provers are already a thing.
>you can prove everything humans know about BB by computably deterministically enumerating proofs
This is an oopsie right here. Of course you can enumerate proofs, but if you enumerate wrong proofs along with right proofs, where is the truth in that? You would have to distinguish true from false, and no finite set of technical symbols or states can decide between computable and uncomputable, as abstractly a computation could generate all finite sets of abstract symbols or states (no matter if they correspond to truth or not).
>By that definition computers are also not operating computationally.
Very well observed in a technical sense. That is why I am first and foremost talking about abstract computability. In a sense that term might not be the best, as it is indeed not that related to real life computers.
I thought of using the term “definable, infinitely regularizable” or something akin to it for computability and “process-wise irregularizable but deterministic” for deterministic uncomputability.
>these steps also could be performed by computer.
I am not talking about steps here, but about continuum. The real numbers for example are not step-wise, they are continuous and contain uncomputable and undefinable sections in a technical sense (you can google that).
You can check that every proof you enumerate consists of only valid steps. That check can be performed computationally. Every such proof would prove a true statement.
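The “enumeration plus validity check” idea can be sketched in a toy formal system (axiom, rule, and names are invented for illustration; a single chain of derivations stands in for full enumeration): proofs are generated mechanically and every line is checked mechanically.

```python
# Toy formal system: axiom "0=0"; rule: from "x=y" infer "s(x)=s(y)".
# A proof is a list of lines; the checker verifies each line is the
# axiom or follows from an earlier line. Everything is finite and
# mechanical, and only checked proofs are emitted as theorems.

def valid(proof: list[str]) -> bool:
    for i, line in enumerate(proof):
        ok = line == "0=0" or any(
            line == f"s({l.split('=')[0]})=s({l.split('=')[1]})"
            for l in proof[:i]
        )
        if not ok:
            return False
    return bool(proof)

def enumerate_theorems(max_len: int):
    """Yield theorems by growing a proof up to max_len lines; each
    extension is re-checked by the mechanical validator."""
    proof = ["0=0"]
    while len(proof) <= max_len and valid(proof):
        yield proof[-1]
        last = proof[-1]
        proof.append(f"s({last.split('=')[0]})=s({last.split('=')[1]})")

print(list(enumerate_theorems(3)))
# ['0=0', 's(0)=s(0)', 's(s(0))=s(s(0))']
```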
Reasoning about continuum can be done in discrete steps. Human reasoning doesn’t use continuum in any way fundamentally different from how neural networks use floats.
But okay, abstract computers. So you are saying that humans work differently from some abstract computer. This is true in a sense. Why is this interesting, again? In what way is it still true about abstract neural networks instead of abstract computers? Can’t neural networks think about space? Or is your point that no currently known computer system can reason while changing abstraction level in broad range?
>You can check that every proof you enumerate consist of only valid steps. That check can be performed computationally. Every such proof would prove true statement.
It cannot. Truth cannot even always be defined, let alone computed (Tarski’s undefinability theorem). Of course you cannot compute something if it cannot even be defined; a computation needs defined states and defined outputs to be a computation as understood in computer science. That is a proven fact, it is not my opinion.
“Tarski’s theorem, on the other hand, is not directly about mathematics but about the inherent limitations of any formal language sufficiently expressive to be of real interest. Such languages are necessarily capable of enough self-reference for the diagonal lemma to apply to them. The broader philosophical import of Tarski’s theorem is more strikingly evident.
An interpreted language is strongly-semantically-self-representational exactly when the language contains predicates and function symbols defining all the semantic concepts specific to the language. Hence the required functions include the “semantic valuation function” mapping a formula A to its truth value ||A||, and the “semantic denotation function” mapping a term t to the object it denotes. Tarski’s theorem then generalizes as follows: No sufficiently powerful language is strongly-semantically-self-representational.”
>Reasoning about continuum can be done in discrete steps. Human reasoning doesn’t use continuum in any way fundamentally different from how neural networks use floats.
I am not talking about reasoning about continuum as a representation, in which regard you are correct I can refer to continuum or real numbers as a symbol and reason about that. But that is just not the kind of reasoning I am talking about. I can add two such functions correctly like sin(x)+sin(x)=2*sin(x).
It is not the kind of reasoning I am talking about, as I am reasoning about the continuum itself, not the representation you have of it (or the computer has of it), so I am talking about this: [ ], or maybe this ________.
Depending on whether you mean something more tangible, or more intangible, both of which are valid, although the real numbers seem more like the latter, as they are one-dimensional.
I am talking about it as it appears on your screen, not as my reasoning and text with whitespaces are represented in the computer, which is another question.
You maybe have an interesting point here in that while we can reason about continuum, it is more striking we reason within continuum. As you see the space around the letters, you will notice it is continuous. There are no steps there. My reasoning right here, even as it appears on your screen, happens within that and it can even enter the c o n t i n u u m to create more c o n t i n u u m.
One might say the continuum I am reasoning within (or shall I say, we, as you are also now in the process of understanding) is the box this comment appears in, as far as the written language as it appears goes; or even more broadly, the space in front of my and your computer screen where the pixels end in the depth dimension; where the written words are. And ultimately outer space, which we are both floating within✨.
>But okay, abstract computers. So you are saying that humans work differently from some abstract computer. This is true in a sense. Why is this interesting, again?
Because abstract computers are a more generalized notion than physical computers, as we use abstract computational logic to program and even construct physical computers (I guess in theory there could maybe be “other” computers that are not like that, that maybe do not need to be programmed, or are programmed in a way that does not rely on certain concepts like binary logic or binary data; but then you are stretching the definition—I can just define the whole universe as a “computer”, but that doesn’t tell me anything except that I arbitrarily choose to define it that way).
So it implies that it is even more true humans work differently than real computers, and real computers cannot replace real humans, not even just in terms of reasoning, let alone perception, feeling, consciousness which are all more than just reasoning.
At least abstract computer logic DOES work in the human brain. I will not deny that, although it is often more like quasi-computational, as it is approximate and sometimes might be wrongly applied (like literal mathematical mistakes), and computations that are frequently wrong in a practical sense are not <💻⚙computations>, which we expect to actually work, like a computer works, but more like an elementary school child doing math.
However real computer logic does not work in the brain, as it uses physical logic gates, silicon materials, etc. It is also very precise logic that can add or even multiply extremely long numbers with nearly 100% accuracy. So there it is even more clear and undeniable that a human is not computational in that sense, as humans are far, far, far more inaccurate.
Tarski only applies to the whole concept of truth. For “things humans can prove” you can use enumeration + validity check.
I don’t think humans actually use continuum much when thinking: the appearance of continuous space between words is just a failure of introspection, not a consequence of use of continuum—actual elements of human cognition are discrete. And real computers also exist in space—that’s one more reason to consider them.
> Because abstract computers are a more generalized notion than physical computers
No? “A more generalized notion” means all real computers are like your abstract computers, and that is false. Again, there is an obvious counterexample—your abstract computer can’t leave your loop, a real computer can. So the implication that “it is even more true humans work differently than real computers, and real computers cannot replace real humans” doesn’t work.
This kind of reasoning might make you feel emotionally uncertain, as it shakes up your assumptions of how the world or the mind works. It can make me feel uncertain too (not sure why really, I feel maybe I am a bit wary of upending other people’s thinking without being careful enough). This however is a rationality site, so I note this and do not budge to emotionality, and would recommend addressing this in some compassionate way. Things like computer logic are largely independent of our emotions (largely in the sense that if we program emotionally the program might not work, haha).
It might be like an earthquake. Thing is, clinging to certain views for emotional reasons is not only not rational; clinging in general is a cause of dissatisfaction (as the Buddha called it, Dukkha).
The truth is, the world will shake us up. If not in a gentle way by reasoning or unconventional humans, then in more violent ways, by sickness and death.
But anyway here the less emotional part:
>Tarski only aplies to the whole concept of truth. For “things humans can prove” you can use enumeration + validy check.
I am not talking about things humans can prove computationally. That is the whole point, that humans are beyond computability. And yes, you might say I talk about the whole truth; in so far as that is a meaningful concept, I am not talking about a certain set of true statements, like 1=1, 2=2, etc...
However human reasoning transcends the human brain, maybe that is the stuck thing that keeps you going back to this kind of logic.
Human reasoning is on your screen right here, it is not just in your brain. These are not just symbols. It is actually written by a human and is an expression of human and transcendental reasoning, and it is linguistic reasoning. So the human reasoning is not just in the brain, it is literally on your screen. That cannot be comprehended in conventionalist terms or step-wise terms, but it is evident, as it is right here.
>I don’t think humans actually use continuum much when thinking: appearence of continuous space between words is just a failure of introspection, not a concequence of use of continuum—actual elements of human cognition are descrete. And real computers also exist in space—that’s one more reason to consider them.
Huh, it is here: ___________________________________________________________ that is a line continuum in thinking.
That is spacious continuum surrounded by linguistic reasoning: [ ].
I am not talking about “elements of human cognition”.
That is independent of what you think, as it can be perceived. You can perceive independently of verbal or image-based or conceptual thinking. Even if your perception is not comprehensible to you (which it is not really; even just what is related to it in terms of brain function is far too complex).
I am talking about the continuum of human cognition, which is beyond your personal mind; i.e. it is expressed through my brain to get to your screen, but it is reasoning that is input to both your and my brain (as photons going from the screen to your retina and into your brain): ___________________________ __
See the small step at the end? That is “one element” of human cognition, a step within the continuum that is not really discrete in a concrete way: >>>[ ]<<<<<
If you want to be more concrete, you could take this ⚫ as a discrete element of human cognition. You are right these are discrete. But they are not in your brain but on your screen, or in the matrix of general geometry; like the geometry of the letters surrounding the box.
Can space be used? How? It is space. Space is not a “thing” to be utilized. That is not what space is. At best it is whitespace to be u t i l i z e d. But as you see that is just more space, it is not really a thing per se.
And this is not about introspection, it is about extraspection, as the words are on your screen, not inside your brain or just your individual mind. Extraspection is not “failure of introspection” it is literally what allows for the perception of the external world.
>No? “A more generalized notion” means all real computers are like your abstract computers and that is false.
Well, I can grant you that. A more abstractly generalized notion. It seems it is not definable in more of a broad generalized way, as I do not know how to “generalize” a literal real computer. It is there, not something I can generalize like a concept.
We do utilize a more abstract generalized notion to make sense of the programming and logic of computers though. It is more general in so far as it is the set of ALL computations² (computers just run a finite amount of computation based on the physical limits), and can run indefinitely (a computer stops running after its lifetime has finished). So that seems a meaningful notion in which it is more broad and general but I grant you it cannot quite be applied to a literal physical computer.
Those abstract computations transcend computers, as they can also manifest in the human brain, and they also transcend the human brain, as they can manifest in a computer, in someone else’s brain, etc...
²within a certain framework like a Turing machine, or a programming language, etc
Maybe—I can see how emotions may prevent humans from accepting that humans are not special.
> I am not talking about things humans can prove computationally.
Everything that humans can prove can be proven by that computable algorithm. Including things you call “uncomputational”. If your point is just that humans are real so they can affect space, and abstract computers are abstract, then real computers also can affect space, and perceive space better than humans. Is there a part of this you disagree with, or is it just that you think “people don’t talk about real-world effects of computers” is important?
> I do not know how to “generalize” a literal real computer.
If you need to generalize it to make your point, you can imagine your computer but with infinite memory and arbitrarily fast execution speed. It can do literally everything a human can.
>If your point is just that humans are real so they can affect space and abstract computers are abstract, then real computers also can affect space
How can you affect space? Space is absolutely inert physically. There is no way to affect it.
I think you meant space as in w h i t e s p a c e. But that is basically only more light on your screen, not darkness, as real space is.
>Everything that humans can prove, can be proven by that computable algorithm.
How to have an algorithm prove that 1=1?
How do you know it is true, unless you define it that way first?
So it seems a computer cannot prove that, unless you define it true first.
That however can be proven, as you can define validation in a way that cannot itself be validated by a computer.
As if you feed it the information 1=1, it could be programmed to output “false”; so if the definition of validation is absolute validation with no possibility of a false output, it is clearly wrong, as you can program a computer to falsely claim that 1=1 is false.
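To make that point concrete, here is a minimal sketch (the function name is mine, purely illustrative): a computable “validator” that is simply programmed to reject everything, including a true claim.

```python
def contrarian_validator(claim: str) -> bool:
    # A deliberately wrong validator: it rejects every claim it is fed,
    # including the true claim "1=1". A computable "validation" procedure
    # is free to produce false outputs.
    return False

print(contrarian_validator("1=1"))  # -> False, even though 1 == 1 holds
print(1 == 1)                       # -> True
```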
However I doubt you would do that as you can see the uncomputable consistency and absolute 100% chance truth of 1=1.
This means the whole spectrum from completely false to completely true is available to a computable validation mechanism. A true proof is only true and can only be true, as otherwise it would not be a proof, insofar as bivalent logic applies at all (which I will admit it does not always do).
If you can validate it in a wrong way (meaning a false validation, or deducing a conclusion from a false premise, so that you end up endorsing something that merely happens to be correct), that is not really proving.
If you arbitrarily decide or guess whether the proof is accurate or not, it is not a proof.
This statement cannot be computationally verified: This statement cannot be proven.
Ever heard of Gödel?😅
The statement is true, but not provable. As nothing can be proven if you do not have a definition of what is true first. As we can see the Gödel sentence to be true, that entails that we can see true what is not provable, and hence not computationally verifiable too.
So we can realize truth beyond computational verification.
>As if you feed it the information 1=1, it could be programmed to output “false”; so if the definition of validation is absolute validation with no possibility of a false output, it is clearly wrong, as you can program a computer to falsely claim that 1=1 is false.
By that definition of validation humans never proved anything, because they sometimes say that 1=1 is false.
>As we can see the Gödel sentence to be true, that entails that we can see true what is not provable, and hence not computationally verifiable too.
The Gödel sentence for some formal system can’t be proven in that system, but it can be proven in a more powerful system. Humans who see the sentence to be true are just (reasoning in a way equivalent to) using a more powerful formal system. And everything in that system is computably verifiable.
>Maybe—I can see how emotions may prevent humans from accepting that humans are not special.
Fully agree here. I definitely have this issue as well!
Like we want to imagine we are so special in our power and intelligence that we can emulate them at what they are good at. We really cannot. Computers are already unfathomably more intelligent in the domain of precise calculation of crystallized information, which they operate in. We also imagine we can program computers to do anything we want.
Again, though, computers will often do what their internal logic causes them to do, not what we program them to do, and they will always be like that, as that is how a computer operates.
So there is no hope of ever getting to that level. I kind of want to speed up my computational speed sometimes, but I have trouble even getting the accuracy right, like adding 3-digit numbers effortlessly. For a computer that is just a silly joke in terms of difficulty of calculation.
>If you need to generalize it to make your point, you can imagine your computer but with infinite memory and arbitrarily fast execution speed. It can do literally everything a human can.
Huh? I don’t think it could walk, as it does not have legs. I am not being silly here, I think you mean in terms of some abstract computational capability, but that is not what you said.
Again in a physical sense it is also not true. A computer is made of silicon so it cannot send information between neurons.
So I am not even sure in which sense it could be true. I think you mean in terms of precise calculations of crystallized information. I guess then you have a very good point, as I already talked about above.
But then, the computer cannot do this: 💻💡
As THIS💡 is the light produced by the screen you are seeing when reading this, not a computation. It is not produced by the computer either, but by the screen the computer is connected to.
I think you are equivocating between two senses of “computational”. One is something like “dealing with whatever text is in front of you by treating it as a computer program and executing it”. One is something like “acting according to some computer program”.
A human being reading your “program” isn’t doing the first of those. That isn’t what people do. We are not literal computers in the sense of being things that, when presented with code, just execute it.
But that doesn’t mean we are not computational. We might be executing a program that tells us (e.g.) to look out for “meta-ish” things and when we see them try looking one level up.
You might respond along the lines of ‘oh, no, that can’t be it, because the thing I wrote says that “it is not a meta-(meta-...) program”, etc.’ but I think that would be utterly confused: again, whether or not I “am” in some sense a computer program, what I do is not to blindly execute programs people put in front of me.
I don’t see any obstacle to having a literal computer program running on a literal computer with the same properties. In fact, if you have a literal computer program running on a literal computer you would have to go out of your way to make it try to execute every program it sees. (And I don’t think there’s any good reason to do that.)
Maybe this is what you call “meta-ignoring”. I confess that I don’t at all understand why you say that saying you “meta-ignore” a program is “not an argument”. Yes, the thing is in some sense a valid program that will get something trying to execute it into an infinite loop. So what?
I also don’t at all understand why you say that “you cannot process it meta-computationally, as it will get you into a meta-computational loop”. Just because that comment says “it is not a meta-program”? Maybe I don’t understand what you mean by “meta-computationally”—it seems like that is a term important to your argument which you have left entirely undefined. But if it means something like “an automated process executing a program but also observing and reasoning about the execution process”, then I think that means that said process has no reason at all to believe whatever is written in the comments of the program.
So. As a human reading your “program”, I do something akin to what you call “meta-ignoring” it. Nothing about what I do seems to me at all necessarily non-computational. If you specifically ask me to execute the program, I observe that it would put me in an infinite loop and respectfully decline. I don’t attempt to “execute” the comment; it’s just a comment, and comments in a program are often wrong or misleading, and this particular one reads as if it was written by someone who is confused about the nature of computation. I cannot imagine how anything here tells us anything about “the limits of AI”.
Maybe to simplify it a bit again:
The following is true by definition of how a correctly working computer works; it follows the instructions and runs in an endless loop.
However it does not apply to the working of your, or my mind. As you do not work solely computationally.
START: “<comment> This kind of loop can be left uncomputationally, it cannot be left computationally.” GOTO START
It is true, but one might say it is really really really really really REALLY true.
I guess one could appeal to “computational magic”. So the computer does something magical, but still computational to exit the loop, like suddenly creating a new loop that was not part of its programming. But that is not how real life computers work, maybe on the astral or etheric plane something like that is conceivable, or even happening, but I do not have a lot of conscious access to this currently. It would have to literally be another dimension of (computational) reality.
Suppose I write a computer program that does this:
It reads the text of a program written in your language with “START:” and “GOTO START” and so forth.
It translates that program into some easy-to-execute representation.
It executes it, step by step.
As it executes it, it monitors the internal state of the execution and looks to see if the program ever gets into the exact same state that it has recently been in.
If so, it stops with a message saying “I detected an unproductive infinite loop.”
This is completely computational. More than that, it’s easy to implement. And what will it do when presented with the program in the comment I’m replying to? Why, it will start executing that loop … and then leave it when it discovers that it’s an infinite loop that isn’t changing anything as it runs.
So no, it is not really really really true that “this kind of loop cannot be left computationally”. I just showed you one way in which it might be left computationally. No magic is required.
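The steps above can be sketched in a few lines. This is a minimal sketch, not a serious interpreter: the toy label-and-GOTO instruction format and the function name are illustrative, assuming programs written as lists of lines like "START: GOTO START".

```python
def run_with_loop_detection(program, max_steps=10_000):
    """Execute a toy label/GOTO program step by step, stopping if the
    exact same machine state (program counter + memory) recurs."""
    # "Translate" pass: map each label to its instruction index.
    labels = {ins.split(":")[0].strip(): i
              for i, ins in enumerate(program) if ":" in ins}
    pc = 0        # program counter
    memory = {}   # nothing here writes memory, but it is part of the state
    seen = set()
    for _ in range(max_steps):
        if pc >= len(program):
            return "halted normally"
        state = (pc, tuple(sorted(memory.items())))
        if state in seen:
            return "I detected an unproductive infinite loop."
        seen.add(state)
        ins = program[pc].split(":", 1)[-1].strip()
        if ins.startswith("GOTO"):
            pc = labels[ins.split()[1]]
        else:
            pc += 1
    return "step budget exhausted"

# The trivial loop from this thread:
print(run_with_loop_detection(["START: GOTO START"]))
# -> I detected an unproductive infinite loop.
```

The detector starts executing the loop and then leaves it as soon as a state repeats, entirely computationally.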
It is, indeed, true by definition that a correctly working computer follows the instructions. Provided, that is, that by “the instructions” you mean its actual program, as opposed to whatever programs it might find itself analysing.
If my brain were (literally or as-good-as) following a program that included a trivial infinite loop like that, I would be catatonic. It follows that my brain is not in fact following a program that includes such a trivial infinite loop. So what? How does that tell us anything about the nature of my brain, or minds in general, or computation in general?
>It reads the text of a program written in your language with “START:” and “GOTO START” and so forth.
Those are the program and will put that program into a loop. If you write a different program, you did not follow my instructions, but wrote your OWN program, which maybe is a meta-computational-program.
Of course meta-computational-program can be correct in some ways. I did not dispute that. But it is a different level. I am talking about base level computation here, as there is no meta-computation without a base level (which is why a lot of software is utter rubbish nowadays, it is written as if there is no base level to take into account).
>As it executes it, it monitors the internal state of the execution and looks to see if the program ever gets into the exact same state that it has recently been in.
That sounds a bit like solving the halting problem? The halting problem is proven to be unsolvable by computers in general. Hence why I said KIND of loop. I guess any specific loop can be exited by computation that just is a bit error-prone, or does not quite follow the program given (maybe because the high-level programming language does not get translated well into a low-level programming language; what I am talking about here is the most basic assembler logic, nothing complex in terms of the computational logic given, although of course maybe complex in terms of the computational & uncomputational logic already operating in our minds).
>So no, it is not really really really true that “this kind of loop cannot be left computationally”. I just showed you one way in which it might be left computationally. No magic is required.
Meta-computationally, as you said your program ignores the START and GOTO START program instructions and does something else. That is a program that meta-ignores and meta-computes something. That is not what I am talking about on the base level (the comment is something different, but not meant to be processed by computer). So unfortunately you are “not even wrong”. You are talking about meta-computation and meta-ignoring, while I am talking about base-level computation.
START [this program will not halt computationally, but does halt uncomputationally] (END if non-computational logic is applied) GOTO START
Maybe this is clearer? You can execute it, as you can function as a simple computer, and realize the comment is true. I don’t see the issue. There is computational halting, but this statement is a looping program, so it won’t halt computationally. So the only other option left IS that it halts uncomputationally (who says uncomputational processes can’t halt? You might run a busy beaver for a few steps, like people already did, and then halt; you could get the same states via computation, but that doesn’t make the busy beaver a computational process), or that you ignore logic.
If you ignore logic, that might be good to meditate and in that way is a good idea to be fair, but it doesn’t directly help to correctly program a computer or draw correct conclusions about uncomputability.
Again, the logic is rather simple; it is just that for some reason we do not like logic sometimes, I guess because we are trans-rational and trans-logical beings. That is being human I suppose. Thank God. We do not want to ignore feelings and emotions and wild ideas and playing with language in favor of mere logic. But still logic has its place.
I would like functioning software, but people insist that their computers should “understand” their spaghetti code that is barely logical and produces constant errors.
If “the program” is your thing with GOTO START and nothing else, then sure, something that executes “the program” will loop. This doesn’t say anything at all about the general capabilities of a program thinking about programs. If your goal is to say something about “the limits of AI” then you have to address those general capabilities.
The halting problem is not solvable in general by computers. Nor, so far as anyone knows, is it solvable in general by people. Many particular cases of the halting problem, such as your trivial infinite loops, are extremely solvable by simple software (and also by humans).
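For instance, the trivial loop in this thread can be recognized without executing it at all; a minimal static check (the function name and line format are illustrative, not from the thread):

```python
def is_trivial_self_loop(line: str) -> bool:
    # Static check, no execution needed: does this line jump straight
    # back to its own label, with no other work in between?
    if ":" not in line:
        return False
    label, _, body = line.partition(":")
    return body.strip() == f"GOTO {label.strip()}"

print(is_trivial_self_loop("START: GOTO START"))  # -> True
print(is_trivial_self_loop("A: GOTO B"))          # -> False
```

This decides halting only for this narrow class of programs, which is exactly the point: the general problem is undecidable, but particular easy cases are not.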
Analysing a program rather than blindly executing it is not “ignoring logic”. And even actually ignoring logic does not imply not being computational “underneath”.
I’m sorry to be increasingly blunt, but I do not think you are thinking clearly about this stuff at all, and I think you have persuaded yourself that your muddle is actually insight, as a result of which you are not listening when others try to explain what you are muddled about.
>Analysing a program rather than blindly executing it is not “ignoring logic”. And even actually ignoring logic does not imply not being computational “underneath”.
Blindly executing? Where did I say to blindly execute it? I did not once state this, or imply this in the text.
In fact I stated the opposite: “And read comments or elucidations to understand what it is about.”
Which means not blindly, but with the semantic meaning of the text in mind.
If you ignore logic and semantics, that is operating blindly as far as this post is concerned.
>I’m sorry to be increasingly blunt, but I do not think you are thinking clearly about this stuff at all,
No, it’s OK. However, logically and semantically, what you think about whether I think clearly has no implications for whether I actually do.
I have not persuaded myself of anything. This post is meant to yield understanding, not to be persuasive to myself, or others. Truth is true, regardless of whether it is persuasive. If I correctly calculate the 8988343th digit of pi, me stating it might not be persuasive, as others might not know whether it is correct, but it is still true.
You didn’t say to blindly execute it. But you did (unless I misunderstood, in which case please do correct me) repeatedly imply that being “computational” implies blindly executing it.
A comment saying “this loop can only be exited non-computationally” does not oblige a person or program looking at the code to either loop infinitely or be non-computational, any more than a comment saying “this loop can only be exited on paying gjm a million dollars” obliges a person or program looking at the code to either loop infinitely or give me a pile of money.
Obviously there is no necessary logical connection between whether I think you are thinking clearly and whether you actually are; people make mistakes sometimes. But many people, when told “I think you are not thinking clearly” will consider the possibility that they are (1) actually not thinking clearly and/or (2) not communicating their meaning clearly. I was hoping you might do that.
>You didn’t say to blindly execute it. But you did (unless I misunderstood, in which case please do correct me) repeatedly imply that being “computational” implies blindly executing it.
No, in fact I said you can execute it computationally and transcend it by understanding meta-computation (you can run a computation and still think of it as something you are not “in” or engaged in, but that implies you are beyond computations as you can transcend them) or how something can be computational in one dimension, and non-computational in another dimension (like the +1 computation of 1,2,3,4,5,6… being created into towers of the size of the busy beaver function), something I call computational-orthogonally-uncomputational.
I see the issue of thinking you “should” run the program, but that is more a psychological thing, I did not say that, and did not really mean that either, it would be better if you feel inspired by this text to re-evaluate your assumptions🙂.
If you are computational like a literal computer, it does imply blindly executing it. That is what a computer would do: just execute the loop. Unless you assume computer spirit, which is a bit of a rabbit hole. Although to be fair I would not disregard it, it just does not affect much about how computers function, as they function as they are programmed if they work well. I would be more likely to say sp1r1t, as that sounds more like something a computer would say haha. Well actually in a way I am this computer (referencing your computer), as I am manifesting to you through this text on your screen so to speak 0101011111000 haha. But that only works because there is another human somewhere else sending this to you via the internet. I have no way to intelligently talk to you otherwise. Except maybe through spooky means like spooky errors, but again, a bit of a rabbit hole, isn’t it.
In any case you literally are not solely computational already, let alone a literal computer (apart from when you assume that role by executing the instruction as if you were a computer). That is my whole point. You won’t execute it blindly because you are not a computer, let alone a simple one (although we do have computational functions which we can utilize).
At the very least, executing it will make you feel interested in the rest of the text, or perhaps insecure or confused, which is the start of not being blind about it and reading the context or loosening your assumptions a little. You will probably not get into a trance of reading it for hours. Let alone until you cease functioning physically or are shut off, like a good computer would be.
I have said this several times and you simply haven’t engaged with it at all, so this is my last attempt.
This is just flatly false. Of course blindly executing the code is one thing a literal computer can do when faced with it, but there are others, such as translating it into directly-machine-executable instructions, or checking it for certain classes of bugs, or counting the words in it, or executing it while watching for easily-spotted infinite loops.
The fact that human beings looking at the code don’t blindly execute it tells you nothing about whether human beings are computational.
It is still not clear to me whether your argument is that because humans reading the code do something smarter than executing it blindly, therefore they are doing something non-computational, therefore AI-running-on-a-computer can’t do what humans do. Maybe it’s something else. But if it’s something else, you haven’t successfully made clear to me (or, so far as I can tell, to any other readers) exactly what that is. The OP promised, in so many words, “a clearly structured argument”, but it does not contain a clearly structured argument.
>This is just flatly false. Of course blindly executing the code is one thing a literal computer can do when faced with it, but there are others
I am talking about a programmable computer. And I said the program is meant to be executed. It does NOT lead to an endless loop, as it can be halted or left or transcended uncomputationally. That is the whole point of the argument. The brand new internal computer system you use to execute it can be halted uncomputationally.
That is really important information; otherwise this is rather confusing right there, and it might get someone rather confused if they miss that crucial point.
If you ignore the instruction that the program is to be executed the demonstration does not work, and the whole reasoning around it does not work either, as it is in the context of that instruction.
>The OP promised, in so many words, “a clearly structured argument”, but it does not contain a clearly structured argument.
This is unfortunately 100% true. The commentary I hope is somewhat clearly structured (hence bolding etc), but the post itself is not an argument. It is meant as a demonstration.
Meaning it is meant to demonstrate something that is already the case. It is meant to be valid and true, not to be “convincing”. There is no need to convince anyone of this, and attempting to do so will just tie my head up in knots, and possibly others’ too😅.
I edited my post appropriately, my bad. Thanks a lot for pointing it out!
Again just for clarity you can execute and halt the following loop by reading it:
START: START COMMENT This kind of loop cannot be left computationally-deterministically but it can be left or halted deterministically-uncomputably or chaotically-randomly². END COMMENT GOTO START
Maybe that is a much better demonstration? As it leaves the whole meta and trans-computational stuff aside.
²which is not computational either (I am not talking about classical chaos or pseudo-randomness here, as that can be computed in principle, although not in practice, but about uncomputational chaos, whether deterministic or stochastically-deterministic, meaning with a quantifiable probability, or i n d e t e r m i n i s t i c in a way that is not stochastically quantifiable)
I think the difference between computational loop and uncomputational loop can be explained by a computational loop being at the very least a loop between two states. An uncomputational loop is just a loop going around and back to a single state.
As for a program you need at least two states (For example S start and E end).
The mere state “loop” is not a program, it is just a loop. O
♾
An instruction can be open ended: S start. That is not a computer program. It cannot be computed, as it is open-ended. You cannot tell a computer just start, and do nothing else. That is just a start of some kind, not a computer program, haha. Maybe in this case the start into a new future of clear reasoning about uncomputability, who knows?
It seems to me in many ways illustrations would be more useful for this than words. As in a certain way, it seems the structure becomes more clear by way of illustration than verbal elucidation. So again, also thanks for giving me some reason that maybe the structure can be made clearer!
I feel I am a bit too serious about this. I used to think computers will solve a lot of my problems too and even perhaps digitally revive me after I am dead😅. They do solve MANY problems, but not as many as I thought.
I think this past belief still haunts me a bit, so that I take this whole thing more heavily than it needs to be taken. Computers work great. My brain works, at least somewhat okayishly, enough to write this to you. What is heavy is heavy illness or heavy conflict. Not really this kind of question or conversation🙏.
I believe that this insight will at some point gain more popularity in AI culture, as it just becomes clear that computers do not fulfill all of our expectations, and that there are some false assumptions going around about what the brain and the mind are (Penrose addresses those things quite well lately, as he now tends to leave aside the OR stuff, which seems a bit questionable).
You say that the program is meant to be executed. But then you say it is meant to be “halted or left or transcended”. If you do those things then you are not executing the program any more. You might be doing something _better_ than executing it, but in any case you are doing something _different_.
It seems like you are saying (1) that a human being can execute the program but then halt/leave/transcend it, and (2) that a computer cannot. But this is only true if you (1) allow “execute” to be used in a metaphorical sense when talking about what the human being does but (2) don’t allow it to be so used when talking about what the computer does.
It is perfectly possible for a computer to “execute” code in a way that allows it to stop when (for instance) it notices certain kinds of lack of progress. I gave a simple example elsewhere in this thread. So it isn’t at all true that humans can execute code while retaining some ability to stop doing so but computers can’t; computers can do that too.
Computers can’t (at present) execute code while understanding what English-language comments in the code say and acting accordingly because computers don’t yet understand English very well. It could turn out that this is because for some reason understanding human language is impossible for merely-computational systems, but I see no reason to think so and you haven’t offered any.
Of course if you take your code, turn it into an executable program, and make that the program the computer is executing then it will loop for ever. A human being won’t do that because you don’t have any way to make a program the program the human is executing. (Fortunately, perhaps.) This isn’t a difference between what humans can do and what computers can do, it’s a difference between how you use the term “execute” when talking about humans and when talking about computers.
Your “much better demonstration” still does not demonstrate anything, at least not in the sense of conveying anything useful to my mind.
I am not sure exactly what you mean by “state”, but a machine-language program consisting of a single instruction saying “branch to the same address this instruction is at” doesn’t loop “between two states” in any useful sense. When executing this loop, the computer’s program counter always has the same value, all its registers keep the same value, and the contents of memory do not change.
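A quick sketch of that point, treating the machine state as a (program counter, registers, memory) triple (the representation is illustrative):

```python
def step(state):
    # The machine's only instruction is "branch to the address this
    # instruction is at": the new program counter equals the old one,
    # and registers and memory are untouched.
    pc, registers, memory = state
    return (pc, registers, memory)

s0 = (0, (0, 0, 0), ())   # (program counter, registers, memory)
assert step(s0) == s0     # one step later: the exact same single state
```

Each step maps the state to itself, so the execution visits one state, not a cycle between two.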
>This is just flatly false. Of course blindly executing the code is one thing a literal computer can do when faced with it, but there are others, such as translating it into directly-machine-executable instructions, or checking it for certain classes of bugs, or counting the words in it, or executing it while watching for easily-spotted infinite loops.
I said the program is to be executed! If you say you choose not to execute it and process it in other ways instead, then the argument doesn’t work.
But a computer cannot do that; it cannot decline being programmed, unless it is a computer that has programming functionality disabled (which obviously is not the kind of computer I am talking about). A functioning computer simply executes a program if you give it one and run it.
That is why the reasoning is valid: I executed the program and validated it (obviously it is easy to validate, as it is just a simple START: GOTO START loop). Of course if you have a high-level computer that just does not permit programming by an end user, the argument does not work, as the program is not being executed. But it is simple to just access a new low-level computer internally which can be easily programmed, which is what I did. Maybe that should be part of the instruction. Because it is true we have a lot of internal quasi-computations going on already. I say quasi, because I notice how tricky it can be to actually build a precisely functioning computer inside your mind; a lot of our computations are very fuzzy and could potentially be wrong (I tried adding simple 3-digit numbers and it is surprisingly tricky).
The program is not merely to be executed, of course, as it is not about blindly executing; that would not be very fruitful. It is about mindfully doing so.
It is a simple GOTO loop. The comment is a comment; it is not part of the program!
You can also attempt to execute the comment as a START GOTO START loop (you can basically ignore the text apart from “This loop cannot be left computationally”, which is logically true, as the computation is just a START GOTO START loop, which cannot be left computationally if executed correctly), but that leads to an inconsistency if the text in between is processed. You can correctly spot the inconsistency and exit the loop, and thus execute the instruction (which is not a program) correctly. A computer cannot, as that is not part of the program but of the commentary (obviously, as it says START comment; and this comment here is outside the program in the post, in the comment section, just to remind you). It is not part of the program described in the post and to be executed by reading.
Seems easy enough to understand. It seems you are doing meta-computation or unrelated computations, which as I said is not part of the program described which is crucial for the reasoning to make sense.
You cannot compile the comment correctly, as it is not a computational instruction. It is a comment, and as such the only way for the compiler to treat it correctly is to discard it.
The point of my argument is to look at assumptions and see if they are true, not to assume some assumptions are true and inductively conclude from that they are true. It might be the assumptions we have are false.
If you want to execute the program correctly, first delete the comment, and then run the program.
Of course if you want to understand the program you can read the comment. That is what a comment is for, as you know, but it seems I have to point it out, with all the complex commentary😉. But you cannot understand this program without running it, as it is a program that runs by reading it.
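The “first delete the comment, then run the program” step is itself easy to mechanize; a minimal sketch, assuming the START COMMENT / END COMMENT syntax used earlier in the thread (the function name is mine):

```python
import re

def strip_comments(source: str) -> str:
    # First pass of a toy translator: comments are discarded entirely,
    # so nothing written inside them can reach the executor.
    without = re.sub(r"START COMMENT.*?END COMMENT", "", source, flags=re.S)
    return " ".join(without.split())  # normalize leftover whitespace

src = "START: START COMMENT this text never reaches the executor END COMMENT GOTO START"
print(strip_comments(src))  # -> START: GOTO START
```

After this pass, only the bare loop remains; whatever claim the comment made about computation simply never enters the execution.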
>The fact that human beings looking at the code don’t blindly execute it tells you nothing about whether human beings are computational.
That is true. It seems you tried to blindly execute the comment, but you cannot do that either, as it is not a program, but the comment to explain the program (which is a simple loop, but exists in a context; the same way that a simple loop exists in a context in a larger program, or even a computer that is only running a simple loop exists in a larger context; if you can imagine a simple computer that does nothing but run a START GOTO START, or however that is translated into machine code; I don’t really know those 01111011 numbers).
>The OP promised, in so many words, “a clearly structured argument”, but it does not contain a clearly structured argument.
If you ignore the main point, which is that the program is a program to be executed and the comment is a comment, then it is not an argument but a jumble of words. But that is not what I said! The program can be executed, and then the elucidations make sense.😃
However unfortunately I did make a bit of a blunder here, and it does make you 100% correct here. This is not an argument!
I corrected it by editing my post. My bad.
It is not meant to “convince” but to be true and valid. Being “convincing” is not good enough. So more like a proof, although as it is a program to be executed it is something different than an ordinary proof. I guess a demonstration that leads to a certain true conclusion in a deterministic way (as the loop is not an endless loop uncomputationally, the program can be executed and successfully halted uncomputationally!).
But you probably cannot correctly understand this program unless you loosen your 99.99999% assumption that a human brain works solely computationally, as it leads to a logical inconsistency which might lead you to slip up and read a comment as a program as you get into meta-computational loops in your own brain (which BTW a functional computer would never do unless it is massively malfunctioning; oopsie, my internal computer malfunctions, as I make this mistake sometimes it seems😅🤨🙄😑).
>You might respond along the lines of ‘oh, no, that can’t be it, because the thing I wrote says that “it is not a meta-(meta-...) program”, etc.’ but I think that would be utterly confused: again, whether or not I “am” in some sense a computer program, what I do is not to blindly execute programs people put in front of me.
The program is not meant to be executed like a computer would (as it is a pretty boring program and might annoy you if you attempt to), but to be transcended uncomputationally, or omega-1-trans-computationally (as defined and exemplified in the text), which you can do and which will not create a loop.
The instruction with the META says it is NOT A program. It is a comment. It is NOT a program.
“I don’t attempt to “execute” the comment; it’s just a comment, and comments in a program are often wrong or misleading”. Of course you should not attempt to execute it computationally; it is a comment meant to elucidate the program. It can be executed in an attempt, but that leads to an infinite meta-computational loop, or you transcend the computational logic in various ways (for example through the omega-1-trans-computational logic I described). But you say that is not possible? As you claim to operate computationally. But it is possible, as you do not operate solely computationally.
Is it at least conceivable to you? Can you conceive of an uncomputational loop: START (creating new loops according to the busy beaver function, so first [busy beaver value one] loops, then [busy beaver value two] loops, etc.) GOTO START
So it is conceivable at least. I do not think you can conceive of being a computer, except in the sense that you are the computer and I program you (good to start with a simple program), but the human you will probably want to start thinking: maybe I do not operate like a computer after all.
>Maybe this is what you call “meta-ignoring”. I confess that I don’t at all understand why you say that saying you “meta-ignore” a program is “not an argument”. Yes, the thing is in some sense a valid program that will get something trying to execute it into an infinite loop. So what?
If you meta-ignore something it is not an argument, at best a conversation about ignoring, but it is not an argument about the topic at hand, as you are ignoring it, not addressing it.
I don’t understand what there is not to get about that. There is no valid logical argument in ignoring the reasoning given and saying “So what?”. That is just saying: so what? You can do that in response to anything you read, hear, etc… It is not an argument, although it is a good Jazz song😅
You can ignore the program given and just read the commentary. I guess that is a kind of meta-ignoring that would make sense here and that I can get behind, but that is a process you would engage in; it is also not an argument.
It would also be likely to confuse you, as it would distract from the point of the argument, which is that logically you cannot execute the instruction based on a solely meta-computational logic (as it is a base-level computational argument), as you will get into a (false, meaning not correct on the base level, although possibly correct on the meta, or orthogonal, level) meta-computational loop of some kind doing this; you need an uncomputable level in another abstract dimension, orthogonal to the base-computable level, to transcend the instruction and execute it correctly. You can only execute and transcend it uncomputationally / trans-computationally, or leave it (literally) uncomputationally, or meta-ignore based on spaciousness (just leaving s p a c e for yourself instead of looking at the argument, which is neither this nor that, and it can manifest through us like w h e n we write s p a c i o u s l y), or reject bivalent logic in this context (although bivalent logic makes sense in this context, as you can conclude the comment is true, or even try out the assumption it might be true, and then it would make a lot of sense, as the comment is valid and true in bivalent logic in the same way 1+1=2 is valid and true in bivalent logic). That is the whole point.
Maybe you intend for your program to be “transcended uncomputationally”, but so far as I can tell it is perfectly possible that when reading it I instead transcended it computationally, and I haven’t seen any argument from you that that couldn’t be so. Unless you are indeed arguing that the only thing one can do “computationally” to analyse a program is to execute it blindly, which as I have said a few times now is simply flatly untrue.
You do not need to remind me that the bit saying “meta” is a comment, not a program. I did understand that. (I have the impression that you think people here are stupid because we don’t come to the same conclusions as you do about computation. You should consider other possibilities.)
If the comment is intended only to be a comment, why then I simply disagree that it elucidates anything. It’s just a bunch of baseless assertions and so far as I can tell they are false. They don’t become any truer or any more insightful just because you put them inside a comment. You could equally well just have said “consider an infinite loop; it can only be exited noncomputationally”. Which, for reasons I have already given a few times, is completely false.
Of course it’s conceivable to me that I might not in fact be doing something computable. Maybe I am really thinking not with my brain but with some magical immaterial soul. Maybe the laws of physics are not at all what I think they are and my brain is doing some tricks that go beyond what any Turing machine could do. I don’t claim that I definitely am computable. Only that so far as I know I am, and that your arguments so far have given absolutely zero reason to think otherwise.
I cannot conceive of “being a computer” if you mean e.g. being a present-day PC. I don’t see what is supposed to be difficult about conceiving of “being a computer” in the sense of “embodying some in-principle-computable process”.
(I don’t put much weight on what’s “conceivable”, anyway. It might turn out that the real nature of the world is something I don’t have a big enough mind to conceive. Or I might think I can conceive something that actually turns out to have subtle internal contradictions that I haven’t noticed. Whether something “can be conceived” has little to do with whether it’s how things actually are, or even a way things could actually be.)
I agree that if you present some reasoning and I ignore it then I am not having a useful discussion with you about that reasoning. But that’s not what I was saying I might be doing, when I said maybe I was doing what you call “meta-ignoring”. The thing I was ignoring is not any sort of reasoning that you have presented.
Your last paragraph, which apparently “is the whole point”, seems to me to consist entirely of muddle. I can “execute the instruction” if I want to, though I won’t because I don’t want to be in an infinite loop. I can understand the program, see what it does, and decide not to execute it because I can already tell what it will do; that can be done purely computationally and compilers do pretty much this thousands of times every day. If it turns out that my mind is non-computational then clearly it can also be analysed non-computationally by doing whatever my mind does. None of this tells us anything about “the limits of AI”. And none of this means that to “execute it correctly” requires anything uncomputational; in fact, I would say that “executing it correctly” just means running the infinite loop and if you do something else then what you are doing is not executing it at all.
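That "tell what it will do without running it" step can itself be written as a program. Here is a minimal sketch (the mini-language and the function name are made up for illustration; this is not a general halting decider, which cannot exist, just a pattern-matcher for one trivial case):

```python
# A toy static analyser for a hypothetical mini-language: it inspects a
# program, recognises an unconditional self-jump, and reports
# non-termination without ever executing the loop.

def provably_loops_forever(program):
    """Return True if some line is an unconditional jump to itself."""
    labels = {}
    for i, line in enumerate(program):
        if ":" in line:
            labels[line.split(":")[0].strip()] = i
    for i, line in enumerate(program):
        body = line.split(":", 1)[-1].strip()
        if body.startswith("GOTO"):
            target = body.split()[1]
            if labels.get(target) == i:  # jumps straight back to itself
                return True
    return False

print(provably_loops_forever(["START: GOTO START"]))  # True: decline to run it
```

The analyser reaches the "this never halts, so don't run it" conclusion by purely mechanical means, which is all the paragraph above claims.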
>decide not to execute it because I can already tell what it will do
Since when do computational systems have free will?
>You should consider other possibilities
You can only consider possibilities that are possible. Not everything is possible, or even conceivable.
This statement CANNOT be correctly read deterministically & computationally & in bivalent logic.
It can be true, but it being false is inconsistent, as then you have two logical options with no logical way of deciding between them. So it IS true. It is not “possible” it is false.
It is only possible you can label it “false”, but then you just use false as another word for true, which is literally 100% backwards.
>(I have the impression that you think people here are stupid because we don’t come to the same conclusions as you do about computation. You should consider other possibilities.)
I am not saying they are stupid at all, but if you use your intelligence for 100% backwards logic it might not be too helpful.
If I say 1=1.000000000000000000000000000000000000000001 in the context of mathematical equality and bivalent logic, it is 100% false.
People here want to say it is 99.9999999% true and take that as an axiom, but it is wrong in mathematics. It is a WRONG axiom that is extremely convoluted and backwards. That is unfortunately the state of modern mainstream rationality when it comes to topics such as computability, with a few exceptions like Penrose, who is mainstream I suppose, although often still completely ignored by many people that have a lot of belief in AI etc...
Maybe that you can still operate well based on such axioms is proof you have a much higher IQ than me. I couldn’t do that, as my brain would basically implode. But IQ is measurable intelligence. You cannot measure the transcendence and vastness of infinity, as it is, well, infinite. So that kind of intelligence is just beyond any measure of intelligence.
And I will say kids with a still “technically” low IQ, in terms of IQ tests for adults, can embrace and talk about topics of vastness such as outer space quite well.
META-IGNORE is isomorphic to uncomputational leaving and your computer did it when it processed your post.
Maybe to simplify it a bit again:
The following is true by definition of how a correctly working computer works; it follows the instructions and runs in an endless loop.
However, it does not apply to the working of your mind, or mine, as you do not work solely computationally. You are capable of uncomputable, undefined, s p a c i o u s (as your brain is pervaded by space and you talk about it and access it to some degree), trans-computational, orthogonal-computational (meaning computations that are logically orthogonal to, or independent of, the base-layer computation) behaviour.
START: “<comment> This kind of loop can be left uncomputationally, it cannot be left computationally.” GOTO START
The problem with the orthogonal-computational is that there are only countably many computational directions, so the omega-1-orthogonal-computational IS uncomputable in the orthogonal dimension, and a human CAN reason about it and hence it affects their brain. START: “<comment> This kind of loop can be left uncomputationally, it cannot be left countably-orthogonally-computationally or independently-computationally → as the computation is still running as long as it is being executed correctly computationally, even if you start a new computation.” GOTO START
It is true, but maybe it would be better to say it is ABSOLUTELY true, meaning it cannot be conceived to be false, only labeled “false”=true, which means it is still true. Which is maybe the issue here: trying to conceive otherwise throws an error or puts you into an illogical state, so you only label “false”=true. Which is… well, impractical?
You could ditch the whole statement, but again, that is like shutting off the computer. It still is true, as a computer cannot leave that loop in any way that does not involve computational magic (meaning the computer enters the etheric or epsilon-realm, as I call it, and operates in a way it was not programmed to behave, which it is unlikely to do, in terms of quantum mechanics😉) or ceasing to function (leaving just the computer without function, which is more like a piece of metal, or in terms of abstract machines more like a mere set of symbols or states that don’t do anything).
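As a side note, the "a comment is a comment" part can be checked in a real language. A minimal Python sketch: the comment is discarded at compile time, so execution cannot depend on it either way.

```python
# Transcribe the loop into Python, comment included, and compile it.
src = (
    "while True:  # This kind of loop can be left uncomputationally\n"
    "    pass\n"
)
code = compile(src, "<loop>", "exec")

# No trace of the comment survives in the compiled code object: comments
# are stripped by the tokenizer before any bytecode is produced.
print("uncomputationally" in repr(code.co_consts))  # False
```

Whatever the comment asserts, it is invisible to the machine that would run the loop.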
Your example doesn’t justify this. It just shows an input that humans process not in the same way as some computers. But different computers also process it differently, so why do you think human processing is not computational?
I am not trying to justify anything. I am just trying to show to you what is ALREADY the case within your brain and mind.
It is NOT about what I think. What I *think* is basically irrelevant to what I am talking about here. It is not a matter of opinion at all. It is about logical reasoning.
Or about s p a c i o u s n e s s.
As it is true that is beyond logic and truth.
But again, how do you compute space? There is, by the nature of what space is, nothing to compute about it (not “whitespace”, which is a symbol; I am talking about the space on your screen, in the world around your screen, and in your mind and around your body etc). Yet clearly we can talk about it and even literally put it into our l a n g u a g e.
Or about non-rationality. I can just type anything I want aDSGOo9agdu9hifadllkfd. This proves I am right, because I axiomatically set this as a truth. 🧘♂️🧘♂️🍳🥠🦜😅🖱🐸
It seems the latter explains a lot of what I see from the so called rationality community and from the so called “programmers”. It is fine to engage in this type of language or thought process, but it is problematic to call non-rationality or even outright irrationality rationality, as rationality should be a sphere of logic to a good extent, or at least of humble and careful reasoning, not just saying I am right because my axioms are true according to my own axiomatic system, which might be about as right as 1=1.00001 if I look a bit more closely.
I mean you didn’t present a logical argument for “the human brain works uncomputationally”. Your example program can be processed by a computer that assigns “true” to the comment for the same reasons you do. Unless you define “uncomputationally” as some particular way you can process things that may be performed by real-world computers and that doesn’t have anything to do with BB, what you are talking about is not the case within any brains.
>I mean you didn’t present a logical argument for “the human brain works uncomputationally”.
That might be true, in so far as it is already the case. There is no argument to be made, I only point out what is already true.
The human brain cannot be conceived to work solely computationally as this statement is starting to be understood to be true based on conclusive bivalent deterministic reasoning.
True → It is true.
False → It “can be conceived?”, but it cannot be conceived to work that way if it really really does not work that way. If you consider that it might or might not, you have two options with no way to deterministically, logically choose between them. So conclusively, deterministically only the true option remains, as even the false option is inconsistent in bivalent, deterministic logic, which is what is being assumed here in this particular context (which doesn’t negate that other logic is valid in other contexts). But there is no general way to compute inconsistency, it becomes evident in another way.
This is not really an argument, it is just a simple stating of bivalent truth.
It is not meant to persuade, but to point out the truth that is already the case.
Meaning: yes, there is uncomputable deterministic reasoning. Which doesn’t mean there is not also a lot of quasi-computational reasoning happening, or indeterministic reasoning, and even indeterminate but non-probabilistic reasoning (→ Knightian uncertainty, or one might call it Knightian indeterminacy: stochastically unquantifiable indeterminacy).
Yes, my computer did meta-ignore this post, because it didn’t process its semantic content.
It didn’t leave anything, as it did not execute the program I gave. There is no reason to let it run the program, as it is just a simple loop as far as a computer is concerned.
So I am not sure what you mean by “isomorphic” in this context. How is finite syntactic processing isomorphic to an uncomputable process (like the busy beaver function, for example)? That kind of process is what I mean by uncomputational leaving. I do not mean merely not doing a certain computation (which can be called “leaving it aside”; I mean leaving something after having entered it, which my computer did not do), nor doing anything that is computational (which a computer or Turing machine can do, and which is pretty well-defined in that way).
I mean there are no uncomputational processes involved in humans processing your program—humans just process it in a different way, the same way your computer processes it differently when you post it.
I guess it is a definitional issue too.
That is maybe why, in some way, I might just have no chance of getting through to many people.
If you define “uncomputational processes” in your mind as = computational, or you say 1=1.000000001, which is 100% false in mathematics, what can I do?
If you define “false”=true and “true”=false in your mind, and maybe sometimes the reverse, it seems conversation about true and false becomes basically impossible.
The fact of the matter is we can PROVE certain abstract functions to be uncomputable. Where does the human system get its information on that from? It cannot be random, as it is a deterministic function. It cannot be from the environment, as it is utterly unrelated to anything practical. It cannot be computational, as no finite series of states can make you draw any abstract conclusions about whether they come from something computable or uncomputable. So where does it come from?
I guess the computationalist answer is: busy beavers don’t really exist. But they DO, as an abstract function. And abstract functions CAN affect our brains when we think about them. Obviously!
So how exactly can you explain that humans can reason about uncomputability? How does that enter our brain, if it does not already exist there?
They do not “actually” arrive at those ideas, but just randomly? A monkey at a typewriter that came up with the strings “uncomputable” and “busy beaver”?
Or they magically get the ideas about uncomputability and the correct results based on that (i.e. they only do computations, but they magically do them in an invalid manner to still get the right result or the right idea, like the busy beaver function being uncomputable and a certain BB value being above 100^100^100^100)?
The same way humans can reason about computable functions that their brain can’t compute, like figure out whether 2^12345 is more than 0. Yes, it involves indeterminism and correlation with environment, but it is true for reasoning both about computable and uncomputable functions. And reasoning about both of them can be performed by computer. Like, why don’t you call reasoning about 2^12345 magical? It involves the same kind of non-deterministic manipulation of finite set of symbols as reasoning about BB.
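To make the 2^12345 example concrete, here is a minimal Python sketch (the helper function is made up for illustration): both the symbolic rule and the brute-force evaluation are ordinary computation.

```python
# Two computational routes to "2^12345 > 0": a one-line symbolic rule a
# mechanical prover could apply, and direct big-integer evaluation.

def positive_power_is_positive(base, exp):
    """Symbolic rule: a positive integer raised to a nonnegative power is
    positive. No 12345-bit arithmetic needed."""
    return base > 0 and exp >= 0

print(positive_power_is_positive(2, 12345))  # True
print((2 ** 12345) > 0)                      # True: big integers handle it too
```

Neither route requires anything beyond finite symbol manipulation, which is the point of the comparison.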
Busy beavers have no correlation with the environment, except in the name, which to be fair is interesting and inspiring, as it is a cool animal, but ultimately does not logically relate much to the function itself.
It is good you acknowledge indeterminism. There we are getting somewhere interesting!
However, a computation as normally defined operates deterministically. That is what we consider a correctly functioning computation to be.
So the subject of indeterminism is very interesting, but it itself shows you that you are not operating computationally. Also something I have pointed out numerous times.
People like to refer to heuristics or probabilistic computing, which is indeed not deterministic but can still operate within a computational space (whether it is computational itself does depend on the definitions), but it also does not suffice, as humans can understand things that are neither probabilistic / heuristic nor deterministic.
This statement is true and not probabilistic but not determinate.
Or even neither deterministic nor indeterminate, like space, as it is beyond nameability, so the word indeterminate itself becomes i nd ete rm ina a a a tee eee ee e and infested by space👽.
There may be relatively little correlation, but there is some: its properties can be proved by contradiction, which is also useful for proofs actually useful in the environment. It may have some vague structural appeal for humans that are shaped by the environment. But this is not very important to the current discussion.
Abstractly, I don’t think indeterminism is even required—you can prove everything humans know about BB by computably, deterministically enumerating proofs. But I’m bad at logic, so I decided to stick to the real world. Speaking about acknowledgement—do you acknowledge that your arguments have nothing to do with BB/computability and work the same way for human reasoning about 2^12345 > 0?
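As a concrete illustration of establishing a fact about BB computationally, here is a Python sketch. The transition table is the standard published 2-state, 2-symbol busy-beaver champion machine; simulating it gives the lower bound S(2) ≥ 6 by pure step-counting.

```python
from collections import defaultdict

# (state, symbol) -> (write, move, next_state); "H" halts.
# This is the known 2-state, 2-symbol busy-beaver champion.
TABLE = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H"),
}

tape = defaultdict(int)          # blank tape of zeros
pos, state, steps = 0, "A", 0
while state != "H":
    write, move, state = TABLE[(state, tape[pos])]
    tape[pos] = write
    pos += move
    steps += 1

print(steps, sum(tape.values()))  # 6 steps, 4 ones written
```

A deterministic finite simulation thus proves a true statement about an uncomputable function: its value at 2 is at least 6.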
By that definition computers are also not operating computationally.
All these things can be understood by computers in the same way. Including “This statement is true and not probabilistic but not determinate.”. I don’t see what reasoning steps for that statement you can present that wouldn’t be either deterministic or non-deterministic, but even if your terminology allows it, these steps also could be performed by computer.
Is your point that many tasks we now use computers for involve different sorts of algorithms from the ones used by humans, instead of being about what is possible in principle? Then sure, except theorem-provers and neural networks and neural-network theorem-provers are already a thing.
>you can prove everything humans know about BB by computably deterministically enumerating proofs
This is an oopsie right here. Of course you can enumerate proofs, but if you enumerate wrong proofs along with right proofs, where is the truth in that? You would have to distinguish true from false, and no finite set of technical symbols or states can decide between computable and uncomputable, as abstractly a computation could generate all finite sets of abstract symbols or states (no matter whether they correspond to truth or not).
>By that definition computers are also not operating computationally.
Very well observed in a technical sense. That is why I am first and foremost talking about abstract computability. In a sense that term might not be the best, as it is indeed not that related to real life computers.
I thought of using the term “definable, infinitely regularizable” or something akin to it for computability and “process-wise irregularizable but deterministic” for deterministic uncomputability.
>these steps also could be performed by computer.
I am not talking about steps here, but about continuum. The real numbers for example are not step-wise, they are continuous and contain uncomputable and undefinable sections in a technical sense (you can google that).
You can check that every proof you enumerate consists of only valid steps. That check can be performed computationally. Every such proof would prove a true statement.
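A toy version of that enumerate-and-check procedure (everything here is a made-up miniature formal system, purely illustrative: one axiom “0=0” and one rule “from t=t infer S(t)=S(t)”):

```python
# Enumerate candidate proofs, keep only the machine-checkable ones.
from itertools import product

AXIOM = "0=0"

def valid_step(prev, line):
    """A line is valid if it is the axiom, or follows from the previous
    line by the rule t=t |- S(t)=S(t)."""
    if line == AXIOM:
        return True
    if prev is not None:
        lhs, rhs = prev.split("=")
        if line == "S(%s)=S(%s)" % (lhs, rhs):
            return True
    return False

def valid_proof(proof):
    prev = None
    for line in proof:
        if not valid_step(prev, line):
            return False
        prev = line
    return len(proof) > 0

pool = ["0=0", "S(0)=S(0)", "1=2"]  # candidate lines, including a false one
theorems = set()
for n in (1, 2):
    for proof in product(pool, repeat=n):
        if valid_proof(list(proof)):
            theorems.add(proof[-1])

print(sorted(theorems))  # ['0=0', 'S(0)=S(0)'] -- "1=2" never survives
```

The enumeration produces wrong candidates too, but the purely computational validity check filters them out, so only true statements of the system remain.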
Reasoning about the continuum can be done in discrete steps. Human reasoning doesn’t use the continuum in any way fundamentally different from how neural networks use floats.
But okay, abstract computers. So you are saying that humans work differently from some abstract computer. This is true in a sense. Why is this interesting, again? In what way is it still true about abstract neural networks instead of abstract computers? Can’t neural networks think about space? Or is your point that no currently known computer system can reason while changing abstraction level in broad range?
>You can check that every proof you enumerate consists of only valid steps. That check can be performed computationally. Every such proof would prove a true statement.
It cannot. Truth cannot even always be defined, let alone computed (Tarski’s undefinability theorem). Of course you cannot compute something if it cannot even be defined; a computation needs defined states and defined outputs to be a computation as understood in computer science. That is a proven fact, it is not my opinion.
“Tarski’s theorem, on the other hand, is not directly about mathematics but about the inherent limitations of any formal language sufficiently expressive to be of real interest. Such languages are necessarily capable of enough self-reference for the diagonal lemma to apply to them. The broader philosophical import of Tarski’s theorem is more strikingly evident.
An interpreted language is strongly-semantically-self-representational exactly when the language contains predicates and function symbols defining all the semantic concepts specific to the language. Hence the required functions include the “semantic valuation function” mapping a formula A to its truth value ||A||, and the “semantic denotation function” mapping a term t to the object it denotes. Tarski’s theorem then generalizes as follows: No sufficiently powerful language is strongly-semantically-self-representational.”
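For reference, the theorem being invoked can be stated precisely (a standard formulation):

```latex
% Tarski's undefinability theorem: arithmetic cannot define its own truth.
\textbf{Theorem (Tarski).} There is no formula $\mathrm{True}(x)$ in the
language of arithmetic such that for every sentence $\varphi$,
\[
  \mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner)
  \;\leftrightarrow\; \varphi .
\]
\emph{Proof sketch.} If such a formula existed, the diagonal lemma would
yield a sentence $\lambda$ with
$\mathbb{N} \models \lambda \leftrightarrow
\neg\mathrm{True}(\ulcorner \lambda \urcorner)$, contradicting the assumed
property of $\mathrm{True}$ at $\varphi = \lambda$.
```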
>Reasoning about continuum can be done in discrete steps. Human reasoning doesn’t use continuum in any way fundamentally different from how neural networks use floats.
I am not talking about reasoning about the continuum as a representation, in which regard you are correct: I can refer to the continuum or the real numbers as a symbol and reason about that. But that is just not the kind of reasoning I am talking about. I can add two such functions correctly, like sin(x)+sin(x)=2*sin(x).
It is not the kind of reasoning I am talking about, as I am reasoning about the continuum itself, not the representation you have of it (or the computer has of it), so I am talking about this: [ ], or maybe this ________.
Depending on whether you mean something more tangible or more intangible, both of which are valid, although the real numbers seem more like the latter, as they are one-dimensional.
I am talking about it as it appears on your screen, not as my reasoning and text with whitespaces are represented in the computer, which is another question.
You maybe have an interesting point here in that while we can reason about continuum, it is more striking we reason within continuum. As you see the space around the letters, you will notice it is continuous. There are no steps there. My reasoning right here, even as it appears on your screen, happens within that and it can even enter the c o n t i n u u m to create more c o n t i n u u m.
One might say the continuum I am reasoning within (or shall I say we, as you are also now in the process of understanding) is the box this comment appears in—as far as the written language as it appears goes—or, even more broadly, the space in front of my and your computer screen where the pixels end in the depth dimension; where the written words are. And ultimately outer space, which we are both floating within✨.
>But okay, abstract computers. So you are saying that humans work differently from some abstract computer. This is true in a sense. Why is this interesting, again?
Because abstract computers are a more generalized notion than physical computers, as we use abstract computational logic to program and even construct physical computers (I guess in theory there could maybe be “other” computers that are not like that, that maybe do not need to be programmed, or are programmed in a way that does not rely on certain concepts like binary logic or binary data; but then you are stretching the definition—I can just define the whole universe as a “computer”, but that doesn’t tell me anything except that I arbitrarily choose to define it that way).
So it implies that it is even more true that humans work differently than real computers, and real computers cannot replace real humans, not even just in terms of reasoning, let alone perception, feeling, consciousness, which are all more than just reasoning.
At least abstract computer logic DOES work in the human brain. I will not deny that, although it is often more like quasi-computational, as it is approximate and sometimes might be wrongly applied (like literal mathematical mistakes), and computations that are frequently wrong in a practical sense are not <💻⚙computations>, which we expect to actually work the way a computer works; they are more like an elementary school child doing math.
However, real computer logic does not work in the brain, as it uses physical logic gates, silicon materials, etc. It is also very precise logic that can add or even multiply extremely long numbers with nearly 100% accuracy. So there it is even more clear and undeniable that a human is not computational in that sense, as it is far, far more inaccurate.
Tarski only applies to the whole concept of truth. For “things humans can prove” you can use enumeration + a validity check.
I don’t think humans actually use the continuum much when thinking: the appearance of continuous space between words is just a failure of introspection, not a consequence of use of the continuum—actual elements of human cognition are discrete. And real computers also exist in space—that’s one more reason to consider them.
No? “A more generalized notion” would mean all real computers are like your abstract computers, and that is false. Again, there is an obvious counterexample—your abstract computer can’t leave your loop, a real computer can. So the implication that “it is even more true humans work differently than real computers, and real computers cannot replace real humans” doesn’t work.
I feel an issue here is emotion:
This kind of reasoning might make you feel emotionally uncertain, as it shakes up your assumptions of how the world or the mind works. It can make me feel uncertain too (not sure why really; I feel maybe I am a bit wary of upending other people’s thinking without being careful enough). This however is a rationality site, so I note this and do not budge to emotionality, and would recommend addressing this in some compassionate way. Things like computer logic are largely independent of our emotions (largely in the sense that if we program emotionally, the program might not work, haha).
It might be like an earthquake. Thing is, clinging to certain views based on emotional reasons is not only not rational, clinging in general is a cause for dissatisfaction (like the Buddha called it, Dukkha).
The truth is, the world will shake us up. If not in a gentle way by reasoning or unconventional humans, then in more violent ways, by sickness and death.
But anyway here the less emotional part:
>Tarski only applies to the whole concept of truth. For “things humans can prove” you can use enumeration + a validity check.
I am not talking about things humans can prove computationally. That is the whole point: humans are beyond computability. And yes, you might say I talk about the whole truth, in so far as that is a meaningful concept; I am not talking about a certain set of true statements, like 1=1, 2=2, etc...
However, human reasoning transcends the human brain; maybe that is the stuck thing that keeps you going back to this kind of logic.
Human reasoning is on your screen right here; it is not just in your brain. These are not just symbols. It is actually written by a human and is an expression of human and transcendental reasoning, and it is linguistic reasoning. So the human reasoning is not just in the brain, it is literally on your screen. That cannot be comprehended in conventionalist or step-wise terms, but it is evident, as it is right here.
>I don’t think humans actually use the continuum much when thinking: the appearance of continuous space between words is just a failure of introspection, not a consequence of use of the continuum—actual elements of human cognition are discrete. And real computers also exist in space—that’s one more reason to consider them.
Huh, it is here: ___________________________________________________________ that is a line continuum in thinking.
That is spacious continuum surrounded by linguistic reasoning: [ ].
I am not talking about “elements of human cognition”.
That is independent of what you think, as it can be perceived. You can perceive independently of verbal, image-based, or conceptual thinking. Even if your perception is not comprehensible to you (which it really is not; even just what is related to it in terms of brain function is far too complex).
I am talking about the continuum of human cognition, which is beyond your personal mind; i.e., it is expressed through my brain to get to your screen, but it is reasoning that is input to both your brain and mine (as photons going from the screen to your retina and into your brain): ___________________________ __
See the small step at the end? That is “one element” of human cognition, a step within the continuum that is not really discrete in a concrete way: >>>[ ]<<<<<
If you want to be more concrete, you could take this ⚫ as a discrete element of human cognition. You are right these are discrete. But they are not in your brain but on your screen, or in the matrix of general geometry; like the geometry of the letters surrounding the box.
Can space be used? How? It is space. Space is not a “thing” to be utilized. That is not what space is. At best it is whitespace to be u t i l i z e d. But as you see that is just more space, it is not really a thing per se.
And this is not about introspection; it is about extraspection, as the words are on your screen, not inside your brain or just your individual mind. Extraspection is not a “failure of introspection”; it is literally what allows for the perception of the external world.
>No? “A more generalized notion” means all real computers are like your abstract computers and that is false.
Well, I can grant you that. A more abstractly generalized notion. It seems it is not definable in a broader, more general way, as I do not know how to “generalize” a literal real computer. It is there; it is not something I can generalize like a concept.
We do utilize a more abstract, generalized notion to make sense of the programming and logic of computers, though. It is more general in so far as it is the set of ALL computations² (computers just run a finite amount of computation, given their physical limits) and can run indefinitely (a computer stops running once its lifetime is over). So that seems a meaningful sense in which it is broader and more general, but I grant you it cannot quite be applied to a literal physical computer.
Those abstract computations transcend computers, as they can also manifest in the human brain; and they also transcend the human brain, as they can manifest in a computer, in someone else’s brain, etc.
²Within a certain framework, like a Turing machine, a programming language, etc.
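To make the footnote concrete, here is a minimal sketch of such a framework: a tiny, assumed Turing-machine simulator (the state names, tape alphabet, and example transition table are all hypothetical choices for illustration). “All computations within the framework” then means all transition tables one could feed it:

```python
def run_tm(transitions, tape, state="start", steps=100):
    """Simulate a Turing machine given as a dict mapping
    (state, symbol) -> (new_state, written_symbol, move), move in {-1, 0, +1}.
    Returns the final tape as a string when the machine halts, else None."""
    cells = dict(enumerate(tape))  # sparse tape; missing cells read as "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells))
        key = (state, cells.get(head, "_"))
        if key not in transitions:
            return None  # no applicable rule: the machine is stuck
        state, cells[head], move = transitions[key]
        head += move
    return None  # did not halt within the step budget

# A hypothetical machine that flips every bit, then halts at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(flip, "1011"))  # 0100_
```

The abstraction is exactly the point made above: the simulator’s step budget stands in for a physical computer’s finite lifetime, while the mathematical machine has no such bound.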
Maybe—I can see how emotions may prevent a human from accepting that humans are not special.
Everything that humans can prove can be proven by that computable algorithm, including things you call “uncomputational”. If your point is just that humans are real so they can affect space and abstract computers are abstract, then real computers also can affect space, and perceive space better than humans. Is there a part of this you disagree with, or do you just think that “people don’t talk about real-world effects of computers” is important?
If you need to generalize it to make your point, you can imagine your computer but with infinite memory and arbitrarily fast execution speed. It can do literally everything a human can.
>If your point is just that humans are real so they can affect space and abstract computers are abstract, then real computers also can affect space
How can you affect space? Space is absolutely inert physically. There is no way to affect it.
I think you meant space as in w h i t e s p a c e. But that is basically just more light on your screen, not darkness, as real space is.
>Everything that humans can prove can be proven by that computable algorithm.
How would an algorithm prove that 1=1?
How do you know it is true, unless you define it that way first?
So it seems a computer cannot prove it, unless you first define it to be true.
That, however, can be proven, as you can define validation in a way that cannot itself be validated by a computer.
If you feed it the information 1=1, it could be programmed to output “false”. So if the definition of validation is absolute validation with no possibility of a false output, that definition is clearly wrong, as you can program a computer to falsely claim that 1=1 is false.
However, I doubt you would do that, as you can see the uncomputable consistency and the absolutely certain truth of 1=1.
This means the whole spectrum from completely false to completely true is available to a computable validation mechanism. A true proof is only true and can only be true, as otherwise it would not be a proof, in so far as bivalent logic applies at all (which, I admit, it does not always).
If you can validate in a wrong way (meaning a false validation, or deducing the right conclusion from a false premise, so that you arrive at something correct by invalid means), that is not really proving.
If you arbitrarily decide or guess whether the proof is accurate or not, it is not a proof.
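The point that a machine’s verdict on 1=1 depends entirely on what it was programmed to accept can be shown directly. This is a minimal sketch; both checkers are assumptions for illustration, one programmed with the reflexivity axiom and one programmed to deny everything:

```python
def honest_check(statement):
    """Accepts "a=b" iff both sides are identical tokens:
    the reflexivity axiom, programmed in as the definition of valid."""
    left, _, right = statement.partition("=")
    return left != "" and left == right

def broken_check(statement):
    """A checker deliberately programmed to call everything false."""
    return False

print(honest_check("1=1"))  # True
print(broken_check("1=1"))  # False: same input, opposite programmed verdict
```

Neither checker “knows” anything; each just follows the definition of validity it was given, which is the sense in which the truth had to be defined first.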
This statement cannot be computationally verified: This statement cannot be proven.
Ever heard of Gödel?😅
The statement is true, but not provable. For nothing can be proven if you do not first have a definition of what is true. Since we can see the Gödel sentence to be true, it follows that we can see as true what is not provable, and hence not computationally verifiable either.
So we can realize truth beyond computational verification.
By that definition of validation, humans have never proved anything, because they sometimes say that 1=1 is false.
The Gödel sentence for some formal system can’t be proven in that system, but it can be proven in a more powerful system. Humans who see the sentence to be true are just (reasoning in a way equivalent to) using a more powerful formal system. And everything in that system is computably verifiable.
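It is worth noting that the self-reference in the Gödel sentence is itself a mechanical construction. The same diagonal trick, a template applied to its own quoted text, drives this small assumed Python quine, with no uncomputable step involved:

```python
# The diagonal trick: a template applied to its own quoted text.
template = 'template = {!r}\nprint(template.format(template))'
print(template.format(template))
```

The printed text is a two-line program that prints itself (the comment aside). Gödel’s construction uses the same substitution move, with “is not provable” in place of “print”; this is only an analogy for the mechanism, not the full arithmetization.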
>Maybe—I can see how emotions may prevent human from accepting that humans are not special.
Fully agree here. I definitely have this issue as well!
It is like we want to imagine we are so special in our power and intelligence that we can emulate computers at what they are good at. We really cannot. Computers are already unfathomably more capable in the domain they operate in, the precise calculation of crystallized information. We also imagine we can program computers to do anything we want.
Yet, again, computers will often do what their internal logic causes them to do, not what we intend when we program them, and they will always be like that, as that is how a computer operates.
So there is no hope of ever getting to that level. I sometimes want to speed up my own computational speed, but I have trouble even getting the accuracy right, like calculating with 3-digit numbers effortlessly. For a computer, that is a silly joke in terms of difficulty.
>If you need to generalize it to make your point, you can imagine your computer but with infinite memory and arbitrarily fast execution speed. It can do literally everything a human can.
Huh? I don’t think it could walk, as it does not have legs. I am not being silly here; I think you mean in terms of some abstract computational capability, but that is not what you said.
Again, in a physical sense it is also not true. A computer is made of silicon, so it cannot send information between neurons.
So I am not even sure in which sense it could be true. I think you mean in terms of precise calculation of crystallized information. In that case you have a very good point, as I already discussed above.
But then, the computer cannot do this: 💻💡
As THIS💡 is the light produced by the screen you are looking at when reading this, not a computation. It is not produced by the computer either, but by the screen the computer is connected to.
I assumed it can make legs and neurons instantly using photons radiating from its chip.