Seems kinda weird, right? Well, consider this: Turing showed that there is no computation that another machine can perform but this machine can’t.
Thank you for this post.
I am not sure to what extent you already know what follows, but I thought this might be worth clarifying.
The “basic” Church-Turing thesis is that our intuitive notion of a “formal computation process” is entirely captured by the Turing machine. A consequence is that any “good” formal notion of algorithm / fully described computation process is equivalent to or weaker than the Turing machine.
So far, this has been true of all proposals, and the statement is generally taken for granted. Note that the thesis is about being a good formal definition of an intuitive notion; hence we cannot prove it to be true. We can only prove that various attempts at formalizing computation indeed ended up equivalent to each other.
The “physical” Church-Turing thesis is that no machine can be built that performs computations impossible for a Turing machine. For example: there is no way to build a halting oracle in real life. This is less certain.
But if you took a God’s eye view and had the power to get the community to all shift at once to a different model, it sounds to me like there are probably better ones than Turing’s.
People implicitly refer to models much closer to actual programming languages when they prove things related to computability. The tedious details of Turing machines are rarely discussed in actual proofs, at least in my experience. In fact, the precise model at hand is often somewhat ambiguous, which can be slightly problematic. I think the Turing machine is a good starting point for understanding the notion of computability, just because it is simpler than the alternatives (that I am aware of).
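To make that concrete, here is a minimal sketch (my own, purely illustrative) of a one-tape Turing machine simulator; even a trivial machine needs an explicit transition for every (state, symbol) pair, which is exactly the kind of tedium proofs gloss over:

```python
# Minimal one-tape Turing machine simulator (illustrative sketch).
# delta maps (state, symbol) -> (new_state, new_symbol, move), move in {-1, +1}.
def run_tm(delta, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells))
        symbol = cells.get(head, blank)
        state, cells[head], move = delta[(state, symbol)]
        head += move
    raise RuntimeError("no halt within step bound")

# Example machine: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}
print(run_tm(flip, "0110"))  # prints 1001_
```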
One of my favorites: he proved that, given some extremely, extremely reasonable assumptions[13], you should Shut Up and Multiply. Well, he didn’t phrase it that way. He said “maximize expected utility”.
I am not sure he did, at least to the extent that phrase carries when we say “shut up and multiply”. I guess you are referring to the Von Neumann–Morgenstern utility theorem, but that theorem does not deliver a full “it is in our best interest to maximize utility”. There was some discussion on the topic recently: a first claim and my own answer.
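For concreteness, the theorem’s conclusion is only that preferences satisfying the axioms can be represented as maximizing the expectation of some utility function. A toy illustration (all outcomes and numbers invented for the example):

```python
# Toy expected-utility comparison; outcomes and utilities are invented.
u = {"$100": 1.0, "$0": 0.0, "$40": 0.55}   # some agent's utility function

gamble_a = {"$100": 0.5, "$0": 0.5}          # 50/50 shot at $100 or nothing
gamble_b = {"$40": 1.0}                      # $40 for sure

def expected_utility(gamble):
    return sum(p * u[outcome] for outcome, p in gamble.items())

# VNM says such a u exists for any preferences obeying the axioms; it does
# not hand you u, and it does not say u is linear in dollars.
print(expected_utility(gamble_a), expected_utility(gamble_b))  # 0.5 vs 0.55
```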
I am not sure to what extent you already know what follows, but I thought this might be worth clarifying. The “basic” Church-Turing thesis is that our intuitive notion of a “formal computation process” is entirely captured by the Turing machine. A consequence is that any “good” formal notion of algorithm / fully described computation process is equivalent to or weaker than the Turing machine. So far, this has been true of all proposals, and the statement is generally taken for granted. Note that the thesis is about being a good formal definition of an intuitive notion; hence we cannot prove it to be true. We can only prove that various attempts at formalizing computation indeed ended up equivalent to each other.
The Church-Turing Thesis is in fact false, in that there are other models of computation that satisfy the intuitive notion, and formal notions of computation, that define different computable sets. Basically, you can’t make arbitrary changes to a Turing machine and still define the same set of computable functions: there are models of computation that are genuinely different, and they define different computable sets.
This paper gives a rundown of a specific model of computation in which a Universal Turing Machine is enhanced by a closed timelike curve (CTC) in the Deutschian model of closed timelike curves, and it turns out that the computable sets in this model include both the recursively enumerable sets and the co-recursively enumerable ones. These are not problems that a Universal Turing Machine without a CTC could solve. The newly solvable problems include both the validity and the satisfiability of full first-order logic formulas, and much more than that: such machines can solve any problem that is Turing-reducible to the halting problem, which is huge philosophically speaking.
This paper gives the gist of what a Turing machine with a CTC could solve:
https://www.scottaaronson.com/papers/ctchalt.pdf
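To give a feel for the claimed power: a Deutschian CTC is argued in the paper to yield the equivalent of a halting oracle, and with such an oracle, deciding any r.e. or co-r.e. set takes a single query. A sketch, where `halts` is a stub standing in for the oracle (it is of course not implementable on an ordinary machine):

```python
# Sketch only: `halts` stands in for a halting oracle, e.g. the one a
# Deutschian CTC is argued to provide. No real implementation exists.
def halts(program_source: str, program_input: str) -> bool:
    raise NotImplementedError("uncomputable oracle; placeholder only")

def in_re_set(searcher_source: str, x: str) -> bool:
    # An r.e. set is the domain of some program: x is a member iff that
    # program halts on x. One oracle query decides membership.
    return halts(searcher_source, x)

def in_co_re_set(searcher_source: str, x: str) -> bool:
    # A co-r.e. set is the complement of an r.e. set: flip the answer.
    return not halts(searcher_source, x)
```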
I’d teach the Church-Turing Thesis as essentially a historical curiosity rather than a cutting-edge hypothesis: a hypothesis that turned out to be false, but one through which we learned something new about the nature of computers.
so long as the infinite bitstring that computes the halt function for Turing Machines is simulated with every other bitstring
I don’t know what you mean by this.
A Turing machine with a halting oracle can (trivially) solve the halting problem. A Turing machine with access to a closed timelike curve can solve the halting problem. A Turing machine on its own cannot. And we do not have access to either a halting oracle or a closed timelike curve.
The halting oracle can be encoded (by God) as an infinite bitstring. Every finite prefix of that string appears in a list of all finite strings. That list of all finite strings can be generated by a Turing machine. But none of that matters, because we do not know and cannot know which subset of all those finite strings encodes a halting oracle.
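A minimal sketch of that enumeration, to make the point concrete: every finite prefix of the oracle’s bitstring appears somewhere in this stream, but nothing in the stream marks which strings those are.

```python
from itertools import count, product

def all_finite_bitstrings():
    # Yields "", "0", "1", "00", "01", "10", "11", "000", ...
    # Every finite prefix of the halting oracle's infinite bitstring shows
    # up here eventually; we just cannot recognize which outputs they are.
    yield ""
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)
```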
Here is a simpler example of the issue. Consider two Turing machines, each of which ignores its input tape. One prints “Yes” and halts; the other prints “No” and halts. One of these machines correctly answers the Riemann Hypothesis. Of course, we do not know which one, which makes them useless for answering the Riemann Hypothesis.
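The same point as code: exactly one of these two functions correctly answers the Riemann Hypothesis, and knowing that fact is worthless.

```python
# Two "machines" that ignore their input. Exactly one of them answers the
# Riemann Hypothesis correctly; nothing about their code tells us which.
def machine_yes(_input_tape):
    return "Yes"

def machine_no(_input_tape):
    return "No"
```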
I haven’t checked the article yet, but if I understood your comment correctly, a simpler example would have been “Turing machines with a halting oracle”, which are indeed stronger than normal Turing machines.
(Per my understanding) the Church-Turing thesis is about having a good formal definition of “the” intuitive notion of algorithm. And an important property of this intuitive notion is that a “perfectly rigorous idiot” could run any algorithm with just paper and a pen. So I would say it is wrong to take something that goes beyond that as a counterexample.
Maybe we should clarify this concept of “the intuitive notion of algorithm”.
PS: We are running dangerously close to just arguing semantics, but insofar as “the Church-Turing thesis” is a generally consensual notion, I do not think the debate is entirely pointless.
I do think this is possibly going to degenerate into a word game, but what I remember from Wikipedia is that the computable functions are exactly those that a human can effectively calculate with pen and paper. The problem is a dilemma. Either the human has real limitations on time, like actual humans, and then the set of computable functions is much smaller than the set of all total computable functions. Or we impose no such limits, and then CTC computers that solve the halting problem can be simulated by a Universal Turing Machine, so CTC spacetimes will eventually be learned by a human with unlimited time and memory, and at that point the set of computable functions is larger than the original definition, causing a contradiction.
To ensure that this is valid, we rely on a construction like this, which tells us that while a Universal Turing Machine can’t compute the halting function, it can simulate the halting function anyway: so long as the infinite bitstring that computes the halt function for Turing Machines is simulated with every other bitstring, we can emulate a computer that knows the halt function, and thus a human with unlimited time and memory must be able to compute more things than a Turing Machine.
Here’s the link with the details of this process:
https://www.amirrorclear.net/academic/ideas/simulation/index.html
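For what it’s worth, here is my reading of the flavor of that construction, as a sketch (the names and the toy computation are mine, not the page’s): dovetail runs of an oracle-consulting computation against every finite candidate string of oracle answers. Some runs use answers that agree with the true halting function, so the right behavior is “in there somewhere”, but no observer of the process can tell which run it is.

```python
from itertools import count, islice, product

def candidates():
    # "", "0", "1", "00", ...: every finite string of purported oracle answers.
    for n in count(0):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def toy_run(answers):
    # Stand-in computation that consumes `answers` as if they were oracle
    # replies. A run whose answers match the real halting function behaves
    # like a machine that knows it; nothing here identifies that run.
    for i, bit in enumerate(answers):
        yield (i, bit)

def dovetail(make_run, rounds=8, budget=5):
    runs = []
    feed = candidates()
    for _ in range(rounds):
        runs.append(make_run(next(feed)))   # admit one more candidate
        for run in runs:
            list(islice(run, budget))       # advance every live run a little

dovetail(toy_run)
```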
Thus, the set of computable functions must be larger or smaller than defined, contradicting the Church-Turing Thesis as written.
It’s okay to say that your intuitive notion of computability is exactly equivalent to the Universal Turing Machine, but then say that, instead of insisting that every notion of computability is the same thing.
OK, I think I can clarify what people generally mean when they consider the “logical” Church-Turing thesis correct.
There is an intuitive notion of computation that is somewhat consensual. It doesn’t account for limits on time (beyond the fact that everything must be finite) or for limits on space / memory.
It is also somewhat equivalent to “what a rigorous idiot could be made to do, given immortality and infinite paper/pen”.
Many people, and most computer scientists, share this intuitive idea, and at some point people thought they should make more rigorous what exactly it is.
Whenever people tried to come up with formal processes that only allow “obviously acceptable” operations with regard to this intuitive notion, they produced frameworks that are either weaker than or equivalent to the Turing machine.
The Church-Turing thesis is that the Turing machine formalism fully captures this intuitive notion of computation. It seems to be true.
With time, the word “computable” itself has come to be defined on the basis of this idea. So when we read or hear the word in a theoretical computer-science context, it now refers to the Turing machine.
Beyond this, it is indeed the case that the word “computation” has also come to be applied to other formalisms that resemble the initial notion to some degree, in general with an adjective in front to distinguish them from plain “computation”. Think of “quantum computing”, for example, which does not match the initial intuition for “computation” (though it is not necessarily stronger). These other uses of the word “computation” are not what the Church-Turing thesis is about, so they are not counterexamples.
Also, all of this concerns the “logical” Church-Turing thesis, for which what can be built in real life is irrelevant.
PS: I take it for granted that computer scientists, including those knowledgeable on the notions at hand, usually consider the thesis correct. That’s my experience, but maybe you disagree.