I am not sure I understand this line of argument. I think you are saying that, while no Turing machine can solve the halting problem in general, there is a Turing machine of length no greater than F(n) (where F is an as-yet unspecified function) that can solve the halting problem for any Turing machine whose length is ≤ n.
If so, it seems to me that this observation is trivially true; there are only ~2^n Turing machines of length ≤ n (assuming a two-symbol alphabet), so there must be a Turing machine with length proportional to 2^n that can solve the halting problem for Turing machines of length ≤ n, simply via table lookup.
While trivially true, this does not really solve the halting problem, because:

1. The halting problem requires a general solution for arbitrarily large Turing machines, and
2. While it is clear that a Turing machine exists that can solve the halting problem for Turing machines of length no greater than n, there is no general way to find that Turing machine for an arbitrary value of n.
If so, it seems to me that this observation is trivially true; there are only ~2^n Turing machines of length ≤ n (assuming a two-symbol alphabet), so there must be a Turing machine with length proportional to 2^n that can solve the halting problem for Turing machines of length ≤ n, simply via table lookup.
How does it do so via table look-up? Without running the machines, how can it form such a table?
this does not really solve the halting problem, because
If I were claiming to be able to solve the Halting Problem in general circumstances, as in solve the Halting Problem, I would be speaking nonsense. Glad we’re on the same page.
The halting problem requires a general solution for arbitrarily large Turing machines, and
Which requires a formal theory of countably infinite Kolmogorov complexity (i.e., infinite expected thermodynamic entropy), yes.
While it is clear that a Turing machine exists that can solve the halting problem for Turing machines of length no greater than n, there is no general way to find that Turing machine for an arbitrary value of n.
My understanding of Chaitin’s Theorem is that the F(n) (which he calls L) places an upper bound on the size of strings that can be proven to be algorithmically random. I have been hoping to find the time to explore the proof of this theorem in detail, in the hope of extracting a construction telling us exactly how large a halting problem (equivalently: a string-randomness problem) a given theory can solve.
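For reference, a hedged paraphrase of the theorem as I understand it (the notation, including L_T, is mine, not Chaitin’s): for any consistent, computably axiomatized theory T that can express statements about Kolmogorov complexity K, there is a constant L_T, roughly the complexity of T itself, beyond which T proves no randomness claims:

```latex
\exists\, L_T \;\; \forall s :\quad T \nvdash \; K(s) > L_T
```

even though all but finitely many strings s actually do satisfy K(s) > L_T.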
As to where we obtain such a theory, that requires, in my view, a more thorough exploration of AIT—and, potentially, ordinal logic—to give a useful answer. My current maximum-probability estimate is that any sufficiently large consistent theory capable of talking about compressing strings/machines that halt can do the job; I believe this based on Chaitin’s appearing to discuss such a result in his own description of his proofs (and on its making intuitive sense, given the universality of any one formulation of Turing-complete, oracle-free computation).
The core idea is that every theory can solve some size of halting problems, so what we actually have to do is keep learning a larger theory until we find that the halting problems of interest become solvable. AIT and the use of real-life compressors as upper bounds to Kolmogorov complexity should yield ways to apply the theory to real programs—sometimes.
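To make the compressor remark concrete, here is a minimal sketch (my own illustration, not anything from the thread) of how a real-life compressor gives a computable upper bound on Kolmogorov complexity: the decompressor plus the compressed string is a program that reproduces the original, so the compressed length bounds K(s) up to a constant.

```python
import os
import zlib

# Sketch: zlib gives a computable UPPER bound on Kolmogorov complexity.
# The true K(s) is uncomputable; a practical compressor is the best we
# can do, and it only ever errs on the high side.

def complexity_upper_bound(s: bytes) -> int:
    # Bits in the zlib-compressed form of s (constant decompressor
    # overhead ignored).
    return 8 * len(zlib.compress(s, level=9))

regular = b"ab" * 1000          # highly regular: bound far below 8*len
random_ish = os.urandom(2000)   # incompressible w.h.p.: bound near 8*len
```

The gap between the two bounds is how such tools would flag a "probably compressible" program in practice, while never certifying randomness.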
How does it do so via table look-up? Without running the machines, how can it form such a table?
It doesn’t have to form the table; a Turing machine of length on the order of 2^n can have the answer to the question “does this Turing machine halt” built in (hard-coded) for all 2^n Turing machines of length n or smaller. Such a Turing machine exists for any value of n, but finding this Turing machine is not possible for an arbitrary value of n. All functions on a finite domain are Turing computable, so the fact that a Turing machine exists that can solve the halting problem for all Turing machines of length n or less is not surprising.
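A toy sketch of the point being made (all names and table entries here are made up for illustration): the decider is trivial once the table exists; filling the table correctly is exactly the part no algorithm can do for arbitrary n.

```python
# Hypothetical sketch, not a real halting solver: for machines of
# description length <= n, a decider can simply hard-code the answers.
# Computability theory guarantees a correct table exists for each fixed n,
# but provides no way to compute its entries.

def make_bounded_halting_decider(halts_table):
    """Return a decider that answers purely by table lookup.

    halts_table: dict mapping each machine description (a bit string of
    length <= n) to True/False.
    """
    def decides_halting(machine_description):
        return halts_table[machine_description]
    return decides_halting

# Toy usage with an invented table for n = 2 (entries are illustrative only):
toy_table = {"0": True, "1": False,
             "00": True, "01": True, "10": False, "11": True}
decider = make_bounded_halting_decider(toy_table)
```

The decider itself is a constant-size wrapper; all the "knowledge" lives in the 2^n hard-coded entries, which is why its length grows like 2^n.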
The core idea is that every theory can solve some size of halting problems, so what we actually have to do is keep learning a larger theory until we find that the halting problems of interest become solvable.
My point is that the fact that a theory exists that can solve some size n of halting problem is trivially true (if by theory you mean more or less the same thing as Turing machine), but this fact is also useless, since there is no way to discover that theory (Turing machine) for an arbitrary value of n.
It doesn’t have to form the table; a Turing machine of length on the order of 2^n can have the answer to the question “does this Turing machine halt” built in (hard-coded) for all 2^n Turing machines of length n or smaller. Such a Turing machine exists for any value of n, but finding this Turing machine is not possible for an arbitrary value of n. All functions on a finite domain are Turing computable, so the fact that a Turing machine exists that can solve the halting problem for all Turing machines of length n or less is not surprising.
Oh, you mean classical existence. Yes, it is classically trivial that there exists some Turing machine blah blah blah, simply by the fact that classically, a Turing machine must either halt in finite time or never halt. However, for purposes of computability theory and algorithmic information theory, such an existence statement is useless.
My apologies, but are you making the statement just to be extra Platonist about these things?
My point is that the fact that a theory exists that can solve some size n of halting problem is trivially true (if by theory you mean more or less the same thing as Turing machine), but this fact is also useless, since there is no way to discover that theory (Turing machine) for an arbitrary value of n.
And I am speaking in a constructive sense of existence, intending to say that we can in fact discover such theories: by learning our way up the ordinal hierarchy, so to speak.
However, for purposes of computability theory and algorithmic information theory, such an existence statement is useless.
That is my point—stating that a function that maps a finite domain onto {0,1} is computable is trivially true and (as you said) useless. I was not really trying to be extra Platonist; I was just trying to point out that these sorts of finite mapping functions are not really interesting from a computability theory standpoint.
And I am speaking in a constructive sense of existence, intending to say that we can in fact discover such theories: by learning our way up the ordinal hierarchy, so to speak.
If you can formalize how you will discover such theories (i.e. how the learning our way up the ordinal hierarchy part will work), I’ll be interested in seeing what you come up with.
That’s the tough bit.