I am developing a paradox using the diagonalization process. I don’t know if it is possible, but I’m trying.
For example, suppose I diagonalize the set of all finite bit strings. What I get is an infinite string, so there is no paradox.
Then I take the set of all infinite bit strings that are the output of a finite algorithm. If I diagonalize them, the result is most likely not obtainable by a finite algorithm; otherwise it would be a paradox.
But then again, a simple finite algorithm constructs all finite bit strings, and therefore all finite algorithms. If I add a diagonalization algorithm at the end of this process… Well, let me think about what is wrong with this one…
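The enumeration step really is computable: a short program can list every finite bit string, and hence every program text. A minimal Python sketch (the length-then-lexicographic ordering is my own choice, not from the discussion):

```python
from itertools import count, islice, product

def all_finite_bit_strings():
    """Yield every finite bit string exactly once, in
    length-then-lexicographic order: '', '0', '1', '00', '01', ..."""
    yield ""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# Every finite algorithm's source code appears somewhere in this list,
# so "iterate over all finite algorithms" is the easy part.
print(list(islice(all_finite_bit_strings(), 7)))
# prints ['', '0', '1', '00', '01', '10', '11']
```

The hard part, as the replies below point out, is what you do with each entry once you have it.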
You can iterate over all finite algorithms, but you can’t reliably tell which of these algorithms will output an infinite string, or a finite string, or get stuck at some point in an infinite loop (unless you have a halting oracle).
You couldn’t even compute “the first character output by finite algorithm number n”, let alone the nth one.
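To see why even the first output character is out of reach, consider running each candidate with a step budget (“fuel”). With any finite budget, “no output yet” never distinguishes a slow program from one that is silent forever. A toy Python sketch (the generator encoding of “algorithms” is my own illustration, not a real enumeration of programs):

```python
def outputs_one_after_three():
    """Stand-in for a program that works silently for 3 steps, then emits a bit."""
    for _ in range(3):
        yield None          # internal step, no output
    yield 1                 # first (and only) output character

def silent_loop():
    """Stand-in for a program that runs forever and never outputs."""
    while True:
        yield None

def first_char_with_fuel(program, fuel):
    """Run `program` for at most `fuel` steps; return its first output
    bit, or None if none appeared in time. None is ambiguous: the
    program may output later, or never -- no finite fuel settles it."""
    for _, out in zip(range(fuel), program()):
        if out is not None:
            return out
    return None

print(first_char_with_fuel(outputs_one_after_three, 10))  # 1
print(first_char_with_fuel(outputs_one_after_three, 2))   # None (too little fuel)
print(first_char_with_fuel(silent_loop, 10_000))          # None (and would stay None forever)
```

Turning the last two `None`s into a definite answer is exactly what would require a halting oracle.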
Off the top of my head, it sounds like you’re going to run into computability problems.
I’ve annoyingly forgotten the terminology and don’t have time to look it up, and some details may be incorrect, but: some algorithms can be written in a programming language which imposes bounds on their running time before the algorithm is actually run. (The runtime is allowed to depend on the arguments to a function.) Others can only be written in a language which allows you to write infinite loops. (This is true even of some algorithms which are guaranteed to terminate on all inputs.)
The algorithms you iterate over will have to be of the first kind, or you’ll quickly run into an infinite loop. But the algorithm to do the iteration and diagonalise will be of the second kind. Thus, no paradox.
(IIRC, exactly this problem was discussed in GEB.)
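That split can be mimicked in Python. Suppose we had an enumeration of guaranteed-total functions (the concrete `total_function` below is a hypothetical stand-in, not a real enumeration of a bounded-runtime language). The diagonal function is then also total, but writing it down requires the general language, so it need not appear in the bounded enumeration it diagonalizes:

```python
def total_function(i):
    """Hypothetical enumeration of total functions N -> N. In a real
    bounded-runtime language, every enumerated program is guaranteed
    to terminate; here x -> x + i stands in for program number i."""
    return lambda x: x + i

def diagonal(n):
    """Differs from the n-th enumerated function at input n, so it is
    total yet appears nowhere in the enumeration above."""
    return total_function(n)(n) + 1

# diagonal(k) disagrees with function number k at input k, for every k:
assert all(diagonal(k) != total_function(k)(k) for k in range(100))
```

No contradiction arises because `diagonal` lives one level up from the language it enumerates.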
Essentially, I agree with you. The algorithm I defined with this diagonalization method is in fact NOT finite.
Which is just fine.
I am not doing this very seriously; only before I sleep, for example. But there might be a paradox in there after all.
There’s your problem. The process doesn’t halt.