You have a couple of confusions about what algorithms are, what “semantic properties” are, etc. I’ll try to unpack those briefly.
In my brain, this translates to: there is no general algorithm that can accurately assess non-trivial statements about any program’s behavior.
About all programs’ behavior. You can certainly tell that certain machines halt, just by running them for e.g. 10,000 steps and saying “yes” to the ones you watched halt.
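A minimal sketch of that “just run it for N steps” tactic. Here a “machine” is modeled as a Python generator (each `next()` is one step; the generator finishing means the machine halted) — an illustrative stand-in, not any standard encoding:

```python
def halts_within(machine, steps=10_000):
    """Run `machine` for at most `steps` steps.
    True  -> we watched it halt: a definite "yes".
    None  -> inconclusive: it might halt on step steps+1, or never."""
    gen = machine()
    for _ in range(steps):
        try:
            next(gen)
        except StopIteration:
            return True   # halted within the budget
    return None           # NOT a "no" -- just "don't know yet"

def halts_fast():         # halts after 3 steps
    for _ in range(3):
        yield

def runs_forever():       # never halts
    while True:
        yield

print(halts_within(halts_fast))    # True
print(halts_within(runs_forever))  # None
```

The asymmetry is already visible here: no finite budget ever lets you upgrade that `None` to a definite “no.”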
The part where I start to go mad is when I consider that turing machines can also be real physical machines, in my world.
Nitpick: Turing machines technically require an infinite storage tape, but all the computers in our universe have finite memory. You could think of us as abstractly implementing a bounded-space Turing machine, maybe?
But I’m made of cells. And all electronic computers are (for now) constructed of superatomic-level objects which we can “predict” the behavior of from fixed starting positions and basic physics. If that “semantic property” of a program is whether or not it flips the bit at 0xff812938 to 1, then how on earth is that not decidable by the laws of physics? Either the lever is going to fall or it’s not. I can’t reconcile the macro level concept with the micro level concept that my laptop is just a Rube-Goldberg machine and somewhere in there the result is determined by silicon, not abstract computer scientists.
Suppose we have infinite energy and run a perfect computer forever. We task it with deciding whether programs halt (without loss of generality wrt Rice’s Theorem, since HALT is Turing-reducible to nontrivial semantic checks). Whether the screen emits photons in a “Yes!” or in a “No.” pattern is a physical consequence in our universe, yes—that’s because this is a “semidecidable” problem: we can always test the Yes cases correctly after some length of time has passed, but we can stall out on some No instances.
But how do we know the computer will give us an answer at all? You can keep running the laws of physics forward in your prediction, yes, but there’s no guarantee that after any finite amount of time the computer has displayed an answer.
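That one-sided situation can be sketched as a semidecision procedure: loop forever, answering “Yes” when (if) the machine halts. On non-halting inputs the procedure itself never returns — there is simply no branch that can say “No.” (Same illustrative generator-as-machine model as above.)

```python
def semidecide_halt(machine):
    """Answer "Yes!" iff `machine` halts; never returns otherwise."""
    gen = machine()
    while True:
        try:
            next(gen)
        except StopIteration:
            return "Yes!"   # the screen eventually says Yes
        # note: no code path here can ever emit "No."

def halts_fast():           # halts after 3 steps
    for _ in range(3):
        yield

print(semidecide_halt(halts_fast))  # Yes!
```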
So we see that
So, what I think Rice’s Theorem is saying by extension, is that since at some points in time I can do computation and base my behavior on the results of computation, there’s no general algorithm to assess what I as a human being am going to do either.
is not true, because your future behavior could theoretically be predicted by running your physics simulator forward by a finite number of steps.
Am I supposed to believe that the outcome of a Magic the Gathering game is undecidable in some weird sense, or is this only the case about some weird pseudo-property of the game that doesn’t actually affect my ability to predict how many life points player X is going to lose after these series of card plays?
To get intuition for this, I recommend getting a better feel for X-completeness in general (eg Mario is NP-complete).
This is basically the explanation I came up with by the end of the post; that I wouldn’t necessarily know if I just hadn’t simulated the billiard balls far enough. It’s much easier for me to understand that there’s no way to prove in general for all programs that the billiard balls find themselves in a particular state ever; at least, that sounds like a much harder problem so I’m much more comfortable writing it off as unsolvable.
Watching the video now. Interesting turns of phrase this speaker’s got.
Also, a “nontrivial semantic property” is captured by testing membership in a set of partial computable functions (which isn’t the empty set or the set of all computable functions). Note that testing what function the Turing machine is implementing automatically sidesteps syntactic checks. For example, I could just want to check whether the program implements the constant-zero function, but Rice’s theorem says I can’t check this for all programs!
Because if I could, then I could find out whether machine M halts on input x. Do you see how? Do you see how the proof fails if the property in question is trivial?
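The reduction can be sketched with plain Python callables standing in for Turing machines (names here are illustrative): given M and x, build a gadget program P that ignores its own input, runs M on x, then outputs 0. P computes the constant-zero function exactly when M halts on x — so a decider for “implements constant-zero” would decide halting.

```python
def build_gadget(M, x):
    """Return a program P that ignores its input, runs M on x, then returns 0.
    P computes the constant-zero function  <=>  M halts on x."""
    def P(y):
        M(x)        # may loop forever; if so, P computes nothing at all
        return 0
    return P

def M_that_halts(z):  # stand-in for a machine that halts on every input
    return z

P = build_gadget(M_that_halts, 5)
print(P(123))  # 0 -- M_that_halts halts on 5, so P is constant-zero
```

If a total `is_constant_zero` decider existed, then `is_constant_zero(build_gadget(M, x))` would answer “does M halt on x?” — contradiction. And if the property were trivial (all functions, or none), the gadget tells you nothing, which is where the proof fails.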