> (If we assume that we can only ever get access to a finite number of atoms. If you dispute this I won’t argue with that, neither of us has a Theory of Everything to say for certain.)

Notably, this was exactly the sort of belief I was trying to show is false. Your observation about the physical universe does not matter for the argument I made here, because the question is whether, with say 2^1000000 atoms, you can solve larger problem sizes with the same code, and Turing-complete systems answer yes.
In essence, it’s a question of whether we can scale our computers with more memory and time without having to change the code or algorithms, and basically all modern computers can do this in theory.
> I think a much more interesting question is why TC machines are — despite only existing in theory — such useful models for thinking about real-world computers. There is obviously some approximation going on here, where for the vast majority of real-world problems, you can write them in such a way that they work just fine with finite RAM. (E.g. the cases where we would run out of RAM are quite easy to predict, and don’t just crop up randomly.)
Short version: because you can always extend them with more memory and time. This matters a lot in practical computing, because a general coding solution that is also cheap to work with can handle upgrades to its memory very well.
In essence, the system descriptor/code doesn’t have to change if the memory and time increase (for a finite state machine or a look-up table, it would have to change).
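To make that contrast concrete, here is a toy Python sketch (my own illustration; the function names and the table are hypothetical, not from the discussion). The general program’s code never changes as inputs grow, while the look-up table only covers the inputs it was built for and must be rebuilt (i.e. the “code” must change) to handle anything larger:

```python
# General program: the same code sums a list of any length. Giving the
# machine more memory lets it handle bigger inputs with no code change.
def total(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc

# Look-up-table "program": every input/output pair is burned in up front.
# More memory does not help unless we rebuild the table itself.
TABLE = {(1, 2): 3, (2, 2): 4}  # handles exactly these inputs

def total_lut(xs):
    return TABLE[tuple(xs)]  # KeyError for any input not baked in

print(total([1, 2, 3, 4]))  # any length works
print(total_lut((1, 2)))    # only inputs inside the table work
```

The point is only the asymmetry: `total` scales with the hardware, `total_lut` scales with a rebuild.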
> Notably, this was exactly the sort of belief I was trying to show is false
Please point out if there is a specific claim I made in my comment that you believe to be false. I said “I don’t think a TC computer can ever be built in our universe”, which you don’t seem to argue with? (If we assume that we can only ever get access to a finite number of atoms. If you dispute this I won’t argue with that; neither of us has a Theory of Everything to say for certain.)
Just to make precise why I was making that claim and what it was trying to argue against, take this quote from the post:
> yes, you can consider a finite computer in the real world to be Turing-complete
I dropped the “finite computer” constraint and interpreted the phrase “real world” to mean “it can be built in our universe”; this is how I arrived at the statement “a TC computer can be built in our universe”, which I claimed was false.
I think I understand the question now.

I actually agree that if we assume there’s a finite maximum number of atoms, we could in principle reformulate the universal computer as a finite state automaton, and if we were willing to accept the non-scalability of a finite state automaton, this could actually work.
The fundamental problem is that we would then have software that only works up to a specified memory limit, because we have essentially burned the software into the hardware of the finite automaton. If you are ever uncertain how much memory or time a problem requires, or, more worryingly, how many resources you could actually use, then your “software” for the finite automaton is no longer usable: you’d have to throw it away and build a new computer for every input length.
Turing Machine models automatically handle arbitrarily large inputs without having to throw away expensive work on developing the software.
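As a toy illustration of that difference (my own sketch; the balanced-parentheses example and function names are not from the thread): a finite-state recognizer has its memory bound burned in at build time and must be rebuilt to handle deeper inputs, while the same algorithm written with an unbounded counter needs no change at all:

```python
def balanced_fsm(s, max_depth):
    """Finite-state version: the nesting depth is capped when the machine
    is built. Inputs deeper than max_depth are (wrongly) rejected; fixing
    that means rebuilding with more states, i.e. changing the code."""
    depth = 0
    for ch in s:
        if ch == "(":
            if depth == max_depth:
                return False  # out of states: the limit is burned in
            depth += 1
        elif ch == ")":
            if depth == 0:
                return False
            depth -= 1
    return depth == 0

def balanced(s):
    """Unbounded version: identical logic, but the counter grows with
    available memory, so the same code handles any input size."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            if depth == 0:
                return False
            depth -= 1
    return depth == 0

deep = "(" * 100 + ")" * 100
print(balanced_fsm(deep, max_depth=10))  # False: exceeded the built-in bound
print(balanced(deep))                    # True: same algorithm, no rebuild
```

Give the second version more memory and it simply handles deeper inputs; the first has to be thrown away and rebuilt, which is the non-scalability being described.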
So in essence, if you want to handle the most general case, or, like me, believe unbounded atoms are possible, then you really want the universal computer architecture of modern computers.
The key property of real computers that makes them Turing-complete in theory is that they can scale with arbitrarily more memory and time without changing the system descriptor/code.
More below:
https://www.dwarkesh.com/p/adam-brown