Roughly, the (physical) Church-Turing thesis says that any physical process is computable. This is a very well established principle (as he says, quantum computation can’t compute anything that Turing machines can’t, so its discovery/invention didn’t challenge the thesis). In the talk he gives lots of examples and evidence for a sharpened version, the “extended Church-Turing thesis”, which says that efficiently realizable physical processes can be computed in polynomial time. It’s true that quantum computation can compute some things more efficiently, but we don’t have any reason to think it breaks this thesis yet.
I could build (physically) as many battery-powered NAND gates as I have space for and connect them up to solve a problem: for example, adding two n-bit numbers together needs O(n) gates. Similarly I could (in principle) build battery-powered quantum gates, and for a few problems I would need asymptotically fewer quantum gates than classical NAND gates. For an abstract problem, one might find a physical way to calculate the answer that is far better (in terms of space and time) than classical or even quantum gates, better than its complexity class permits: that would smash the extended thesis. He shows a bunch of failed attempts (using soapy water, taking advantage of black holes, …) and it’s clear that we have no reason to assume such a thing exists.
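To make the O(n) gate count concrete, here is a small sketch (my own illustration, not from the talk) of a ripple-carry adder built entirely from 2-input NAND gates, using the standard 9-NAND full-adder construction: adding two n-bit numbers takes 9n gates, linear in n.

```python
def nand(a, b):
    """The only primitive gate we allow ourselves."""
    return 1 - (a & b)

def full_adder(a, b, cin):
    """One full adder built from 9 NAND gates (a standard construction)."""
    # First half adder: s1 = a XOR b (4 NANDs)
    t1 = nand(a, b)
    s1 = nand(nand(a, t1), nand(b, t1))
    # Second half adder: s = s1 XOR cin (4 NANDs)
    t2 = nand(s1, cin)
    s = nand(nand(s1, t2), nand(cin, t2))
    # Carry out: (a AND b) OR (s1 AND cin) = NAND(t1, t2) (1 NAND)
    cout = nand(t1, t2)
    return s, cout

def add(x, y, n):
    """Add two n-bit numbers with a chain of n full adders: 9n NAND gates."""
    carry, result = 0, 0
    for i in range(n):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

For example, `add(3, 5, 4)` wires up 36 NAND gates and produces 8; doubling the input width just doubles the gate count.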
I think the human brain is a lot like the soapy water: it works well for trivial (in the sense of Zeilberger) problems like “designing a nuclear power plant” or “proving Fermat’s Last Theorem”, to the point that it’s actually mystifying (we don’t know how to make computers program from abstract specifications or prove theorems at any level even close to competitive with humans), but it doesn’t scale up: you can’t view brain size as a variable, so the brain can’t be modeled as a formal language or complexity class. A proper understanding of how brains, and intelligence in general, work will change that: either it will let us treat the size of the (trans)human brain as a variable, or it could show us that a recursively self-improving AI would start failing after it got to a certain stage (just as the soapy water failed when there were too many pegs).
I think the view that complexity classes are (or may be) more fundamental than traditional physics is probably true in a vague sense, but it’s also a slightly misleading way to state what’s happening. It’s probably similar to the situation with quantum mechanics and quantum electrodynamics (as mnielsen says in “postulates of quantum mechanics I”). QED (which describes physics) is built upon the postulates of QM (which is just a basic setup for anything “quantum”); we don’t need to know anything about QED to do quantum computation, but to build a physical quantum computer we do.
In the talk he gives lots of examples and evidence for a sharpened version “the extended Church-Turing thesis” which says that polynomial time and space physical processes can be computed in polynomial time. It’s true that quantum computation can compute some things more efficiently but we don’t have any reason to think they break this thesis yet.
Can you elaborate on this a bit? There are problems that are suspected not to be in P (such as factoring) but that are in BQP. Since factoring is in BQP, we could create a physical process that factors numbers in polynomial time, and we would be (conjecturally) unable to simulate that process in polynomial time.
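The simulation gap can be made concrete with a small sketch (my own illustration, not from the comment): the straightforward classical simulation of an n-qubit state tracks 2**n amplitudes, so even applying a single gate costs time exponential in n, while the quantum hardware does it in one physical step. No known classical algorithm avoids this blowup for arbitrary quantum circuits.

```python
import math

def apply_hadamard(state, qubit):
    """Apply a Hadamard gate to one qubit of a 2**n-entry statevector.

    Note that it touches every amplitude: one gate already costs O(2**n)
    time classically, and the statevector costs O(2**n) memory.
    """
    new = state[:]
    step = 1 << qubit
    inv_sqrt2 = 1 / math.sqrt(2)
    for i in range(len(state)):
        if i & step == 0:
            a, b = state[i], state[i | step]
            new[i] = (a + b) * inv_sqrt2
            new[i | step] = (a - b) * inv_sqrt2
    return new

n = 12                       # already 4096 amplitudes; each extra qubit doubles it
state = [0.0] * (1 << n)
state[0] = 1.0               # start in |00...0>
for q in range(n):
    state = apply_hadamard(state, q)
# state is now the uniform superposition: every amplitude equals 2**(-n/2)
```

At n = 50 the statevector alone would need petabytes of memory, which is the sense in which a polynomial-time quantum process (like Shor’s factoring) has no known polynomial-time classical simulation.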
Thanks for pointing out my mistake: where I wrote the precise but wrong “polynomial time and space physical processes can be computed in polynomial time”, Scott wrote the imprecise but not-wrong “efficiently realizable physical processes can be computed in polynomial time”, and as you demonstrated, mine doesn’t make sense.
On the other hand, Scott wrote in lecture 14 of his Quantum Computing Since Democritus course:
the argument is that quantum computing must be impossible because it violates the Extended Church-Turing Thesis. That is, we know that quantum computing can’t be possible (assuming BPP≠BQP), because we know that BPP defines the limit of the efficiently computable.
So, we have this thesis, and quantum computing violates the thesis, so it must be impossible.
More recently, there was a lot of discussion in the comments of his blog post about various different attempts at defining an “extended Church-Turing thesis”. There doesn’t seem to be a good consensus.