Consider an idealized Turing machine. It has two parts: a “tape”, which contains an infinite series of finite states, and a “head”, which sits at a particular index and stores a single value.
At each step, the Turing machine reads the state at its current index. It then looks up the combination of that state and its stored value in its instruction table, which describes how the state, the stored value, and the head’s position should be updated.
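The step rule above is easy to make concrete. Here is a minimal sketch in Python; the machine itself (a bit-flipper) and the blank symbol `_` are my own toy example, not anything from the post:

```python
# Instruction table maps (head_state, tape_symbol) ->
# (new_head_state, new_tape_symbol, head_move).
from collections import defaultdict

def run(table, tape, state="start", head=0, max_steps=1000):
    # Blank cells read "_"; the defaultdict stands in for the infinite tape.
    tape = defaultdict(lambda: "_", enumerate(tape))
    for _ in range(max_steps):
        symbol = tape[head]
        if (state, symbol) not in table:   # no instruction: halt
            break
        state, tape[head], move = table[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Hypothetical example machine: flip every bit, halt on a blank cell.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run(flip, "1011"))  # -> 0100
```

Each pass through the loop is exactly one “step” in the sense above: read, look up, rewrite, move.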
Now, if the Church-Turing thesis is true, then this metaphorical tape is sufficiently powerful to simulate not only boring things like computers, but also fancy things like black holes and (dare I say it) human intelligence!
However, I have pulled a fast one on you.
For, you see, as I have described it there is not one Turing machine, but an infinite sea of possible Turing machines, many of which are simulating your current consciousness at this very moment.
Now, based on the reasoning here, we all know that it would be completely unparsimonious and silly to imagine that an infinite number of Turing machines simply “collapse” into the one that actually describes the reality you are currently inhabiting. Rather, all of the possible Turing machines exist, and you merely observe the branch of reality in which the Turing machine happens to be simulating your current existence.
Now I know what some people will say. They will tell me to “shut up and calculate”. They will explain that Turing machine theory exists to predict observations, and that I can do this without worrying about whether or not the other branches of the Turing machine exist. They will tell me that the existence of the other branches of the Turing machine is a metaphysical question that science can know nothing about.
But those people are schmucks.
I demand to know whether my “Many Turing Machines” hypothesis is true or false. And I demand that science have an objective opinion on whether it is true or false. And I demand that it agree with me that it is indeed true, since it avoids the nonlinear “collapse” operator which I find so distasteful.
Basically, I don’t understand why people think this is a rational position to take when it comes to quantum mechanics.
P.S. I know about things like the Delayed Choice Quantum Eraser and Quantum Bomb Testers. But none of those things change the fact that fundamentally quantum physics can be simulated (to any chosen degree of approximation) using a finite-state Turing machine. This sort of feels like a “the math is hard, it must be magic” argument, as opposed to a meaningful distinction from “Many Turing Machines”.
P.P.S. In case my own viewpoint was not obvious, I think “shut up and calculate” means we only worry about things that could potentially affect our future observations, and worrying about whether or not the other branches of the multiverse “exist” is about as meaningful as worrying about how many angels could stand on the head of a pin.
I think that you are putting forward an example hypothesis that you don’t really believe in order to prove your point. Unfortunately, it isn’t clear which hypothesis you do believe, and this makes your point opaque.
From a mathematical perspective, quantum collapse is about as bad as insisting that the universe will suddenly cease to exist in n years’ time. Quantum collapse introduces a nontrivial complexity penalty; in particular, you need to pick a hypersurface of simultaneity.
The different Turing machines don’t interact at all. Physicists can split the universe into a pair of universes in the quantum multiverse, and then merge them back together in a way that lets them detect that both had an independent existence. In the quantum bomb test, without a bomb, the universes in which the photon took each path are identical, allowing interference. If the bomb does exist, there is no interference. Many worlds just says that these branches carry on existing whether or not scientists manage to make them interact again.
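The bomb-test numbers can be checked on the back of an envelope with plain complex amplitudes. This is my own toy Mach-Zehnder model, not anything from the thread: a 50/50 beam splitter sends amplitude 1/√2 straight through and i/√2 into the other arm.

```python
import math

s = 1 / math.sqrt(2)

def beam_splitter(a, b):
    """Map amplitudes on the two arms through one 50/50 beam splitter."""
    return (s * a + 1j * s * b, 1j * s * a + s * b)

# No bomb: the photon takes both arms, which recombine at the second beam
# splitter. Interference leaves the "dark" detector with amplitude 0.
a, b = beam_splitter(1, 0)
dark, bright = beam_splitter(a, b)
print(abs(dark) ** 2, abs(bright) ** 2)   # ~0.0 and ~1.0

# Live bomb in arm b: it acts as a which-path measurement. With prob 1/2
# the bomb absorbs the photon; otherwise the photon is known to be in
# arm a, and the second beam splitter splits it 50/50. Amplitudes are
# left unnormalized so the three outcome probabilities sum to 1.
a, b = beam_splitter(1, 0)
p_boom = abs(b) ** 2                      # 0.5
dark, bright = beam_splitter(a, 0)        # surviving branch only
print(p_boom, abs(dark) ** 2, abs(bright) ** 2)  # 0.5, 0.25, 0.25
```

A click at the dark detector (probability 1/4) is exactly the “bomb detected without exploding” outcome: the bomb’s mere presence destroyed the interference.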
The fact that superposed states do interact significantly on the small scale is important, because it’s the basis for believing there could be many worlds in the first place. The MTM model is completely non-interacting, so it misrepresents the physics.
The MTM model is literally computing the same thing as the MWH. Specifically, given a human brain, I compute the events observed by that same brain. Granted, this requires solving both the easy problem of consciousness and a grand unified theory. But I don’t think anyone here is seriously suggesting those are inherently non-computable functions.
I suppose a reasonable objection is that the shortest program is MWH, since I don’t have to determine when an observation happens. But if I ask for the fastest program in terms of time and memory efficiency instead, MWH is a clear loser.
MWI is more than one theory, because everything is more than one thing[*].
If you define MWI as just the evolution of the SWE (as required by the simplicity argument), then calculating a bunch of non-interacting states gets it wrong.
If you start with the idea that MWI is a bunch of non-interacting observers observing different things, then the MTM might get it right. The problem is that no one knows how to get the second kind of MWI out of the maths. That is where things like the basis problem come in.
[*]
There is an approach based on coherent superpositions, and a version based on decoherence. These are incompatible opposites.
Worlds are superpositions, so they exist at small scales; they can continue to interact with each other after “splitting”, and they can be erased. These coherent superposed states are the kind of “world” we have direct evidence for, although they seem to lack many of the properties required for a fully fledged many-worlds theory, hence the scare quotes. Call these Small Worlds.
Worlds are large, in fact universe-like. They are causally and informationally isolated from each other. This approach is often based on quantum decoherence. Call these Big Worlds.