# Multiple Worlds, One Universal Wave Function

The following post is an adaptation of a paper I wrote in 2017 that I thought might be of interest to people here on LessWrong. The paper is essentially my attempt at presenting the clearest and most cogent defense I could of the Everett interpretation of quantum mechanics—the interpretation that I very strongly believe to be true—at least using only undergraduate wave mechanics, which was the level at which I wrote the paper. My motivation for posting this now is that a colleague of mine recently mentioned that they had stumbled upon the paper and really enjoyed it, and so, realizing that I hadn’t ever shared it here on LessWrong, I figured I would put it out there in case anyone else found it similarly useful or interesting.

It’s also worth noting that LessWrong has a storied history with the Everett interpretation, with Yudkowsky also defending it quite vigorously. I actually cite Eliezer at one point in the paper—and I basically agree with what he said in his sequence—though I hope that if you bounced away from that sequence you’ll find my paper more persuasive. Also, I include Everett’s derivation of the Born rule, which is something that I think is quite important and that I expect even a lot of people very familiar with the Everett interpretation won’t have seen before.

# Abstract

We seek to present and defend the view that the interpretation of quantum mechanics is no more complicated than the interpretation of plate tectonics: that which is being studied is real, and that which the theory predicts is true. The view which holds that the mathematical formalism of quantum mechanics—without any additional postulates—is a complete description of reality is known as the Everett interpretation. We seek to defend the Everett interpretation of quantum mechanics as the most probable interpretation available. To accomplish this task, we analyze the history of the Everett interpretation, provide mathematical backing for its assertions, respond to criticisms that have been leveled against it, and compare it to its modern alternatives.

# Introduction

One of the most puzzling aspects of quantum mechanics is the fact that, when one measures a system in a superposition of multiple states, it is only ever found in one of them. This puzzle was dubbed the “measurement problem,” and the first attempt at a solution was by Werner Heisenberg, who in 1927 proposed his theory of “wave function collapse.”[1] Heisenberg proposed that there was a cutoff length, below which systems were governed by quantum mechanics, and above which they were governed by classical mechanics. Whenever quantum systems encounter the cutoff point, the theory stated, they collapse down into a single state with probabilities following the squared amplitude, or Born, rule. Thus, the theory predicted that physics just behaved differently at different length scales. This traditional interpretation of quantum mechanics is usually referred to as the Copenhagen interpretation.

From the very beginning, the Copenhagen interpretation was seriously suspect. Albert Einstein was famously displeased with its lack of determinism, saying “God does not play dice,” to which Niels Bohr quipped in response, “Einstein, stop telling God what to do.”[2] As clever as Bohr’s answer is, Einstein—with his famous physical intuition—was right to be concerned. Though Einstein favored a hidden variable interpretation[3], which was later ruled out by Bell’s theorem[4], the Copenhagen interpretation nevertheless leaves open many questions. If physics behaves differently at different length scales, what is the cutoff point? What qualifies as a wave-function-collapsing measurement? How can physics behave differently at different length scales, when macroscopic objects are made up of microscopic objects? Why is the observer not governed by the same laws of physics as the system being observed? Where do the squared amplitude Born probabilities come from? If the physical world is fundamentally random, how is the world we see selected from all the possibilities? How could one explain the applicability of quantum mechanics to macroscopic systems, such as Chandrasekhar’s insight in 1930 that modeling white dwarf stars required the entire star to be treated as a quantum system?[5]

# The Everett Interpretation of Quantum Mechanics

Enter the Everett Interpretation. In 1956, Hugh Everett III, then a doctoral candidate at Princeton, had an idea: if you could find a way to explain the phenomenon of measurement from within wave mechanics, you could do away with the extra postulate of wave function collapse, and thus many of the problems of the Copenhagen interpretation. Everett worked on this idea under his thesis advisor, Einstein-prize-winning theoretical physicist John Wheeler, who would later publish a paper in support of Everett’s theory.[6] In 1957, Everett finished his thesis “The Theory of the Universal Wave Function,”[7] published as the “‘Relative State’ Formulation of Quantum Mechanics.”[8] In his thesis, Everett succeeded in deriving every one of the strange quirks of the Copenhagen interpretation—wave function collapse, the apparent randomness of measurement, and even the Born rule—from purely wave mechanical grounds, as we will do in the “Mathematics of the Everett Interpretation” section.

Everett’s derivation relied on what was at the time a controversial application of quantum mechanics: the existence of wave functions containing observers themselves. Everett believed that there was no reason to restrict the domain of quantum mechanics to only small, unobserved systems. Instead, Everett proposed that any system, even the system of the entire universe, could be encompassed in a single, albeit often intractable, “universal wave function.”

Modern formulations of the Everett interpretation reduce his reasoning down to two fundamental ideas:[9][10][11][12][13]

1. the wave function obeys the standard, linear, deterministic Schrodinger wave equation *at all times* (the relativistic variant, to be precise), and
2. the wave function is *physically real.*

Specifically, the first statement precludes wave function collapse and demands that we continue to use the same wave mechanics for all systems, even those with observers, and the second statement demands that we accept the physical implications of doing so. The Everett interpretation is precisely that which is implied by these two statements.

Importantly, neither of these two principles are additional assumptions on top of traditional quantum theory—instead, they are simplifications of existing quantum theory, since they act only to remove the prior ad-hoc postulates of wave function collapse and the non-universal applicability of the wave equation.[11][14] The beauty of the Everett interpretation is the fact that we can remove the postulates of the Copenhagen interpretation and still end up with a theory that works.

## DeWitt’s Multiple Worlds

Removing the Copenhagen postulates had some implications that did not mesh well with many physicists’ existing physical intuitions. If one accepted Everett’s universal wave function, one was forced to accept the idea that macroscopic objects—cats, people, planets, stars, galaxies, even the entire universe—could be in a superposition of many states, just as microscopic objects could. In other words, multiple different versions of the universe—multiple worlds, so to speak—could exist simultaneously. It was for this reason that Einstein-prize-winning physicist Bryce DeWitt, a supporter of the Everett interpretation, dubbed Everett’s theory of the universal wave function the “multiworld” (or now more commonly “multiple worlds”) interpretation of quantum mechanics.[9]

While the idea of multiple worlds may at first seem strange, to Everett, it was simply an extension of the normal laws of quantum mechanics. Simultaneous superposition of states is something physicists already accept for microscopic systems whenever they do quantum mechanics—by virtue of the overwhelming empirical evidence in favor of it. Not only that, but evidence keeps coming out demonstrating superpositions at larger and larger length scales. In 1999 it was demonstrated, for example, that Carbon-60 molecules can be put into a superposition.[15] While it is unlikely that a superposition of such a macroscopic object as Schrodinger’s cat will ever be conclusively demonstrated, due to the difficulty of isolating such a system from the outside world, it is likely that the trend of demonstrating superposition at larger and larger length scales will continue. To not accept that a cat could be in a superposition, even if we can never demonstrate it, would thus be a failure of induction—a rejection of an empirically-demonstrated trend.

While the Everett interpretation ended up implying the existence of multiple worlds, this was never Everett’s starting point. The “multiple worlds” of the Everett interpretation were not added to traditional quantum mechanics as new postulates, but rather fell out from the act of *taking away* the existing ad-hoc postulates of the Copenhagen interpretation—a consequence of taking the wave function seriously as a fundamental physical entity. In Everett’s own words, “The aim is not to deny or contradict the conventional formulation of quantum theory, which has demonstrated its usefulness in an overwhelming variety of problems, but rather to supply a new, more general and complete formulation, from which the conventional interpretation can be *deduced.*”[8] Thus, it is not surprising that Stephen Hawking and Nobel laureate Murray Gell-Mann, supporters of the Everett interpretation, have expressed reservations with the name “multiple worlds interpretation,” and therefore we will continue to refer to the theory simply as the Everett interpretation instead.[16]

## The Nature of Observation

Accepting the Everett interpretation raises an important question: if the macroscopic world can be in a superposition of multiple states, what differentiates them? Stephen Hawking has the answer: “in order to determine where one is in space-time one has to measure the metric and this act of measurement places one in one of the various different branches of the wave function in the Wheeler-Everett interpretation of quantum mechanics.”[17] When we perform an observation on a system whose state is in a superposition of eigenfunctions, a version of us sees each different, possible eigenfunction. The different worlds are defined by the different eigenfunctions that are observed.

We can show this, as Everett did, just by acknowledging the existence of universal, joint system-observer wave functions.[7][8] Before measuring the state of a system in a superposition, the observer and the system are independent—we can get their joint wave function simply by multiplying together their individual wave functions. After measurement, however, the two become entangled—that is, the state of the observer becomes dependent on the state of the system that was observed. The result is that for each eigenfunction in the system’s superposition, the observer’s wave function evolves differently. Thus, we can no longer express their joint wave function as the product of their individual wave functions. Instead, we are forced to express the joint wave function as a sum of different components, one for each possible eigenfunction of the system that could be observed. These different components are the different “worlds” of the Everett interpretation, with the only difference between them being which eigenfunction of the system was observed. We will formalize this reasoning in the “The Apparent Collapse of The Wave Function” section.

We are still left with the question, however, of why we experience a particular probability of seeing some states over others, if every state that can be observed is observed. Informally, we can think of the different worlds—the different possible observations—as being “weighted” by their squared amplitudes, and which one of the versions of us we are as a random choice from that weighted distribution. Formally, we can prove that under the Everett interpretation, if an observer interacts with many systems each in a superposition of multiple states, the distribution of states they see will follow the Born rule.[7][8][18][11][19][14] A portion of Everett’s proof of this fact is included in the “The Born Probability Rule” section.

# The Mathematics of the Everett Interpretation

Previously, we asserted that universally-applied wave mechanics was sufficient, without ad-hoc postulates such as wave function collapse, to imply all the oddities of the Copenhagen interpretation. We will now prove that assertion. In this section, as per the Everett interpretation, we will accept that basic wave mechanics is obeyed for all physical systems, including those containing observers. From that assumption, we will show that the apparent phenomena of wave function collapse, random measurement, and the Born rule follow. The proofs given below are adapted from Everett’s original papers.[7][8]

## The Apparent Collapse of The Wave Function

Suppose we have a system $S$ with eigenfunctions $\{\phi_i\}$ and initial state $\psi_S = \sum_i a_i \phi_i$. Consider an observer $O$ with initial state $\psi_O$. Let $\psi_O[\phi_i, \phi_j, \dots]$ be the state of $O$ after observing eigenfunctions $\phi_i, \phi_j, \dots$ of $S$. Since we would like to demonstrate how repeated measurements see a collapsed wave function, we will assume that repeated measurement is possible, and thus that the states $\phi_i$ of $S$ remain unchanged after observation. As we are working under the Everett interpretation, we will let ourselves define a joint system-observer wave function $\Psi$ with initial configuration

$$\Psi_0 = \psi_S \psi_O = \sum_i a_i \phi_i \psi_O.$$

Then, our goal is to understand what happens to $\Psi$ when $O$ repeatedly observes $S$. Thus, we will define $\Psi_n$ to represent the state of $\Psi$ after $n$ independent observations of $S$ are performed by $O$.

Consider the simple case where $\psi_S = \phi_i$ and thus we are in initial state $\Psi_0 = \phi_i \psi_O$. In this case, by our previous definition of $\psi_O[\phi_i]$ and requirement that $\phi_i$ remain unchanged, we can write the state after the observation as $\Psi_1 = \phi_i \psi_O[\phi_i]$. Since quantum mechanics is linear, and the eigenfunctions $\phi_i$ are orthogonal, it must be that this same process occurs for each $\phi_i$.

Thus, by the principle of superposition, we can write $\Psi_1$ in its general form as

$$\Psi_1 = \sum_i a_i \phi_i \psi_O[\phi_i].$$

For the next observation, each $\psi_O[\phi_i]$ will once again see the same $\phi_i$, since it has not changed state. As previously defined, we use the notation $\psi_O[\phi_i, \phi_i]$ to denote the state of $O$ after observing $S$ in state $\phi_i$ twice. Thus, we can write $\Psi_2$ as

$$\Psi_2 = \sum_i a_i \phi_i \psi_O[\phi_i, \phi_i]$$

and more generally, we can write $\Psi_n$ as

$$\Psi_n = \sum_i a_i \phi_i \psi_O[\phi_i, \dots, \phi_i]$$

where $\phi_i$ is repeated $n$ times in $\psi_O[\phi_i, \dots, \phi_i]$.

Thus, once a measurement of $S$ has been performed, every subsequent measurement will see the same eigenfunction, even though all eigenfunctions continue to exist. We can see this from the fact that the same $\phi_i$ is repeated in each state $\psi_O[\phi_i, \dots, \phi_i]$ of $O$. In this way, we see how, despite the fact that the original wave function for $S$ is in a superposition of many eigenfunctions, once a measurement has been performed, each subsequent measurement will always see the same eigenfunction.

Note that there is no longer a single, independent state of $O$. Instead, there are many $\psi_O[\phi_i, \dots, \phi_i]$, one for each eigenfunction $\phi_i$. What does that mean? It means that for every eigenfunction $\phi_i$ of $S$, there is a corresponding state of $O$ wherein $O$ sees that eigenfunction. Thus, one is required to accept that there are many observers $O_i$, each with corresponding state $\psi_O[\phi_i, \dots, \phi_i]$, each one seeing a different eigenfunction $\phi_i$. This is the origin of the Everett interpretation’s “multiple worlds.”

From the perspective of each $O_i$ in this scenario it will appear as if $\psi_S$ has “collapsed” from a complex superposition into a single eigenfunction $\phi_i$. As we can see from the joint wave function, however, that is not the case—in fact, the entire superposition still exists. What has changed is only that $\psi_O$, the state of $O$, is no longer independent of that superposition, and has instead become entangled with it.
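As a concrete illustration of the entanglement step above, here is a small numerical toy model (our own, not from Everett's paper): a two-eigenfunction system, a two-state observer, and a measurement interaction modeled as a CNOT-style unitary that correlates the observer's state with the system's eigenfunction.

```python
import numpy as np

# Toy model: a qubit system S in superposition a*phi_1 + b*phi_2 and a
# two-state observer O. The amplitudes are made up for illustration.
a, b = 0.6, 0.8                      # |a|^2 + |b|^2 = 1
phi = np.array([a, b])               # psi_S = a*phi_1 + b*phi_2
ready = np.array([1.0, 0.0])         # psi_O before measurement

joint_before = np.kron(phi, ready)   # separable product state psi_S * psi_O

# Measurement interaction: phi_i * psi_O -> phi_i * psi_O[phi_i].
# A CNOT with the system as control plays this role here.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])
joint_after = U @ joint_before       # = a*phi_1*psi_O[phi_1] + b*phi_2*psi_O[phi_2]

# The Schmidt rank (rank of the 2x2 reshaped state vector) counts the
# branches: 1 means a product state, 2 means one branch per eigenfunction.
def schmidt_rank(state):
    return np.linalg.matrix_rank(state.reshape(2, 2))

print(schmidt_rank(joint_before))    # 1: observer independent of system
print(schmidt_rank(joint_after))     # 2: observer entangled, two "worlds"
```

The superposition is never destroyed by the interaction; the observer's state simply stops being factorable out of it.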

## The Apparent Randomness of Measurement

Suppose we now have many such systems $S$, which we will denote $S_k$ where $k \in \{1, 2, \dots\}$. Consider $\Psi$ from before, but with the modification that instead of repeatedly observing a single $S$, $O$ observes a different $S_k$ in each measurement, such that $\Psi_k$ is the joint system-observer wave function after measuring the $k$th system $S_k$.

As before, we will define the initial joint wave function $\Psi_0$ as

$$\Psi_0 = \sum_{i_1, i_2, \dots} a_{i_1 i_2 \dots}\, \phi_{i_1} \phi_{i_2} \cdots \psi_O$$

where we are summing over all possible combinations of eigenfunctions for the different systems with arbitrary coefficients for each combination.

Then, as before, we can use the principle of superposition to find $\Psi_1$ as

$$\Psi_1 = \sum_{i_1, i_2, \dots} a_{i_1 i_2 \dots}\, \phi_{i_1} \phi_{i_2} \cdots \psi_O[\phi_{i_1}]$$

since the first measurement will see the state of $S_1$. More generally, we can write $\Psi_k$ as

$$\Psi_k = \sum_{i_1, i_2, \dots} a_{i_1 i_2 \dots}\, \phi_{i_1} \phi_{i_2} \cdots \psi_O[\phi_{i_1}, \phi_{i_2}, \dots, \phi_{i_k}]$$

following the same principle, as each measurement of an $S_k$ will see the corresponding state $\phi_{i_k}$.

Thus, when subsequent measurements of identical systems are performed, the resulting sequence of eigenfunctions observed by $O$ in each $\psi_O[\phi_{i_1}, \phi_{i_2}, \dots, \phi_{i_k}]$ appears random (according to what distribution we will show in the next subsection), since there is no structure to the sequences $i_1, i_2, \dots, i_k$. This appearance of randomness is true even though the entire process is completely deterministic. If, alternatively, $O$ was to return to a previously-measured $S_k$, we would get a repeat of the first analysis, wherein $O$ would always see the same state as was previously measured.
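To make the branching structure concrete, here is a small toy computation (our own example, with made-up amplitudes) enumerating the branches after measuring several identical two-state systems in sequence: every possible observation sequence appears as its own branch, deterministically, while total squared amplitude is conserved.

```python
import itertools
import numpy as np

# M identical two-state systems, each in 0.6*phi_1 + 0.8*phi_2, measured
# one after another. After M measurements the joint wave function has one
# branch per possible observation sequence (i_1, ..., i_M), with amplitude
# a_{i_1} * ... * a_{i_M}. Nothing is random at the level of the
# universal wave function: every sequence occurs.
a = {1: 0.6, 2: 0.8}
M = 3

branches = {seq: np.prod([a[i] for i in seq])
            for seq in itertools.product([1, 2], repeat=M)}

print(len(branches))                                 # 2**M = 8 branches
print(sum(amp ** 2 for amp in branches.values()))    # ~1.0 (unitarity)
```

The "randomness" an observer reports is just the lack of structure across the branch labels, exactly as in the derivation above.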

## The Born Probability Rule

As before, consider a system $S$ in state $\psi_S = \sum_i a_i \phi_i$. To be able to talk about a probability for an observer to see some state $\phi_i$, we need some function $P(a_i)$ that will serve as a measure of that probability.

Since we know that quantum mechanics is invariant up to an overall phase, we will impose the condition on *P* that it must satisfy the equation

$$P(a_i) = P(|a_i|).$$

Furthermore, by the linearity of quantum mechanics, we will impose the condition on $P$ such that for $\alpha$ defined as

$$\alpha \phi' = \sum_i a_i \phi_i$$

(where $\phi'$ is the normalized superposition) $P$ must satisfy the equation

$$P(\alpha) = \sum_i P(a_i).$$

Together, these two conditions fully specify what function $P$ must be. Assuming $\phi'$ is normalized, such that $\langle \phi' | \phi' \rangle = 1$, it must be that

$$\alpha^* \alpha \langle \phi' | \phi' \rangle = \sum_i a_i^* a_i$$

or equivalently

$$|\alpha|^2 = \sum_i |a_i|^2$$

such that

$$|\alpha| = \sqrt{\sum_i |a_i|^2}$$

which, using the phase invariance condition that *P*(|*a*|) = *P*(*a*), gives

$$P(\alpha) = P\left(\sqrt{\sum_i |a_i|^2}\right).$$

Then, from the linearity condition, we have

$$P(\alpha) = \sum_i P(a_i)$$

which, by the phase invariance condition, is equivalent to

$$P(\alpha) = \sum_i P(|a_i|).$$

Putting it all together, we get

$$P\left(\sqrt{\sum_i |a_i|^2}\right) = \sum_i P(|a_i|)$$

then, defining a new function $g(x) = P(\sqrt{x})$, yields

$$g\left(\sum_i |a_i|^2\right) = \sum_i g\left(|a_i|^2\right)$$

which implies that $g$ must be a linear function such that $g(x) = cx$ for some constant $c$. Therefore, since $P(\sqrt{x}) = g(x) = cx$, we get

$$P(|a_i|) = c\,|a_i|^2$$

which, imposing the phase invariance condition, becomes

$$P(a_i) = c\,|a_i|^2$$

which, where the total probability $\sum_i P(a_i)$ is normalized to 1 such that $c = 1$, is the Born rule.
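As a quick numerical sanity check (ours, not part of Everett's proof), we can verify that the derived measure $P(a) = |a|^2$ does satisfy both imposed conditions for arbitrary complex amplitudes:

```python
import numpy as np

# The derived measure: P(a) = |a|^2
P = lambda a: abs(a) ** 2

rng = np.random.default_rng(0)
a = rng.normal(size=5) + 1j * rng.normal(size=5)   # arbitrary amplitudes
alpha = np.sqrt(np.sum(np.abs(a) ** 2))            # norm of the superposition

# Phase invariance: P(a_i) = P(|a_i|), for an arbitrary phase rotation
phase_ok = np.allclose([P(ai * np.exp(2.7j)) for ai in a],
                       [P(abs(ai)) for ai in a])

# Additivity: P(alpha) = sum_i P(a_i)
additive_ok = np.isclose(P(alpha), sum(P(ai) for ai in a))

print(phase_ok, additive_ok)   # True True
```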

The fact that this measure is a probability, beyond the fact that it is the only measure it could possibly be, is itself deserving of further proof. The concept of probability is notoriously hard to define, however, and without a definition of probability, it is just as meaningful to call *P* something as arbitrary as the “stallion” of the wave function as the “probability.”^{[1]} Nevertheless, for nearly every reasonable probability theory that exists, such proofs have been provided. Everett provided a proof based on the standard frequentist definition of probability[7][8], David Deutsch (Oxford theoretical physicist) has provided a proof based on game theory[18], and David Wallace (USC theoretical physicist) has provided a proof based on decision theory[11]. For any reasonable definition of probability, wave mechanics is able to show that the above measure satisfies it in the limit without any additional postulates.[19][14][20]

# Arguments For and Against the Everett Interpretation

Despite the unrivaled empirical success of quantum theory, the very suggestion that it may be *literally true as a description of nature* is still greeted with cynicism, incomprehension, and even anger.[21]

David Deutsch, 1996

## Falsifiability and Empiricism

Perhaps the most common criticism of the Everett interpretation is the claim that it is not falsifiable, and thus falls outside the realm of empirical science.[22] In fact, this claim is simply not true—many different methods for testing the Everett interpretation have been proposed, and a great deal of empirical data regarding the Everett interpretation is already available.

One such method we have already discussed: the Everett interpretation removes the Copenhagen interpretation’s postulate that the wave function must collapse at a particular length scale. Were it ever to be conclusively demonstrated that superposition was impossible past some point, the Everett interpretation would be disproved. Thus, every demonstration performed of superposition at larger and larger length scales—such as for Carbon 60 as was previously mentioned[15]—is a test of the Everett interpretation. Arguably, it is the Copenhagen interpretation which is unfalsifiable, since it makes no claim about where the boundary lies at which wave function collapse occurs, and thus proponents can respond to the evidence of larger superpositions simply by changing their theory and moving their proposed boundary up.

Another method of falsification regards the interaction between the Everett interpretation and quantum gravity. The Everett interpretation makes a definitive prediction that gravity must be quantized. Were gravity not quantized—not wrapped up in the wave function like all the other forces—and instead simply a background metric for the entire wave function, we would be able to detect the gravitational impact of the other states we were in a superposition with.[10][23] In 1957, Richard Feynman, who would later come to explicitly support the Everett interpretation[16] as well as become a Nobel laureate, presented an early version of the above argument as a reason to believe in quantum gravity, arguing, “There is a bare possibility (which I shouldn’t mention!) that quantum mechanics fails and becomes classical again when the amplification gets far enough [but] if you believe in quantum mechanics up to any level then you have to believe in gravitational quantization.”[24]

Another proposal concerns differing probabilities of finding ourselves in the universe we are in depending on whether the Everett interpretation holds or not. If the Everett interpretation is false, and the universe only has a single state, there is only one state for us to find ourselves in, and thus we would expect to find ourselves in an approximately random universe. On the other hand, if the Everett interpretation is true, and there are many different states that the universe is in, we could find ourselves in any of them, and thus we would expect to find ourselves in one which was more disposed than average towards the existence of life. Approximate calculations of the relative probability of the observed universe based on the Hartle-Hawking boundary condition strongly support the Everett interpretation.[10]

Finally, as we made a point of being clear about in the “The Everett Interpretation of Quantum Mechanics” section, the Everett interpretation is simply a consequence of taking the wave function seriously as a physical entity. Thus, it is somewhat unfair to ask the Everett interpretation to achieve falsifiability independently of the theory—quantum mechanics—which implies it.[22] If a new theory were proposed that said quantum mechanics stopped working outside of the future light cone of Earth, we would not accept it as a new physical controversy—we would say that, unless there is incredibly strong proof otherwise, we should by default assume that the same laws of physics apply everywhere. The Everett interpretation is just that default—it is only by historical accident that it happened to be discovered after the Copenhagen interpretation. Thus, to the extent that one has confidence in the universal applicability of the principles of quantum mechanics, one should have equal confidence in the Everett interpretation, since it is a logical consequence. It is in fact all the more impressive—and a testament to its importance to quantum mechanics—that the Everett interpretation manages to achieve falsifiability and empirical support despite its primary virtue being simply that it applies quantum mechanics universally.

## Simplicity

Another common objection to the Everett interpretation is that it “postulates too many universes,” which Sean Carroll, a Caltech cosmologist and supporter of the Everett interpretation, calls “the basic silly objection.”[25] At this point, it should be very clear why this objection is silly: the Everett interpretation postulates no such thing—the existence of “many universes” is an *implication,* not a *postulate,* of the theory. Opponents of the Everett interpretation, however, have accused it of a lack of simplicity on the grounds that adding in all those additional universes is unnecessary added complexity, and since by the principle of Occam’s razor the simplest explanation is probably correct, the Everett interpretation can be rejected.[26]

In fact, Occam’s razor is an incredibly strong argument in favor of the Everett interpretation. To explain this, we will first need to formalize what we mean by Occam’s razor, which will require some measure of theoretical computer science. Specifically, we will make use of Solomonoff’s theory of inductive inference: the best, most general framework we have for comparing the probability of empirically indistinguishable physical theories.[27][28][29]^{[2]} To use Solomonoff’s formalism, only one assumption is required of us: under some encoding scheme, competing theories of the universe can be modeled as programs. This assumption does not imply that the universe must be computable, only that it can be computably described, which all physical theories capable of being written down must abide by. From this assumption, and the axioms of probability theory, Solomonoff induction can be derived.[27]

Solomonoff induction tells us that, if we have a set of programs^{[3]} $\{p_i\}$ which encode for empirically indistinguishable physical theories, the probability of the theory described by a given program $p_i$ with length $|p_i|$ in bits (0s and 1s) is given by

$$P(p_i) \propto 2^{-|p_i|}$$

up to a constant normalization factor calculated across all the $p_i$ to make the probabilities sum to 1.[27] We can see how this makes intuitive sense, since if we are predicting an arbitrary system, and thus have no information about the correctness of a program implementing a theory other than its length in bits, we are forced to assign equal probability to each of the two options for each bit, 0 and 1, and thus each additional bit adds a factor of $\frac{1}{2}$ to the total probability of the program. Furthermore, we can see how Solomonoff induction serves as a formalization of Occam’s razor, since it gives us a way of calculating how much to discount longer, more complex theories in favor of shorter, simpler ones.
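For instance, here is a small sketch of how the normalized prior is computed over a set of competing theories; the program lengths here are entirely made up for illustration:

```python
# Toy Solomonoff prior: probability proportional to 2^(-length in bits),
# normalized across a set of empirically indistinguishable theories.
# The lengths below are invented purely for illustration.
lengths = {"theory_A": 500, "theory_B": 501, "theory_C": 510}

min_len = min(lengths.values())
weights = {t: 2.0 ** -(n - min_len)   # shift by min_len avoids underflow;
           for t, n in lengths.items()}  # it cancels in the normalization
Z = sum(weights.values())
probs = {t: w / Z for t, w in weights.items()}

print(probs)   # one extra bit of length halves the unnormalized weight
```

Note how theory_B, one bit longer than theory_A, receives exactly half its probability.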

Now, we will attempt to apply this formalism to assign probabilities to competing interpretations of quantum mechanics, which we will represent as elements of the set $\{T_i\}$. Let $W$ be the shortest program which computes the wave equation. Since the wave equation is a component of all quantum theories, it must be that $|W| \le |T_i|$. Thus, the smallest that any $T_i$ could possibly be is $|W|$, such that any $T_i$ of length $|W|$ is at least twice as probable as a $T_i$ of any other length. The Everett interpretation is such a $T_i$, since it requires nothing else beyond wave mechanics, and follows directly from it. Therefore, from the perspective of Solomonoff induction, the Everett interpretation is provably optimal in terms of program length, and thus also in terms of probability.

To get a sense of the magnitude of these effects, we will attempt to approximate how much less probable the Copenhagen interpretation is than the Everett interpretation. We will represent the Copenhagen interpretation *C* as made of three parts: *W*, wave mechanics; *O*, a machine which determines when to collapse the wave function; and *L*, classical mechanics. Then, where the Everett interpretation *E* is just *W*, we can write their relative probabilities as

$$\frac{P(C)}{P(E)} = \frac{2^{-(|W| + |O| + |L|)}}{2^{-|W|}} = 2^{-(|O| + |L|)}.$$

How large are *O* and *L*? As a quick Fermi estimate for *L*, we will take Newton’s three laws of motion, Einstein’s general relativistic field equation, and Maxwell’s four equations of electromagnetism as the principles of classical mechanics, for a total of 8 fundamental equations. Assume the minimal implementation for each one averages 100 bits—a very modest estimate, considering the smallest Chess program ever written is 3896 bits long.[30] Then, the relative probability is at most

$$\frac{P(C)}{P(E)} \le 2^{-(|O| + |L|)} \le 2^{-800} \approx 10^{-241}$$

which is about the probability of picking four random atoms in the universe and getting the same one each time, and is thus so small as to be trivially dismissible.
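The arithmetic behind this estimate can be checked directly; a quick sketch reproducing the numbers above (the $10^{80}$ atom count is the usual rough figure for the observable universe):

```python
import math

# 8 classical equations at ~100 bits each, per the Fermi estimate above.
extra_bits = 8 * 100
log10_ratio = -extra_bits * math.log10(2)  # log10 of the bound on P(C)/P(E)
print(round(log10_ratio))                  # -241, i.e. P(C)/P(E) <= ~10^-241

# Comparison: fixing one atom among ~10^80 and matching it in three more
# independent draws has probability (10^-80)^3 = 10^-240, the same order.
log10_four_atoms = -3 * 80
print(log10_four_atoms)                    # -240
```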

## The Arrow of Time

Another objection to the Everett interpretation is that it is time-symmetric. Since the Everett interpretation is just the wave equation, its time symmetry follows from the fact that the Schrodinger equation is time-reversal invariant, or more technically, charge-parity-time-reversal (CPT) invariant. The Copenhagen interpretation, however, is not, since wave function collapse is a fundamentally irreversible event.[31] In fact, CPT symmetry is not the only natural property that wave function collapse lacks that the Schrodinger equation has—wave function collapse breaks linearity, unitarity, differentiability, locality, and determinism.[13][12][16][32] The Everett interpretation, by virtue of consisting of nothing but the Schrodinger equation, preserves all of these properties. This is an argument in favor of the Everett interpretation, since there are strong theoretical and empirical reasons to believe that such symmetries are properties of the universe.[33][34][35][5]

Nevertheless, as mentioned above, it has been argued that the Copenhagen interpretation’s breaking of CPT symmetry is actually a point in its favor, since it supposedly explains the arrow of time, the idea that time does not behave symmetrically in our everyday experience.[31] Unfortunately for the Copenhagen interpretation, wave function collapse does not actually imply any of the desired thermodynamic properties of the arrow of time.[31] Furthermore, under the Everett interpretation, the arrow of time can be explained using the standard thermodynamic explanation that the universe started in a very low-entropy state.[36]

In fact, accepting the Everett interpretation gets rid of the need for the current state of the universe to be dependent on subtle initial variations in that low-entropy state.[36] Instead, the current state of the universe is simply one of the many different components of the wave function that evolved deterministically from that initial state. Thus, the Everett interpretation is even simpler—from a Solomonoff perspective—than was shown in the “Simplicity” section, since it forgoes the need for its program to specify a complex initial condition for the universe with many subtle variations.

# Other Interpretations of Quantum Mechanics

The mathematical formalism of the quantum theory is capable of yielding its own interpretation.[9]

Bryce DeWitt, 1970

## Decoherence

It is sometimes proposed that wave mechanics alone is sufficient to explain the apparent phenomenon of wave function collapse without the need for the Everett interpretation’s multiple worlds. The justification for this assertion is usually based on the idea of decoherence. Decoherence is the mathematical result, following from the wave equation, that tightly-interacting superpositions tend to evolve into non-interacting superpositions.[37][38] Importantly, decoherence does not destroy the superposition—it merely “diagonalizes” it, which is to say, it removes the interference terms.[37] After decoherence, one is always still left with a superposition of multiple states.[39][40] The only way to remove the resulting superposition is to assume wave function collapse, which every statistical theory claiming to do away with multiple worlds has been shown to implicitly assume.[41][19] There is no escaping the logic presented in the “The Apparent Collapse of The Wave Function” section—if one accepts the universal applicability of the wave function, one must accept the multiple worlds it implies.
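To illustrate diagonalization concretely, here is a toy model of ours (not drawn from the decoherence literature): once an environment perfectly records which branch is which, tracing it out leaves the system’s reduced density matrix diagonal, with both branches still present on the diagonal.

```python
import numpy as np

# Made-up amplitudes for a two-branch superposition.
a, b = 0.6, 0.8

# Isolated superposition: density matrix has interference (off-diagonal) terms.
sys_only = np.array([a, b])
rho_pure = np.outer(sys_only, sys_only)
print(rho_pure)          # off-diagonal entries a*b are present

# System entangled with an environment that recorded the branch:
# state a*|1, e1> + b*|2, e2>, written as a (system x environment) matrix.
entangled = np.array([[a, 0.0],
                      [0.0, b]])

# Partial trace over the environment: rho[i, j] = sum_e M[i, e] * conj(M[j, e]).
rho_reduced = entangled @ entangled.conj().T
print(rho_reduced)       # diag(|a|^2, |b|^2): interference gone,
                         # but both branches survive as a mixture
```

The superposition is "diagonalized," never removed: exactly the claim in the paragraph above.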

That is not to say that decoherence is not an incredibly valuable, useful concept for the interpretation of quantum mechanics, however. In the Everett interpretation, decoherence serves the very important role of ensuring that macroscopic superpositions—the multiple worlds of the Everett interpretation—are non-interacting, and that each one thus behaves approximately classically.[41][40] Thus, the simplest decoherence-based interpretation of quantum mechanics is in fact the Everett interpretation. From the Stanford Encyclopedia of Philosophy, “Decoherence as such does not provide a solution to the measurement problem, at least not unless it is combined with an appropriate interpretation of the theory [and it has been suggested that] decoherence is most naturally understood in terms of Everett-like interpretations.”[39] The discoverer of decoherence himself, German theoretical physicist Heinz-Dieter Zeh, is an ardent proponent of the Everett interpretation.[42][36]

Furthermore, we have given general arguments in favor of the existence of the multiple worlds implied by the Everett interpretation, which are all reasons to favor the Everett interpretation over any single-world theory. Specifically, calculations of the probability of the current state of the universe support the Everett interpretation[10], as does the fact that the Everett interpretation allows for the initial state of the universe to be simpler[36].

## Consistent Histories

The consistent histories interpretation of quantum mechanics, owing primarily to Robert Griffiths, eschews probabilities over “measurement” in favor of probabilities over “histories,” which are defined as arbitrary sequences of events.[43] Consistent histories provides a way of formalizing which classical probabilistic questions make sense in a quantum domain and which do not—that is, which are consistent. Its explanation for why this consistency always appears at large length scales is based on the idea of decoherence, as discussed above.[43][44] In this context, consistent histories is a very useful tool for reasoning about probabilities in the context of quantum mechanics, and for providing yet another proof of the natural origin of the Born rule.

Proponents of consistent histories claim that it does not imply the multiple worlds of the Everett interpretation.[43] However, since the theory is based on decoherence, there are always multiple different consistent histories, which cannot be removed via any natural history selection criterion.[45][44] Thus, just as the wave equation implies the Everett interpretation, so too does consistent histories. To see this, consider Feynman’s observation, on which consistent histories rests: the amplitude of any given final state can be calculated as the sum of the amplitudes along all the possible paths to that state.[44][46] Importantly, we know that two different histories—for example, the different branches of a Mach-Zehnder interferometer—can diverge and then later merge back together and interfere with each other. Thus, it is not in general possible to describe the state of the universe as a *single* history, since other, parallel histories can interfere and change how that state will later evolve. A history is great for describing how a state came to be, but not very useful for describing how it might evolve in the future. For that, including the other parallel histories—the full superposition—is necessary.
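The interferometer point can be made concrete with a toy sketch (my own, using a conventional 50/50 beam splitter matrix; not from the paper): the all-in-one-port output only falls out if the amplitudes along both internal paths are kept.

```python
# Toy Mach-Zehnder interferometer: a photon entering port 0 takes BOTH
# internal paths; the amplitudes recombine and interfere at the second
# beam splitter.
import math

def beamsplitter(state):
    """50/50 beam splitter acting on amplitudes (amp_path0, amp_path1)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + 1j * b), s * (1j * a + b))

# Photon enters path 0 and passes through two beam splitters.
out = beamsplitter(beamsplitter((1, 0)))
probs = [abs(x) ** 2 for x in out]

# Interference sends the photon to detector 1 with certainty:
assert probs[0] < 1e-12 and abs(probs[1] - 1) < 1e-12

# A "single history" through one arm alone would instead predict 50/50
# at each detector -- so the parallel path cannot be discarded.
```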

Once one accepts that the existence of multiple histories is necessary on a microscopic level, their existence on a macroscopic level follows—excluding them would require an extra postulate, which would make consistent histories equivalent to the Copenhagen interpretation. If such an extra postulate is not made, then the result is macroscopic superposition, which is to say, the Everett interpretation. This formulation of consistent histories without any extra postulates has been called the theory of “the universal path integral,” exactly mirroring Everett’s theory of the universal wave function.[46] The theory of the universal wave function—the Everett interpretation—is to the theory of the universal path integral as wave mechanics is to the sum-over-paths approach, which is to say that they are both equivalent formalisms with the same implications.

## Pilot Wave Theory

The pilot wave interpretation, otherwise known as the de Broglie-Bohm interpretation, postulates that the wave function, rather than being physically real, is a background which “guides” otherwise classical particles.[47] As we saw with the Copenhagen interpretation, the obvious question to ask of the pilot wave interpretation is whether its extra postulate—in this case adding in classical particles—is necessary or useful in any way. The answer to this question is a definitive no. Heinz-Dieter Zeh says of the pilot wave interpretation, “Bohm’s pilot wave theory is successful only because it keeps Schrödinger’s (exact) wave mechanics unchanged, while the rest of it is observationally meaningless and solely based on classical prejudice.”[42] As we have previously shown in the “The Mathematics of the Everett Interpretation” section, wave mechanics is capable of solving all supposed problems of measurement without the need for any additional postulates. While it is true that pilot wave theory solves all these problems as well, it does so not by virtue of its classical add-ons, but simply by virtue of including the entirety of wave mechanics.[42][48]

Furthermore, since pilot wave theory has no collapse postulate, it does not even get rid of the existence of multiple worlds. If the universe computes the entirety of the wave function, including all of its multiple worlds, then all of the observers in those worlds should experience physical reality by the act of being computed—it is not at all clear how the classical particles could have physical reality and the rest of the wave function not.[21][42] In the words of David Deutsch, “pilot-wave theories are parallel-universes theories in a state of chronic denial. This is no coincidence. Pilot-wave theories assume that the quantum formalism describes reality. The multiplicity of reality is a direct consequence of any such theory.”[21]

However, since the extra classical particles only exist in one of these worlds, the pilot wave interpretation also does not resolve the problem of the low likelihood of the observed state of the universe[10] or the complexity of the required initial condition[36]. Thus, the pilot wave interpretation, despite being strictly more complicated than the Everett interpretation—both in terms of its extra postulate and the concerns above—produces exactly no additional explanatory power. Therefore, we can safely dismiss the pilot wave interpretation on the grounds of the same simplicity argument used against the Copenhagen interpretation in the “Simplicity” section.

# Conclusion

Harvard theoretical physicist Sidney Coleman uses the following parable from Wittgenstein as an analogy for the interpretation of quantum mechanics: “‘Tell me,’ Wittgenstein asked a friend, ‘why do people always say, it was natural for man to assume that the sun went round the Earth rather than that the Earth was rotating?’ His friend replied, ‘Well, obviously because it just looks as though the Sun is going round the Earth.’ Wittgenstein replied, ‘Well, what would it have looked like if it had looked as though the Earth was rotating?’”[49] Of course, the answer is *it would have looked exactly as it actually does!* To our fallible human intuition, it seems as if we are seeing the sun rotating around the Earth, despite the fact that what we are actually seeing is a heliocentric solar system. Similarly, it seems as if we are seeing the wave function randomly collapsing around us, despite the fact that this phenomenon is entirely explained just from the wave equation, which we already know empirically is a law of nature.

It is perhaps unfortunate that the Everett interpretation ended up implying the existence of multiple worlds, since this fact has led to many incorrectly viewing the Everett interpretation as a fanciful theory of alternative realities, rather than the best, simplest theory we have as of yet for explaining measurement in quantum mechanics. The Everett interpretation’s greatest virtue is the fact that it is barely even an interpretation of quantum mechanics, holding as its most fundamental principle that the wave equation can interpret itself. In the words of David Wallace: “If I were to pick one theme as central to the tangled development of the Everett interpretation of quantum mechanics, it would probably be: the formalism is to be left alone. What distinguished Everett’s original paper both from the Dirac-von Neumann collapse-of-the-wavefunction orthodoxy and from contemporary rivals such as the de Broglie-Bohm theory was its insistence that unitary quantum mechanics need not be supplemented in any way (whether by hidden variables, by new dynamical processes, or whatever).”[11]

There is a tendency among many physicists to describe the Everett interpretation simply as one possible answer to the measurement problem. It should hopefully be clear at this point why that view should be rejected—the Everett interpretation is not simply yet another solution to the measurement problem, but rather a straightforward conclusion of quantum mechanics itself that shows that *the measurement problem should never have been a problem in the first place.* Without the Everett interpretation, one is forced to needlessly introduce complex, symmetry-breaking, empirically-unjustifiable postulates—either wave function collapse or pilot wave theory—just to explain what was *already explicable* under basic wave mechanics. The Everett interpretation is not just another possible way of interpreting quantum mechanics, but a necessary component of any quantum theory that wishes to explain the phenomenon of measurement in a natural way. In the words of John Wheeler, Everett’s thesis advisor, “No escape seems possible from [Everett’s] relative state formulation if one wants to have a complete mathematical model for the quantum mechanics that is internal to an isolated system. Apart from Everett’s concept of relative states, no self-consistent system of ideas [fully explains the universe].”[6]

# References

[1] Heisenberg, W. (1927). The actual content of quantum theoretical kinematics and mechanics. *Zeitschrift für Physik*.

[2] Anon. (1927). The Solvay Conference, probably the most intelligent picture ever taken.

[3] Einstein, A., Podolsky, B. and Rosen, N. (1935). Can quantum-mechanical description of physical reality be considered complete? *Physical Review*.

[4] Greenberger, D. M. (1990). Bell’s theorem without inequalities. *American Journal of Physics*.

[5] Townsend, J. (2010). *Quantum physics: A fundamental approach to modern physics*. University Science Books.

[6] Wheeler, J. A. (1957). Assessment of Everett’s “relative state” formulation of quantum theory. *Reviews of Modern Physics*.

[7] Everett, H. (1957). The theory of the universal wave function. *Princeton University Press*.

[8] Everett, H. (1957). “Relative state” formulation of quantum mechanics. *Reviews of Modern Physics*.

[9] DeWitt, B. S. (1970). Quantum mechanics and reality. *Physics Today*.

[10] Barrau, A. (2015). Testing the Everett interpretation of quantum mechanics with cosmology.

[11] Wallace, D. (2007). Quantum probability from subjective likelihood: Improving on Deutsch’s proof of the probability rule. *Studies in History and Philosophy of Science*.

[12] Saunders, S., Barrett, J., Kent, A. and Wallace, D. (2010). *Many worlds?: Everett, quantum theory, & reality*. Oxford University Press.

[13] Wallace, D. (2014). *The emergent multiverse*. Oxford University Press.

[14] Wallace, D. (2006). Epistemology quantized: Circumstances in which we should come to believe in the Everett interpretation. *The British Journal for the Philosophy of Science*.

[15] Arndt, M., Nairz, O., Vos-Andreae, J., Keller, C., Zouw, G. van der and Zeilinger, A. (1999). Wave-particle duality of C60 molecules. *Nature*.

[16] Price, M. C. (1995). The Everett FAQ.

[17] Hawking, S. W. (1975). Black holes and thermodynamics. *Physical Review D*.

[18] Deutsch, D. (1999). Quantum theory of probability and decisions. *Proceedings of the Royal Society of London*.

[19] Wallace, D. (2003). Everettian rationality: Defending Deutsch’s approach to probability in the Everett interpretation. *Studies in History and Philosophy of Science*.

[20] Clark, C. (2010). A theoretical introduction to wave mechanics.

[21] Deutsch, D. (1996). Comment on Lockwood. *The British Journal for the Philosophy of Science*.

[22] Carroll, S. (2015). The wrong objections to the many-worlds interpretation of quantum mechanics.

[23] Hartle, J. B. (2014). Spacetime quantum mechanics and the quantum mechanics of spacetime.

[24] Zeh, H. D. (2011). Feynman’s interpretation of quantum theory. *The European Physical Journal*.

[25] Carroll, S. (2014). Why the many-worlds formulation of quantum mechanics is probably correct.

[26] Rae, A. I. M. (2009). Everett and the Born rule. *Studies in History and Philosophy of Science*.

[27] Solomonoff, R. J. (1960). A preliminary report on a general theory of inductive inference.

[28] Soklakov, A. N. (2001). Occam’s razor as a formal basis for a physical theory.

[29] Altair, A. (2012). An intuitive explanation of Solomonoff induction.

[30] Kelion, L. (2015). Coder creates smallest chess game for computers.

[31] Bitbol, M. (1988). The concept of measurement and time symmetry in quantum mechanics. *Philosophy of Science*.

[32] Yudkowsky, E. (2008). The quantum physics sequence: Collapse postulates.

[33] Ellis, J. and Hagelin, J. S. (1984). Search for violations of quantum mechanics. *Nuclear Physics*.

[34] Ellis, J., Lopez, J. L., Mavromatos, N. E. and Nanopoulos, D. V. (1996). Precision tests of CPT symmetry and quantum mechanics in the neutral kaon system. *Physical Review D*.

[35] Agrawal, M. (2003). Linearity in quantum mechanics.

[36] Zeh, H. D. (1988). Measurement in Bohm’s versus Everett’s quantum theory. *Foundations of Physics*.

[37] Zurek, W. H. (2002). Decoherence and the transition from quantum to classical—revisited. *Los Alamos Science*.

[38] Schlosshauer, M. (2005). Decoherence, the measurement problem, and interpretations of quantum mechanics.

[39] Bacciagaluppi, G. (2012). *The role of decoherence in quantum mechanics*. Stanford Encyclopedia of Philosophy.

[40] Wallace, D. (2003). Everett and structure. *Studies in History and Philosophy of Science*.

[41] Zeh, H. D. (1970). On the interpretation of measurement in quantum theory. *Foundations of Physics*.

[42] Zeh, H. D. (1999). Why Bohm’s quantum theory? *Foundations of Physics Letters*.

[43] Griffiths, R. B. (1984). Consistent histories and the interpretation of quantum mechanics. *Journal of Statistical Physics*.

[44] Gell-Mann, M. and Hartle, J. B. (1989). Quantum mechanics in the light of quantum cosmology. *Int. Symp. Foundations of Quantum Mechanics*.

[45] Wallden, P. (2014). Contrary inferences in consistent histories and a set selection criterion.

[46] Lloyd, S. and Dreyer, O. (2015). The universal path integral. *Quantum Information Processing*.

[47] Bohm, D. J. and Hiley, B. J. (1982). The de Broglie pilot wave theory and the further development of new insights arising out of it. *Foundations of Physics*.

[48] Brown, H. R. and Wallace, D. (2005). Solving the measurement problem: De Broglie-Bohm loses out to Everett. *Foundations of Physics*.

[49] Coleman, S. (1994). Quantum mechanics in your face.

Fun fact: this paper was part of a paper contest that all undergraduate physics students at Harvey Mudd College participate in (which this paper won) for which there’s a longstanding tradition (perpetuated by the students) that each student get a random word and be challenged to include it in their paper. My word was “stallion.” ↩︎

In some of these sources, the equivalent formalism of Kolmogorov complexity is used instead. ↩︎

To be precise, these should be universal Turing machine programs. ↩︎

I used to read Lubos Motl’s blog (maybe between 2005-2010 or something?), first because I had had him as a QFT professor and liked him personally, and later because, I dunno, I found his physics posts informative and his non-physics ultra-right-wing posts weirdly entertaining and interesting in an insane way. Anyway he used to frequently post rants against the Many Worlds Interpretation, and in favor of the Copenhagen interpretation. (Maybe he still does, I dunno.) After reading those rants and sporadically pushing back in the comments, I maybe came to understand his perspective, though I could be wrong.

So, here’s my attempt to describe Lubos’s perspective (which he calls the Copenhagen interpretation) from your (and my) perspective:

Every now and then, you learn something about what Everett branch you happen to be in. For example, you peer at the spin-o-meter and it says “This electron is spin up”. Before you looked, you had written in your lab notebook that the (partial trace) density matrix for the electron was [[0.5, 0], [0, 0.5]]. But after you see the spin-o-meter, you pull out your eraser and write a new (partial trace) density matrix for the electron in your lab notebook, namely [[1, 0], [0, 0]].

That thing you just did there, with the eraser and pencil? That’s called “collapsing the wavefunction”.

So this version of “Copenhagen” is essentially the same as Everett, but with a different definition of words like “real” and “exists”. Since we don’t care about the other Everett branches, we say that whatever is happening in them is not “real” / doesn’t “exist”, and during experiments (and life) we continually track the “real / existing” part of the wavefunction, and “collapse” is the event where we throw out part of the wavefunction when discovering that it is not part of the Everett branch that we find ourselves in. (Maybe a compromise position would be saying “the other Everett branches are not real *to us*” or something.) I still don’t agree with this position, but it does seem like a more minor nitpicky terminology dispute on which reasonable people can disagree, if that’s really what it comes down to.

Yeah… to paraphrase Deutsch, that just sounds like multiple worlds in a state of chronic denial. Also, it *is* possible for other Everett branches to influence yours, the probability just gets so infinitesimally tiny as they decohere that it’s negligible in practice.

(Is this true even when we apply pressure to it (as in, can we design machines or systems that leverage this systematically)? And are there actually no macroscopic phenomena that are downstream of branches interacting? Like, I feel like one could have said such a sentence about relativity a few decades back, but it would have been pretty obviously wrong, and you end up with weird stuff like black holes if you take relativity seriously. I feel like I would be quite surprised if we ended up with no macroscopic phenomena that require explicitly modeling the interference by distant branches.)

Like I mention in the paper, the largest objects for which we’ve done this so far (at least that I’m aware of) are Carbon-60 molecules which, while impressive, are far from “macroscopic.” Preventing a superposition from decohering is really, really difficult—it’s what makes building a quantum computer so hard. That being said, there are some wacky macroscopic objects that do sometimes need to be treated as quantum systems, like neutron stars (as I mention in the paper) or black holes (though we still don’t fully understand black holes from a quantum perspective).

Ah, yeah, neutron stars do feel like a good example. And I do just recall you mentioning them.

There is some reason to think we will never see effects that depend on the other Everett branches, because we could say that a branching event has occurred precisely when the differences between the two components are no longer effectively reversible.

Motl’s point was the opposite: that MWI is Copenhagen in denial, because you keep having to get out your eraser and discard what you did not observe. (Which is relevant to the claim that MWI is simple: in terms of the minimal amount of calculation you need to do to get results, it is not simpler.)
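The “eraser” bookkeeping both of these comments describe can be sketched as a projective update of a density matrix (my own minimal sketch for the two-outcome case discussed above; the function name is hypothetical):

```python
# Conditioning a density matrix on an observed outcome: the "eraser and
# pencil" operation. This is Bayesian bookkeeping, not a new dynamical law.

def condition_on_outcome(rho, outcome):
    """Project a diagonal 2x2 density matrix onto basis state 0 or 1
    and renormalize by the Born probability of that outcome."""
    p = rho[outcome][outcome]          # Born probability of the outcome
    new = [[0.0, 0.0], [0.0, 0.0]]
    new[outcome][outcome] = rho[outcome][outcome] / p
    return new

before = [[0.5, 0.0], [0.0, 0.5]]       # notebook entry before looking
after = condition_on_outcome(before, 0) # spin-o-meter reads "spin up"
assert after == [[1.0, 0.0], [0.0, 0.0]]
```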

The confusion on the topic of interpretations comes from the failure to answer the question, what is an “interpretation” (or, more generally, a “theory of physics”) even supposed to be? What is its type signature, and what makes it true or false?

Imagine a robot with a camera and a manipulator, whose AI is a powerful reinforcement learner, with a reward function that counts the amount of blue seen in the camera. The AI works by looking for models that are good at predicting observations, and using those models to make plans for maximizing blue.

Now our AI discovered quantum mechanics. What does it mean? What kind of model would it construct? Well, the Copenhagen interpretation does a perfectly good job. The wave function evolves via the Schrodinger equation, and every camera frame there is collapse. As long as predicting observations is all we need, there’s no issue.

It gets more complicated if you want your agent to have a reward function that depends on *unobserved* parameters (things in the outside world), e.g. the number of paperclips in the universe. In this case Copenhagen is insufficient, because in Copenhagen an observable is undefined when you don’t measure it. But MWI also doesn’t give an answer: our agent cares about classical observables, so how is it supposed to read their values from the wavefunction? I have some ideas about a new interpretation that solves it, but it would be its own essay.

EDIT: More precisely, given an evolving wave function Ψ(t), a classical observable O (such as the number of paperclips) and moment of time t, we can use the Born rule to get a distribution over the values of O(t). However, what we would like is to have a distribution over *histories* (i.e. we want an element of Δ(R→R) rather than of R→Δ(R)), because our utility function might care about history in a non-trivial way, and because without being able to speak of histories it is not clear how to validate that this is “the real O” (i.e. what makes this theory the right theory?). A distribution over histories is something we can get from hidden variable theories such as de Broglie-Bohm, but there are other issues with that.

I think that physics is best understood as answering the question “in what mathematical entity do we find ourselves?”—a question that Everett is very equipped to answer. Then, once you have an answer to that question, figuring out your observations becomes fundamentally a problem of locating yourself within that object, which I think raises lots of interesting anthropic questions, but not additional physical ones.

I disagree. “in what mathematical entity do we find ourselves?” is a map-territory confusion. We are not in a mathematical entity, we use mathematics to construct *models* of reality. And, in any case, without “locating yourself within the object”, it’s not clear how you know whether your theory is true, so it’s very much pertinent to physics.

Moreover, I’m not sure how this perspective justifies MWI. Presumably, the wavefunction contains multiple “worlds”, hence you conclude that multiple worlds “exist”. However, consider an alternative universe with stochastic classical physics. The “mathematical entity” would be a probability measure over classical histories. So it can also be said to contain “multiple worlds”. But in that universe everyone would be comfortable with saying there’s just one non-deterministic world. So, you need something else to justify the multiple worlds, but I’m not sure what. Maybe you would say the stochastic universe also has multiple worlds, but then it starts looking like a philosophical assumption that doesn’t follow from physics.

Then what would you call reality? It sure seems like it’s well-described as a mathematical object to me.

Put a simplicity prior over the combined difficulty of specifying a universe and specifying you within that universe. Then update on your observations.

Not necessarily. You can mathematically well-define 1) a Turing machine with access to randomness that samples from a probability measure and 2) a Turing machine which actually computes all the histories (and then which one you find yourself in is an anthropic question). What quantum mechanics says, though, is that (1) actually doesn’t work as a description of reality, because we see interference from those other branches, which means we know it has to be (2).

I call it “reality”. It’s irreducible. But I feel like this is not the most productive direction to hash out the disagreement.

Okay, but then the separation between “specifying a universe” and “specifying you within that universe” is meaningless. Sans this separation, you are just doing simplicity-prior-Bayesian-inference. If that’s what you’re doing, the Copenhagen interpretation is what you end up with (modulo the usual problems with Bayesian inference).

I don’t see how you get (2) out of quantum mechanics.

I’m very confused by the mathematical setup. Probably it’s because I’m a mathematician and not a physicist, so I don’t see things that would be clear for a physicist. My knowledge of quantum mechanics is very very basic, but nonzero. Here’s how I rewrote the setup part of your paper as I was going along, I hope I got everything right.

You have a system S which is some (separable, complex, etc.) Hilbert space. You also have an observer system O (which is also a Hilbert space). Elements of various Hilbert spaces are called “states”. Then you have the joint system S⊗O, of which Ψ is an element, which comes with a (unitary) time-evolution E:S⊗O→S⊗O. Now if S were not being observed, it would evolve by some (unitary) time-evolution ES:S→S. We assume (though I think functional analysis gives this to us for free) that (vi)i is an orthonormal basis of eigenfunctions of ES, with eigenvalues (λi)i.

Ok, now comes the trick: we assume that observation doesn’t change the system, i.e. that the S-component of E is ES. Wait, that doesn’t make sense! E doesn’t have an “S-component”; something like an S-component makes sense only for pure states, and if you have mixed states then the idea breaks down. Ok, so we assume that E, when acting on pure states, is equal to ES. So this would give E:φ⊗ψ↦(ESφ)⊗ψφ, where ψφ is defined so that this holds. Presumably something goes wrong if we do this, so we instead require the weaker E:φi⊗ψ↦(ESφi)⊗ψi. And bingo! Since the φi are eigenfunctions, we get that E(φi⊗ψ)=φi⊗(λiψi), and let’s redefine ψi to include the λi term because why not. Now, if we extend by linearity we get that E:φ⊗ψ↦∑iaiφi⊗ψi. Applying E again gives ∑iaiφi⊗(ψi)i, and the same for further powers.

Ok, let’s interpret that last part in terms of “observations”. If we take states of the combined system S⊗O, then time-evolution maps pure states with only a vi component to pure states with only a vi component. Wait, that’s exactly what we assumed, why should we be surprised? Well yeah, but if you started out with some linear combination of eigenfunctions, these will be mapped to a linear combination of pure states, and each pure state in this linear combination evolves as assumed, which may or may not be a big deal to you. In a mixed state that is a linear combination of pure states, we call each pure state a “separate observer” or something like this. Of course, mixed states in a tensor product space *cannot* be uniquely written as a sum of pure states. However, if we take our preferred basis (vi)i and express our mixed states as pure states with respect to that basis in the S-component, this again makes sense.

So it’s super important that we have already distinguished the eigenfunctions of ES at the start, we unfortunately don’t get them out “naturally”. But I guess we learn something about consistency, in the sense that “if eigenfunctions are important, then eigenfunctions are important”.

Ok, now assume our system S is itself a tensor-product of N subsystems S1⊗⋯⊗SN, which we think of as “repeating a measurement”. Now what we get if we start with some pure-state is (in general) a mixed state which can be written as a linear combination of pure states of eigenfunctions. As the eigenfunctions of the different systems are different (they are elements of different spaces), if you start out with some non-eigenfunction in each subsystem, you’ll end up with some mixed state that contains different eigenfunctions for the different systems. The “derivation of the Born rule” doesn’t need this step with multiple systems. Basically, we can see this already with just one system. If we start with a non-eigenfunction ∑iaiφi, then this gets mapped to some linear combination of pure states via the time-evolution. As the time-evolution is unitary, and the |a_i|^2 sum to 1, we can see that each pure state has “length” |a_i|^2.
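As a sanity check on that last step, here is a minimal numerical sketch of my own (the dictionary representation and the amplitudes are illustrative, not from the comment): the interaction copies the eigenfunction index into the observer factor, and linearity then forces a superposition of branches whose weights are the |ai|².

```python
# Sketch of the measurement interaction E: phi_i (x) psi -> phi_i (x) psi_i.
# The joint state is stored as {(system_index, observer_record): amplitude}.
import math

def measure_interaction(system_amps):
    """Apply the interaction to sum_i a_i phi_i (x) psi_ready by linearity."""
    return {(i, i): a for i, a in enumerate(system_amps)}

# System starts in a non-eigenstate: a0|v0> + a1|v1>.
a = [1 / math.sqrt(3), math.sqrt(2 / 3) * 1j]
joint = measure_interaction(a)

# Result: one entangled branch per eigenfunction, observer included,
# with the Born weight |a_i|**2 on each branch.
weights = {branch: abs(amp) ** 2 for branch, amp in joint.items()}
assert abs(weights[(0, 0)] - 1 / 3) < 1e-12
assert abs(weights[(1, 1)] - 2 / 3) < 1e-12
assert abs(sum(weights.values()) - 1) < 1e-12  # unitarity preserves the norm
```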

Thanks for the great paper! I think I’ve finally understood the Everett interpretation.

I think the basic point is that if you start by distinguishing your eigenfunctions, then you naturally get out distinguished eigenfunctions. Which is kind of disappointing, because the fact that eigenfunctions are so important is what I find weirdest about QM. I mean I could accept that the Schrödinger equation gives the evolution of the wave-function, but why care about its eigenfunctions so much?

I’m not sure if this will be satisfying to you but I like to think about it like this:

Experiments show that the order of quantum measurements matters. The mathematical representation of the physical quantities needs to take this into account. One simple kind of non-commutative objects are matrices.

If physical quantities are represented by matrices, the possible measurement outcomes need to be encoded in there somehow. They also need to be real. Both conditions are satisfied by the eigenvalues of self-adjoint matrices.

Experiments show that if we immediately repeat a measurement, we get the same outcome again. So if eigenvalues represent measurement outcomes, the state of the system after the measurement must be related to them somehow. Having the eigenvectors of the matrix represent that post-measurement state is a simple realization of this.

This isn’t a derivation but it makes the mathematical structure of QM somewhat plausible to me.

Right, but (before reading your post) I had assumed that the eigenvectors somehow “popped out” of the Everett interpretation. But it seems like they are built in from the start. Which is fine, it’s just deeply weird. So it’s kind of hard to say whether the Everett interpretation is more elegant. I mean in the Copenhagen interpretation, you say “measuring can only yield eigenvectors” and in the Everett interpretation, you say “measuring can only yield eigenvectors and all measurements are done so the whole thing is still unitary”. But in the end even the Everett interpretation distinguishes “observers” somehow, I mean in the setup you describe there isn’t any reason why we can’t call the “state space” the observer space and the observer “the system being studied” and then write down the *same* system from the other point of view...

The “symmetric matrices ↔ real eigenvalues” correspondence is of course important, this is essentially just the spectral theorem, which tells us that real linear combinations of orthogonal projections *are* symmetric matrices (and vice versa).

Nowadays matrices are seen as “simple non-commutative objects”. I’m not sure if this was true when QM was being developed. But then again, I’m not really sure how linear QM “really” is. I mean all of this takes place on vectors with norm 1 (and the results are invariant under change of phase), and once we quotient out the norm, most of the linear structure is gone. I’m not sure what the correct way to think about the phase is. On one hand, it seems like a kind of “fake” unobservable variable and it should be permissible to quotient it out somehow. On the other hand, the complex-ness of the Schrödinger equation seems really important. But is this complexness a red herring? What goes wrong if we just take our “base states” as discrete objects and try to model QM as the evolution of probability distributions over ordered pairs of these states?
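One way to see what goes wrong with the probability-distribution picture (a toy example of my own, not from this discussion): probabilities, unlike amplitudes, cannot cancel, so a Hadamard applied twice returns to the initial state with certainty, while a fair coin applied twice stays mixed forever.

```python
# Amplitudes vs. probabilities: only the former can interfere.
import math

s = 1 / math.sqrt(2)

def hadamard(amps):
    """Hadamard gate on amplitudes (a, b); H applied twice is the identity."""
    a, b = amps
    return (s * (a + b), s * (a - b))

def coin_flip(probs):
    """The closest stochastic analogue: a 50/50 mixing of probabilities."""
    p, q = probs
    return (0.5 * (p + q), 0.5 * (p + q))

# Quantum: |0> -> equal superposition -> back to |0> with certainty.
amps = hadamard(hadamard((1.0, 0.0)))
assert abs(abs(amps[0]) ** 2 - 1.0) < 1e-12

# Probabilistic: once mixed, always mixed; nothing can unmix it.
probs = coin_flip(coin_flip((1.0, 0.0)))
assert probs == (0.5, 0.5)
```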

This is a bit of a tangent but decoherence isn’t exclusive to the Everett interpretation. Decoherence is itself a measurable physical process independent of the interpretation one favors. So explanations which rely on decoherence are part of all interpretations.

In the derivations of decoherence you make certain approximations which loosely speaking depend on the environment being big relative to the quantum system. If you change the roles these approximations aren’t valid any more. I’m not sure if we are on the same page regarding decoherence, though (see my other reply to your post).

You might be interested in Lucien Hardy’s attempt to find a more intuitive set of axioms for QM compared to the abstractness of the usual presentation: https://arxiv.org/abs/quant-ph/0101012

Isn’t the whole point of the Everett interpretation that there is no decoherence? We have a Hilbert space for the system, and a Hilbert space for the observer, and a unitary evolution on the tensor product of the two spaces. With these postulates (and a few more), we can start with a pure state and end up with some mixed tensor in the product space, which we then interpret as being “multiple observers”, right? I mean, this is how I read your paper.

We are surely not on the same page regarding decoherence, as I know almost nothing about it :)

The arxiv-link looks interesting, I should have a look at it.

Yes, the coherence-based approach (Everett’s original paper, early MWI) is quite different to the decoherence-based approach (Dieter Zeh, post 1970).

Deutsch uses the coherence based approach, while most other many worlders use the decoherence based approach.

He absolutely does establish that quantum computing is superior to classical computing, that underlying reality is not classical, and that the superiority of quantum computing requires some extra structure to reality. What the coherence-based approach does not establish is whether the extra structure adds up to something that could be called “alternate worlds” or parallel universes, in the sense familiar from science fiction.

In the coherence-based approach, “worlds” are coherent superpositions. That means they exist at small scales, they can continue to interact with each other after “splitting”, and they can be erased. These coherent superposed states are the kind of “world” we have direct evidence for, although they seem to lack many of the properties required for a fully fledged many-worlds theory, hence the scare quotes.

In particular, if you just model the wave function, the only results you will get represent every possible outcome. In order to match observation, you will have to keep discarding unobserved outcomes and renormalising, as you do in every interpretation. It’s just that that extra stage is performed manually, not by the programme.

I don’t know if it would make things clearer, but questions about why eigenvectors of Hermitian operators are important can basically be recast as one question of why orthogonal states correspond to mutually exclusive ‘outcomes’. From that starting point, projection-valued measures let you associate real numbers to various orthogonal outcomes, and that’s how you make the operator with the corresponding eigenvectors.
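To make this construction concrete, here is a minimal numerical sketch (the specific states and outcome values are my own toy choices, not from the discussion): attach a real number to each of two orthonormal “outcome” states via orthogonal projectors, and the resulting operator is Hermitian with exactly those eigenvectors.

```python
import numpy as np

# Two orthonormal "outcome" states and the real numbers we attach to them
# (hypothetical example values).
v1 = np.array([1.0, 1.0]) / np.sqrt(2)
v2 = np.array([1.0, -1.0]) / np.sqrt(2)
a1, a2 = 3.0, 5.0  # measurement outcomes (eigenvalues)

# Projection-valued measure: one orthogonal projector per outcome.
P1 = np.outer(v1, v1)
P2 = np.outer(v2, v2)

# The observable is the real linear combination of the projectors.
A = a1 * P1 + a2 * P2

# A is Hermitian, and its eigenvectors are exactly the outcome states.
assert np.allclose(A, A.conj().T)
assert np.allclose(A @ v1, a1 * v1)
assert np.allclose(A @ v2, a2 * v2)
```

This is just the spectral theorem run in reverse: pick the orthogonal outcomes and their labels first, and the Hermitian operator follows.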

As for why orthogonal states are important in the first place, the natural thing to point to is the unitary dynamics (though there are also various more sophisticated arguments).

Yes, I know all of this, I’m a mathematician, just not one researching QM. The arxiv link looks interesting, but I have no time to read it right now. The question isn’t “why are eigenvectors of Hermitian operators interesting”, it is “why would we expect a system doing something as reasonable as evolving via the Schrödinger equation to do something as unreasonable as to suddenly collapse to one of its eigenfunctions”.

I guess I don’t understand the question. If we accept that mutually exclusive states are represented by orthogonal vectors, and we want to distinguish mutually exclusive states of some interesting subsystem, then what’s unreasonable with defining a “measurement” as something that correlates our apparatus with the orthogonal states of the interesting subsystem, or at least as an ideal form of a measurement?

I think my question isn’t really well-defined. I guess it’s more along the lines of “is there some ‘natural seeming’ reasoning procedure that gets me QM”.

And it’s even less well-defined as I have no clear understanding of what QM is, as all my attempts to learn it eventually run into problems where something just doesn’t make sense—not because I can’t follow the math, but because I can’t follow the interpretation.

Yes, this makes sense, though “mutually exclusive states are represented by orthogonal vectors” is still really weird. I kind of get why Hermitian operators make sense here, but then we apply the measurement and the system collapses to one of its eigenfunctions. Why?

If I understand what you mean, this is a consequence of what we defined as a measurement (or what’s sometimes called a pre-measurement). Taking the tensor product structure and density matrix formalism as a given, if the interesting subsystem starts in a pure state, the unitary measurement structure implies that the reduced state of the interesting subsystem will generally be a mixed state after measurement. You might find parts of this review informative; it covers pre-measurements and also weak measurements, and in particular talks about how to actually implement measurements with an interaction Hamiltonian.
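A toy sketch of this pre-measurement story (an assumed minimal model, not taken from the review): a CNOT-style unitary correlates a pointer with the system basis, and tracing out the pointer leaves the initially pure system in a mixed state.

```python
import numpy as np

# Qubit system in a pure superposition, apparatus pointer starting in |0>.
a, b = 0.6, 0.8                      # amplitudes with |a|^2 + |b|^2 = 1
system = np.array([a, b], dtype=complex)
pointer = np.array([1, 0], dtype=complex)

# Unitary "pre-measurement" correlating the pointer with the system basis
# (a CNOT with the system as control, in the |s>|p> ordering).
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=complex)

psi = U @ np.kron(system, pointer)   # entangled post-measurement state

# Reduced state of the system: trace out the pointer.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_sys = np.trace(rho, axis1=1, axis2=3)

# The system started pure, but its reduced state is now mixed (diagonal).
assert np.allclose(rho_sys, np.diag([abs(a)**2, abs(b)**2]))
```

No collapse postulate appears anywhere: the purely unitary interaction is enough to make the subsystem’s reduced state look like a classical mixture over the eigenbasis.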

You could also turn this question around. If you find it somewhat plausible that self-adjoint operators represent physical quantities, that eigenvalues represent measurement outcomes, and that eigenvectors represent states associated with these outcomes (per the arguments I have given in my other post), one could picture a situation where systems hop from eigenvector to eigenvector through time. From this point of view, continuous evolution between states is the strange thing.

The paper by Hardy I cited in another answer to you tries to make QM as similar to a classical probabilistic framework as possible, and the sole difference between his two frameworks is that there are continuous transformations between states in the quantum case. (But notice that he works in a finite-dimensional setting which doesn’t easily permit important features of QM like the canonical commutation relations.)

Well yeah, sure. But continuity is a much easier pill to swallow than “continuity only when you aren’t looking”.

This and this don’t sound correct to me.

The basis in which the diagonalization happens isn’t put in at the beginning. It is determined by the nature of the interaction between the system and its environment. See “environment-induced superselection”, or “einselection” for short.

Ok, but the OP of the post above starts with “Suppose we have a system S with eigenfunctions {φi}”, so I don’t see why (or how) they should depend on the observer. I’m not claiming these are just arbitrary functions. The point is that requiring the time-evolution on pure states of the form ψ⊗φi to map to pure states of the same kind is an arbitrary choice that distinguishes the eigenfunctions. Why can’t we choose any other orthonormal basis at this point, say some ONB (wi)i, and require that wi⊗ψ↦ESwi⊗ψi, where ψi is defined so that this makes sense and is unitary? (I guess this is what you mean by “diagonalization”, but I dislike the term because if we choose a non-eigenfunction orthonormal basis the construction still “works”, the representation just won’t be diagonal in the first component.)

FYI, the SEP article on decoherence in QM is not anonymous, but rather by Guido Bacciagaluppi, which you can find by scrolling to the bottom.

Thanks—should be fixed now. Dunno how I missed that.

Accepting that probability is some function of the magnitude of the amplitude, why should it be linear exactly under orthogonal combinations?

Everett argued in his thesis that the unitary dynamics motivated this:

He made the analogy with Liouville’s theorem in classical dynamics, where symplectic dynamics motivated the Lebesgue measure on phase space.

I reply with the same point about orthogonality: Why should (2,1) split into one branch of (2,0) and one branch of (0,1), not into one branch of (1,0) and one branch of (1,1)? Only the former leads to probability equaling squared amplitude magnitude.

(I’m guessing that classical statistical mechanics is invariant under how we choose such branches?)

Again, it’s because of unitarity.

As Everett argues, we need to work with normalized states to unambiguously define the coefficients, so let’s define normalized vectors v1=(1,0) and v2=(1,1)/sqrt(2). (1,0) has an amplitude of 1, (1,1) has an amplitude of sqrt(2), and (2,1) has an amplitude of sqrt(5).

(2,1) = v1 + sqrt(2) v2, so we need M[sqrt(5)] = M[1] + M[sqrt(2)] for the additivity of measures. Now let’s do a unitary transformation on (2,1) to get (1,2) = −1 v1 + 2 sqrt(2) v2 which still has an amplitude of sqrt(5). So now we need M[sqrt(5)] = M[2 sqrt(2)] + M[-1] = M[2 sqrt(2)] + M[1]. This can only work if M[2 sqrt(2)] = M[sqrt(2)]. If one wanted a strictly monotonic dependence on amplitude, that’d be the end. We can keep going instead and look at the vector (a+1, a) = v1 + a sqrt(2) v2, rotate it to (a, a+1) = -v1 + (a+1) sqrt(2) v2, and prove that M[(a+1) sqrt(2)] = M[a sqrt(2)] for all a. Continuing similarly, we’re led inevitably to M[x] = 0 for any x. If we want a non-trivial measure with these properties, we have to look at orthogonal states.
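For what it’s worth, the decompositions used in this argument check out numerically; here is a quick sketch verifying each step.

```python
import numpy as np

# Normalized vectors from the argument above.
v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0]) / np.sqrt(2)

# (2,1) = 1*v1 + sqrt(2)*v2, with total amplitude sqrt(5).
assert np.allclose(v1 + np.sqrt(2) * v2, [2, 1])
assert np.isclose(np.linalg.norm([2, 1]), np.sqrt(5))

# Rotating (2,1) to (1,2) keeps the norm, but the coefficients
# become -1 and 2*sqrt(2).
assert np.allclose(-1 * v1 + 2 * np.sqrt(2) * v2, [1, 2])
assert np.isclose(np.linalg.norm([1, 2]), np.sqrt(5))

# The general step: (a+1, a) = v1 + a*sqrt(2)*v2 rotates to
# (a, a+1) = -v1 + (a+1)*sqrt(2)*v2, which forces
# M[(a+1)*sqrt(2)] = M[a*sqrt(2)] for every a.
a = 7.0
assert np.allclose(v1 + a * np.sqrt(2) * v2, [a + 1, a])
assert np.allclose(-v1 + (a + 1) * np.sqrt(2) * v2, [a, a + 1])
```

Since each rotation preserves the total amplitude while shifting weight between the non-orthogonal components, additivity over such components would force the measure to be constant, hence trivial.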

We don’t lose unitarity just by choosing a different basis to represent the mixed states in the tensor-product space.

I don’t see how that relates to what I said. I was addressing why an amplitude-only measure that respects unitarity and is additive over branches has to use amplitudes for a mutually orthogonal set of states to make sense. Nothing in Everett’s proof of the Born rule relies on a tensor product structure.

Then I have misunderstood Everett’s proof of the Born rule. Because the tensor product structure seems absolutely crucial for this, as you just can’t get mixed states without a tensor product structure.

I will amend my statement to be more precise:

Everett’s proof that the Born rule measure (amplitude squared for orthogonal states) is the only measure that satisfies the desired properties has no dependence on tensor product structure.

Everett’s proof that a “typical” observer sees measurements that agree with the Born rule in the long term uses the tensor product structure and the result of the previous proof.

Yeah, that’s a great argument—Everett’s thesis always has the answers.

Linearity is a fundamental property of quantum mechanics. If I’m trying to just describe it in wave mechanics terms, I would say that the linearity of quantum mechanics derives from the fact that the wave equation describes a linear system and thus solutions to it must obey the (general, mathematical) principle of superposition.

It’d be fine if it were linear in general, but it’s not for combinations that aren’t orthogonal. Suppose a is drawn from R². P(√2) = P(|(1,1)|) = P(1,1) = P(1,0) + P(0,1) = 2·P(|(1,0)|) = 2·P(1), which agrees with your analysis, but P(√5) = P(|(2,1)|) = P(2,1) ≠ P(1,0) + P(1,1) = 3·P(1) doesn’t add up.
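The arithmetic here is easy to check directly: with P(v) = |v|², additivity holds for orthogonal decompositions but fails for the non-orthogonal split of (2,1) into (1,0) + (1,1).

```python
import numpy as np

# Born-rule measure: squared amplitude.
sq = lambda v: np.linalg.norm(v) ** 2

# Orthogonal split of (2,1): squared amplitudes add.
assert np.isclose(sq([2, 1]), sq([2, 0]) + sq([0, 1]))      # 5 = 4 + 1

# Non-orthogonal split (2,1) = (1,0) + (1,1): additivity fails.
assert not np.isclose(sq([2, 1]), sq([1, 0]) + sq([1, 1]))  # 5 != 1 + 2
```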

It’s been a while since I’ve done any wave mechanics, but I’ll try to take a crack at this. The Schrödinger equation describes a linear PDE such that the sum of any two solutions is also a solution and any constant multiple of a solution is also a solution. Furthermore, the Schrödinger equation just takes the form ĤΨ = EΨ, thus “solutions to the Schrödinger equation” is equivalent to “eigenfunctions of the Hamiltonian.” Thus, if the ϕi are eigenfunctions of the Hamiltonian with eigenvalues Ei, then ∑i aiϕi must also be an eigenfunction of the Hamiltonian. This raises a problem for any theory with P nonlinear across a sum of eigenfunctions, however, because it lets me change bases into an equivalent form with a potentially different result.

If a1 is 2 and phi1 has eigenvalue 3, and a2 is 4 and phi2 has eigenvalue 5, then 2*phi1 + 4*phi2 is mapped to 6*phi1 + 20*phi2 and is therefore not an eigenfunction.

Ah, I see the confusion. Since we’re in a wave mechanics setting, I should have written ĤΨ = iħ ∂Ψ/∂t rather than ĤΨ = EΨ.
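The distinction is easy to see numerically. Here is a toy two-level sketch (my own construction, with ħ = 1) using the numbers from the parent comment: the superposition solves the time-dependent Schrödinger equation even though it is not an eigenfunction of Ĥ.

```python
import numpy as np

# Toy Hamiltonian with eigenvalues 3 and 5 in the eigenbasis (phi1, phi2);
# the state starts as 2*phi1 + 4*phi2.
H = np.diag([3.0, 5.0])
psi0 = np.array([2.0, 4.0], dtype=complex)

# psi(t) = 2 e^{-3it} phi1 + 4 e^{-5it} phi2 solves i dpsi/dt = H psi:
psi = lambda t: np.array([2 * np.exp(-3j * t), 4 * np.exp(-5j * t)])
t, dt = 1.3, 1e-6
dpsi_dt = (psi(t + dt) - psi(t - dt)) / (2 * dt)  # central difference
assert np.allclose(1j * dpsi_dt, H @ psi(t), atol=1e-5)

# ...even though psi(0) is not an eigenvector of H:
# H psi0 = (6, 20), which is not proportional to (2, 4).
assert not np.allclose(H @ psi0, 3.0 * psi0)
```

So linearity of the time-dependent equation is what makes superpositions of eigenfunctions legitimate states, without their being eigenfunctions themselves.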

Sorry, but the Copenhagen interpretation, with the important proviso that observables, not ‘the wavefunction’, are what’s real, is presently the best ‘interpretation’ of quantum mechanics, because it’s the only one that actually works in all situations where QM is applied.

As someone wishing to understand reality, you are of course free to speculate that the wavefunction is a real thing and not just a step in a calculation, and that it is some kind of multiverse. But if you then wish to proclaim that this is obviously the truth, then the onus is on you, and MWI advocates in general, to exhibit a coherent theory with the rigor of pre-quantum physics. Which basis do you use in obtaining multiple worlds from a single wavefunction? How do you deal with relativity? How do you get the Born rule? MWI advocates give contradictory answers to these questions, or incoherent answers, or no answers at all. In your essay I only see the third question addressed.

Without coherent answers to questions like these, MWI is simply not a self-sufficient theory; it’s just rhetoric—just ‘words’. Any actual application of QM still requires the Copenhagen approach.

The consistent histories or decoherent histories formalism is a working, relativistic, computational framework that also has a many-worlds flavor. Perhaps it will eventually give rise to a true many-worlds theory. But for now it’s actually just the Copenhagen interpretation for quantum cosmology. One way to see this is that, just like in ordinary quantum mechanics, the ‘user’ of the consistent histories formalism gets to choose which observables they care about.

Any diagonal basis—the whole point of decoherence is that the wavefunction evolves into a diagonalizable form over time.

Just use the Relativistic Schrodinger equation.

I asked

evhub replied

Then my next question would be, exactly when in this evolution does one world become many?

I also asked

evhub replied

In relativity, wavefunctions will only be defined with respect to a particular reference frame. You have to say which spacelike surfaces you are treating as surfaces of simultaneity; only then are you equipped to talk about e.g. EPR states. (The technical exception to this is asymptotic states at spacelike infinity.)

In relativistic quantum field theory, the wave equations have new meanings, they are now operator equations. You’re no longer talking about waves with definite values at space-time points (x,t), and a differential equation describing how those values vary. Instead you are talking about “observables” at space-time points (x,t), and operators which formally represent those observables, and the wave equation describes the algebraic relations among those operators; something which empirically translates into relationships among the observables, such as the uncertainty principle.

I don’t know how clear that explanation is, but the significant thing is that the field operators are consistent with relativity because they are anchored at individual space-time points, whereas wavefunctions are defined only with respect to a particular reference frame. The point being that this is a problem for an ontological interpretation which starts by saying that wavefunctions are what’s real.

See also discussion here; I’ll copy it for convenience:

Sometimes you find that the wavefunction |ψ⟩ is the sum of a discrete number of components |ψ⟩ = |ψ1⟩ + |ψ2⟩ + ⋯, with the property that ⟨ψi|A|ψj⟩ ≈ 0 for i ≠ j for any relevant observable A. (Here, “≈ 0” also includes things like “has a value that varies quasi-randomly and super-rapidly as a function of time and space, such that it averages to 0 for all intents and purposes”, and “relevant observable” likewise means “observable that might come up in practice, as opposed to artificial observables with quasi-random super-rapidly-varying spatial and time-dependence, etc.”.)

When that situation comes up, if it comes up, you can start ignoring cross-terms, and calculate the time-evolution and other properties of the different |ψi⟩ as if they had nothing to do with each other, and that’s where you can use the term “branch” to talk about them. There isn’t a sharp line for when the cross-terms are negligible enough to properly use the word “branch”, but there are exponential effects such that it’s very clearly appropriate in the real-world cases of interest.
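One way to see those “exponential effects” in a toy model (my own construction, not from the comment): if the two branches imprint slightly different states on each of n environment qubits, the environment overlap that controls the cross-terms falls off like cos(θ)^n.

```python
import numpy as np

# Per-qubit environment states for the two branches: each environment
# qubit is rotated by a small angle theta between the branches.
theta = 0.3
e1 = np.array([1.0, 0.0])
e2 = np.array([np.cos(theta), np.sin(theta)])

def env_overlap(n):
    """Overlap of the two branch environments with n environment qubits.

    For tensor products, the total overlap is the product of the
    per-qubit overlaps: <E1|E2> = cos(theta)^n.
    """
    return np.dot(e1, e2) ** n

# Cross-terms are suppressed exponentially in the environment size.
assert abs(env_overlap(1)) > 0.95      # one qubit: branches barely distinct
assert abs(env_overlap(100)) < 1e-1    # modest environment: mostly decohered
assert abs(env_overlap(1000)) < 1e-19  # large environment: branches effectively independent
```

Nothing here ever makes the overlap exactly zero, which is exactly why “branch” has no sharp boundary, but the suppression is so fast that the word is unambiguous in practice.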

Well, it seems like the most important part of your answer comes in a subsequent comment

As far as I am concerned, that renders the theory unviable. We-here (as opposed to our copies in slightly divergent branches) inhabit a particular world. We definitely exist, therefore the object in the theory corresponding to our existence must also definitely exist; therefore if its existence is only a matter of degree or definition, then the theory is wrong.

But at least you have clarified the kind of MWI that you are talking about—worlds are defined only vaguely or exactly, and cannot be counted. This is not the case in all forms of MWI, e.g. see “many interacting worlds”.

Do you have anything to say about the criticism from relativity? That in relativistic quantum field theory, wavefunctions only exist in the context of a particular frame, and so can’t be ontologically fundamental?

I guess I don’t really understand what you’re getting at. For example, displacement and 4-velocity and electromagnetic 4-potential are all 4-vectors, such that their components are different in different frames. Whereas, say, the rest mass or electric charge of a particle is a Lorentz scalar, the same in every frame. Is your position that Lorentz scalars have a special status that Lorentz 4-vectors, 4-tensors, etc. don’t have, that allows them to be “ontologically fundamental”? If so, why? I haven’t ever thought of Lorentz scalars having a special status, and I don’t find that notion intuitive. Or sorry if I’m misunderstanding.

A wavefunction is spatially extended. Your description of MWI involves tracking how the properties of a wavefunction change over time. In relativity, that’s going to require choosing a reference frame, a particular division of space-time into space and time.

In a Copenhagen approach to, say, particle physics, that doesn’t matter, because everything that is frame-dependent vanishes by the end of the calculation (as does everything that is gauge-dependent). But I don’t see how you can reify wavefunctions without also having a preferred reference frame.

In quantum field theory the wave function is an operator at each point in spacetime, and it works out that everything is consistent with experiments across reference frame changes and nothing travels faster than the speed of light, etc. That’s all experimentally established. Can you say again what’s the problem?

I mean, velocity is frame-dependent, right? You can measure velocity, it doesn’t vanish at the end of the calculation… It’s different in different reference frames, of course, and that’s fine, because its reference-frame-dependence is consistent with everything else and with experiments. So what do you mean? Sorry if I’m just not understanding you here, you can try again...

Hmm, I guess you could make it clearer by focusing on gauge dependence. “The wave function is gauge dependent, so how can you say it’s “real”?” Is that similar to your argument? If so, I guess I’m sympathetic to that argument, and I would say that the “real” thing is the equivalence class of wave functions up to gauge transformations, or something like that...

The point seems so simple to me, I am having trouble expressing it… A wavefunction is the instantaneous state of a quantum system. It is extended spatially. In relativistic space-time, to talk about the instantaneous state of an extended object, you have to define simultaneity. This means choosing a particular decomposition of space-time into spacelike hypersurfaces that are treated as surfaces of simultaneity. In a relativistic universe, you cannot talk about finite time evolution of spatially extended wavefunctions without first breaking space-time into space and time.

In particle physics a la Copenhagen, there is no ontological commitment to wavefunctions as things that exist. They are just part of a calculation. But we are told that in MWI, the universal wavefunction is real and it is a superposition of worlds. As I have just argued, you can’t do what you want to do—study how this wavefunction evolves over time—without first breaking space-time into space and time, so that you have the hypersurfaces of simultaneity on which the wavefunction is defined. So it seems that belief in the wavefunction as something real, requires belief in an ontologically preferred frame, with respect to which that wavefunction’s time evolution is defined.

Is that any clearer?

Hmm. Again, “the universal wavefunction is real” is part of the theory but “it is a superposition of worlds” is not; the latter is just a way to talk loosely about particular situations that sometimes come up. I don’t think that people in different inertial reference frames have to agree about how many worlds there are; indeed, I don’t even think people in the same inertial reference frame have to agree about how many worlds there are. It’s not part of the theory. The only other thing that is part of the theory is some kind of indexical axiom, like (I think one version is) “if the complex amplitude for me having a certain brain state approaches zero, then the probability that I will find myself experiencing having that brain state also approaches zero”, or things like that, I think.

In my experience, when physicists challenge a proposal as being inconsistent with relativity, they try to come up with an example where two people in different reference frames would make different predictions about the same concrete experimental (or thought-experimental) result. Can you think of anything like that? It seems like you have a different demand, which is “people in different reference frames cannot disagree about the value of ontologically primitive things”, even if the disagreement doesn’t shake out as a concrete prediction incompatibility. If so (and sorry if I’m misunderstanding), I guess I just don’t see why that’s important. Why can’t something be both ontologically primitive and reference frame dependent? Like velocity, to take an everyday example. I don’t know if it’s ontologically primitive (partly because I’m not sure what ontologically primitive means), but anyway, I don’t see why reference frame dependence should count against it.

At this point I have nothing to say, because there’s no coherent concept of ‘world’ left to debate.

This could become a version of the ‘many-minds interpretation’. But now you need to make ‘mind’ a rigorous concept. There has to be something exact in the ontology that corresponds to the specificity of what we see! - whether it’s a whole ‘world’, or just an ‘observer experience’. If everything other than the universal wavefunction is fuzzy and vague and a matter of convention, you no longer have a theory corresponding to observed reality.

The 4-velocity (considered as an invariant geometric object, rather than in terms of covariant components) is the fundamental entity.

Good! Maybe we’re on the same page there. “World” is not part of the theory and is not a well-defined concept, in my opinion.

Hmm, I guess I would propose something like “the complete history of exactly which neurons in a brain fire at which times, to 1us accuracy, is a mind, for present purposes”. Then I would argue that different “minds” don’t exhibit measurable quantum interference with each other, or we can say “different minds are in different worlds / branches” as a casual shorthand for that, if we want. And there is a well-defined (albeit complicated) way to project the universal wavefunction into the subspace of one “mind”, in order to calculate its quantum amplitude, and then you can apply the Born rule for the indexical calculation of how likely you are to find yourself in that mind. Something like that, I guess. I haven’t thought it through very carefully, I just think something vaguely like that could work, with a bit more effort to iron out the details. I’m not sure what’s in the literature, maybe there’s a better approach...

I agree that it isn’t a problem for practical purposes but if we are talking about a fundamental theory about reality shouldn’t questions like “How many worlds are there?” have unambiguous answers?

No … “how many worlds are there” is not a question with a well-defined answer in Everett’s theory. It’s like “How many grains of sand make up a heap?” … just a meaningless question. The notion that there is a specific, well-defined number of worlds is sometimes implied by the language used in simplifications / popularizations of the theory, but it’s not part of the actual theory, and really it can’t possibly be, I don’t think.

I agree that the question “how many worlds are there” doesn’t have a well-defined answer in the MWI. I disagree that it is a meaningless question.

From the bird’s-eye view, the ontology of the MWI seems pretty clear: the universal wavefunction is happily evolving (or is it?). From the frog’s-eye view, the ontology is less clear. The usual account of an experiment goes like this:

The system and the observer come together and interact

This leads to entanglement and decoherence in a certain basis

In the final state, we have a branch for each measurement outcome. i.e. there are now multiple versions of the observer

This seems to suggest a nice ontology: first there’s one observer, then the universe splits and afterwards we have a certain number of versions of the observer. I think questions like “When does the split happen?” and “How many versions?” are important because they would have well-defined answers if the nice ontology was tenable.

Unfortunately it isn’t, so the ontology is muddled. We have to use terms like “approximately zero” and “for all practical purposes”, which takes us most of the way back to giving the person who determines which approximations are appropriate and what is practical—aka the observer—an important part in the whole affair.

The ontology doesn’t feel muddled to me, although it does feel… not very quantum? Like a thing that seems to be happening with collapse postulates is that it takes seriously the “everything should be quantized” approach, and so insists on ending up with one world (or discrete numbers of worlds). MWI instead seems to think that wavefunctions, while having quantized bases, are themselves complex-valued objects, and so there doesn’t need to be a discrete and transitive sense of whether two things are ‘in the same branch’, and instead it seems fine to have a continuous level of coherence between things (which, at the macro-scale, ends up looking like being in a ‘definite branch’).

[I don’t think I’ve ever seen collapse described as “motivated by everything being quantum” instead of “motivated by thinking that only what you can see exists”, and so quite plausibly this will fall apart or I’ll end up thinking it’s silly or it’s already been dismissed for whatever reason. But somehow this does seem like a lens where collapse is doing the right sort of extrapolating principles where MWI is just blindly doing what made sense elsewhere. On net, I still think wavefunctions are continuous, and so it makes sense for worlds to be continuous too.]

Like, I think it makes more sense to think of MWI as “first many, then even more many,” at which point questions of “when does the split happen?” feel less interesting, because the original state is no longer as special. When I think of the MWI story of radioactive decay, for example, at every timestep you get two worlds, one where the particle decayed at that moment and one where it held together, and as far as we can tell, if time is quantized, it must have very short steps, and so this is very quickly a very large number of worlds. If time isn’t quantized, then this has to be spread across continuous space, and so thinking of there being a countable number of worlds is right out.

What I called the “nice ontology” isn’t so much about the number of worlds or even countability but about whether the worlds are well-defined. The MWI gives up a unique reality for things. The desirable feature of the “nice ontology” is that the theory tells us what a “version” of a thing is. As we all seem to agree, the MWI doesn’t do this.

If it doesn’t do this, what’s the justification for speaking of different versions in the first place? I think pure MWI only makes sense as “first one, then one”. After all, there’s just the universal wave function evolving, and pure MWI doesn’t give us any reason to take a part of this wavefunction and say there are many versions of this.

You can derive the practical usefulness of the Copenhagen approach from MWI without postulating the reality of observables.

I never actually heard any coherent arguments in favor of the reality of observables. If we’re giving up on minimizing complexity, why not go all the way back to the original intuitions and say that the Spirit of the Forest shows you a world consistent with QM calculations?

And to avoid misunderstanding: MWI means the wavefunction is real, but worlds and the Born rule are just arbitrary approximations.

First, I was already on board with all the content of this post. My question is this: would there be any difference, or would it help resolve any confusion for anyone, if instead we said something like “There is still just one ‘world’ in the sense that there’s one universal equation constantly following the same rule. The math shows that that world consists of many non-interacting parts, and the number of non-interacting parts grows with time. For convenience, when performing experiments, we ignore the non-interacting components, just like we already ignore components outside the experimental system, only now we also re-normalize to exclude the non-interacting components”?

Do you see any technical or conceptual challenges which the MWI has yet to address or do you think it is a well-defined interpretation with no open questions?

What’s your model for why people are not satisfied with the MWI? The obvious ones are 1) dislike for a many worlds ontology and 2) ignorance of the arguments. Do you think there are other valid reasons?

There are remaining open questions concerning quantum mechanics, certainly, but I don’t really see any remaining open questions concerning the Everett interpretation.

“Valid” is a strong word, but other reasons I’ve seen include classical prejudice, historical prejudice, dogmatic falsificationism, etc. Honestly, though, as I mention in the paper, my sense is that most big name physicists that you might have heard of (Hawking, Feynman, Gell-Mann, etc.) have expressed support for Everett, so it’s really only more of a problem among your average physicist that probably just doesn’t pay that much attention to interpretations of quantum mechanics.

Thanks for answering. I didn’t find a better word but I think you understood me right.

So you basically think that the case is settled. I don’t agree with this opinion.

I’m not convinced of the validity of the derivations of the Born rule (see IV.C.2 of this for some criticism in the literature). I also see valid philosophical reasons for preferring other interpretations (like quantum bayesianism, aka QBism).

I don’t have a strong opinion on what is the “correct” interpretation myself. I am much more interested in what they actually say, in their relationships, and in understanding why people hold them. After all, they are empirically indistinguishable.

There are other big name physicists who don’t agree (Penrose, Weinberg) and I don’t think you are right about Feynman (see “Feynman said that the concept of a “universal wave function” has serious conceptual difficulties.” from here). Also in the actual quantum foundations research community, there’s a great diversity of opinion regarding interpretations (see this poll).

I’d like to know whether worlds consist of states in a coherent superposition, or whether they are decoherent.

Also:

Have you read Dowker and Kent’s paper?

And what is your solution to the basis problem?

This added to my layperson’s understanding of both MWI and quantum mechanics more generally.

Immediately under the subhead “The Apparent Collapse of The Wave Function,” what is a-sub-i in the initial state?

Glad you liked it!

a_i is the amplitude—the assumption is that the ϕ_i are normalized, orthogonal eigenfunctions (that is, ∫ϕ_i^* ϕ_i dx = 1 and ∫ϕ_i^* ϕ_j dx = 0 for i ≠ j).
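To make the orthonormality conditions concrete, here is a quick numerical check (my own sketch, not from the post), using the particle-in-a-box eigenfunctions as an example:

```python
import math

# Sketch of the orthonormality conditions above, using the particle-in-a-box
# eigenfunctions phi_n(x) = sqrt(2/L) * sin(n*pi*x/L) on [0, L] (my choice of
# example; these functions are real, so phi* = phi).
L = 1.0

def phi(n, x):
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

def overlap(m, n, steps=100_000):
    """Midpoint-rule approximation of the integral of phi_m * phi_n over [0, L]."""
    dx = L / steps
    return sum(phi(m, (k + 0.5) * dx) * phi(n, (k + 0.5) * dx)
               for k in range(steps)) * dx

print(round(overlap(1, 1), 6))  # normalized: ≈ 1.0
print(round(overlap(1, 2), 6))  # orthogonal: ≈ 0.0
```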

When a QM state is written concretely enough to make a prediction, it is written on a basis. If it can be written as a single term on a suitable choice of basis, then it is what is known as a pure state. Note that there is no fact of the matter about whether a pure state is superposed, or how it is superposed, unless there is an objective fact about its basis. If the basis is not an intrinsic part of the state, but chosen by the experimenter, or implied by the way the experiment is conducted, then it is unmysterious that when one measures a system in a superposition of multiple states, it is only ever found in one of them.

While understanding basis as a “map” feature helps explain measurement, it undermines coherence-based many worlds, since there is no longer a fact of the matter about how “worlds”, ie states, are organised, or even about where there is more than one.
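The basis-relativity point can be illustrated with a toy calculation (a sketch of my own, not from the comment above): the same qubit state is a two-term superposition in one basis and a single term in another.

```python
import math

# Toy illustration (mine): the state |+> = (|0> + |1>)/sqrt(2) is a two-term
# superposition in the z-basis, but a single term in the x-basis {|+>, |->}.
s = 1 / math.sqrt(2)
state_z = [s, s]               # amplitudes over {|0>, |1>}

plus, minus = [s, s], [s, -s]  # the x-basis vectors, written in the z-basis

def inner(a, b):
    # Inner product <a|b>; conjugation omitted since everything here is real.
    return sum(x * y for x, y in zip(a, b))

state_x = [inner(plus, state_z), inner(minus, state_z)]
print(state_x)  # ≈ [1.0, 0.0]: a single term, so whether the state is
                # "superposed" depends on the basis chosen to describe it
```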

What exactly are we doing here? Calculating the complexity of a MW ontology versus a Copenhagen ontology, or figuring out the simplest way to predict observations?

The minimal subset of calculation you need to do in order to predict observation is in fact going to be the same whatever interpretation you hold to—it’s just the subset that “shut up and calculate” uses. Even many-worlders would go through a cycle of renormalising according to observed data and discarding unobserved data—which is to say, behaving “as if” collapse were occurring, even though they don’t interpret it that way. So just predicting observation doesn’t tell you which ontology is simplest.

On the other hand, modelling ontology without bothering about prediction can differentiate the complexity of ontologies. But why would you want to do that? What you are interested in is the simplest correct theory, not the simplest theory. It’s easy to come up with simple theories that are not predictive.
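The renormalise-and-discard cycle described above can be sketched in a few lines (names and numbers are my own):

```python
# Minimal sketch (my own) of the renormalise-and-discard cycle: condition a
# qubit state on the observed outcome, drop the unobserved branch, renormalise.
state = [complex(3, 0) / 5, complex(4, 0) / 5]  # amplitudes for outcomes 0 and 1

observed = 0            # suppose the experimenter sees outcome 0
kept = state[observed]
norm = abs(kept)

# Whether read as "collapse" or as "restricting attention to one branch",
# the working state afterwards is the same:
post = [kept / norm if i == observed else 0j for i in range(len(state))]
print(post)  # amplitude 1 on the observed branch, 0 elsewhere
```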

The wave function is described using imaginary numbers. If we are “taking the wave function seriously as a physical entity”—does that mean the imaginary part has physical significance? For example, if a cat has amplitude (0; 1), does that mean the real part doesn’t exist, but the imaginary part is full of life?

I wouldn’t get too hung up on “real” and “imaginary”. A complex number is just a mathematical object. You can equally well say that there are no complex numbers in quantum mechanics at all … only ordered pairs of real numbers like (x, y). You can “add” two pairs by (x, y) + (z, w) = (x+z, y+w), and you can “multiply” two pairs by (x, y) × (z, w) = (xz−yw, xw+yz). (See here.) To answer your question more directly, you can multiply all states simultaneously by an arbitrary phase factor like −1 or i or −i and it makes no difference—global phase factors are unobservable. Sometimes people say that quantum states are complex rays, or points in a complex projective space, to make that point more clear. That actually makes the math more complicated and annoying in some cases … I learned this lesson by getting it wrong when writing the original version of this Wikipedia article.

Strongly downvoted for basic misunderstanding of how science works (you test your theories! not wax poetic about them), alt-facts (the whole section on falsifiability is nonsense), and citing sus sources like hedweb. But MWI is applause light on this forum, so whatever.
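Returning to the ordered-pair arithmetic from the complex-numbers comment above, both rules can be checked directly in code (a quick sketch of my own):

```python
# Check (my own sketch) of the ordered-pair arithmetic from the complex-numbers
# comment above: the pair (x, y) stands in for x + iy.
def add(a, b):
    (x, y), (z, w) = a, b
    return (x + z, y + w)

def mul(a, b):
    (x, y), (z, w) = a, b
    return (x * z - y * w, x * w + y * z)

def prob(p):
    return p[0] ** 2 + p[1] ** 2  # |amplitude|^2

print(add((1, 2), (3, 4)))      # (4, 6)
print(mul((0, 1), (0, 1)))      # (-1, 0): i * i = -1 in pair form

# Global phase: multiplying an amplitude by i leaves |amplitude|^2 unchanged.
amp = (0.6, 0.8)
rotated = mul((0, 1), amp)      # (-0.8, 0.6)
print(prob(amp), prob(rotated)) # both ≈ 1.0
```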

I’d say that only a part of the section on falsifiability is nonsense, not the whole thing.

Paragraph 2 is reasonable. Physically speaking, you more or less have 2 options: Arbitrary objects either can be placed in superposition, or they can’t. Unsurprisingly, this is experimentally testable by trying to put progressively larger and larger objects into superposition and seeing if you are successful. (Or more complicated objects, or more massive objects, whatever.) This also has practical consequences for our ability to build quantum computers.

Paragraph 3 is speculative, but also reasonable: If you believe that arbitrary objects can be placed into superposition, then you had better believe that applies to very massive objects and their gravitational fields. This experiment hasn’t been done yet, but it could be done in principle, and it’s even possible to do without messing around with supermassive black holes or Planck-energy accelerators. [1]

Paragraph 4 is a super sketchy anthropic-style argument that should definitely have been left out.

Paragraph 5 is about simplicity, which is great and all, but doesn’t really bear on how falsifiable a theory is. Also should have been left out, or at least moved to the section on simplicity. (The simplicity advantage of decoherence over objective collapse theories is overrated IMO. Yes, decoherence is probably somewhat simpler than objective collapse, but only by a handful of Kolmogorov bits. Not really enough to be conclusive, so we should just do the experiments.)

[1] https://physics.aps.org/articles/v10/s138

I don’t generally feed trolls, but I literally have no idea what hedweb even is and am honestly just curious how you even got that from my references.

Reference 16, The Everett FAQ by M. C. Price, is hosted on hedweb, which is a website about the hedonistic imperative to abolish suffering. IDK why it’s hosted there.