“Eliezer’s argument is that multiple worlds require no additions to the length of the theory if it was formally expressed, whereas a ‘deleting worlds’ function is additional. It’s also unclear where it would kick in, what ‘counts’ as a sufficiently fixed function to chop off the other bit.”
Run time is at least as important as length. If we want to simulate the evolution of the wavefunction on a computer, do we get a more accurate answer, or explain more phenomena, by computing an exploding tree of alternatives that never significantly influence anything we can observe? Or does the algorithm explain more by pruning these irrelevant branches and elaborating the branches that actually make an observable difference? We save exponential time, and thus explain exponentially more, by pruning the branches.
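As a toy illustration of the pruning claim (a sketch with arbitrary split ratios and threshold, not a claim about real QM codes): represent a superposition as amplitudes indexed by branch labels, split every branch at each step, and drop branches whose amplitude falls below a threshold.

```python
import math

def step(branches):
    # Toy decoherence step: every branch splits into a dominant and a rare child.
    # The 0.99/0.01 split is an arbitrary illustrative choice.
    new = {}
    for label, amp in branches.items():
        new[label + "0"] = amp * math.sqrt(0.99)
        new[label + "1"] = amp * math.sqrt(0.01)
    return new

def prune(branches, eps=2e-3):
    # Drop branches with negligible amplitude, then renormalize the rest.
    kept = {k: a for k, a in branches.items() if abs(a) >= eps}
    norm = math.sqrt(sum(abs(a) ** 2 for a in kept.values()))
    return {k: a / norm for k, a in kept.items()}

full = {"": 1.0}      # keep everything: 2^n branches after n steps
trimmed = {"": 1.0}   # prune negligible branches as we go
for _ in range(10):
    full = step(full)
    trimmed = prune(step(trimmed))
```

After ten steps the unpruned tree holds 2^10 = 1024 branches, while the pruned one keeps only the few dozen that carry essentially all of the probability mass.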
“It’s not clear from your post if you think the other half’s chopped off because we haven’t observed it, or we don’t observe it because it’s chopped off!”
Neither. QM is objective, and the other half is chopped off because decoherence created a mutually exclusive alternative. This presents no more problem for my interpretation (which might be called “quantum randomness is objective” or “God plays dice, get over it”) than it does for MWI (when does a “world” branch off?). It’s the sorites paradox either way.
“The other point is that if we are ‘Human-LEFT’ then we don’t expect the other part of the wave function to be observable to us. Does that mean we delete it from what is real?”
Yes, for the same reason we delete other imagined but unobserved things like Santa Claus, absolute space, and the aether from what we consider real. If we don’t observe them and they are unnecessary for explaining the world we do see, they don’t belong in science.
You’re arguing about something that seems interesting and possibly important, but it doesn’t sound like the mathematical likelihood of the theory. Eliezer starts from a Bayesian interpretation of this number as a rational degree of belief, theoretically determined by the evidence we have. As I understand it, this quantity has a correct value, and the question of how much the theory explains has a definite answer, whether or not we can calculate it. The alternate Discordian or solipsistic view has much to recommend it but runs into problems if we take it as a general principle.
Now run time has no obvious effect on likelihood of truth. I don’t know if message length does either, but at least we have an argument for this (see Solomonoff induction). And the claim that MWI adds an extra postulate of its own seems false. MWI tries to follow Occam’s Razor—in a form that seems to agree with Solomonoff and Isaac Newton—by saying that no causes exist but arrows attached to large sets of numbers, and the function that attaches them. Everything you call magical or imaginary follows directly from this.
Before moving on to the problem with this interpretation, please note that Bayesianism also gives a different account of “unobserved things”. Some of them, like aether and possibly absolute space, decrease the prior likelihood of a theory by adding extra assumptions to the math. (Eliezer argues this applies to objective collapse.) Others, like Santa Claus, would increase the probability of evidence we do not observe. This has no relevance for alternate worlds. The evidence you seem to want has roughly zero probability in the theory you criticize, so its absence doesn’t tell us anything. The argument for adopting the theory lies elsewhere, in the success of quantum math.
Now obviously the Born rule creates a problem for this argument. The theory has a great big mathematical hole in it. But from this Bayesian perspective, and going by the information I have so far, we have no reason to think that whatever fills the hole will reduce the number of “worlds” to exactly one, any more than we have reason to believe in exactly 666 worlds. It really does seem that simple. And from what I’ve managed to read of Feynman and Hibbs the authors definitely believe in more than one world. (“From what does the uncertainty arise? Almost without doubt it arises from the need to amplify the effects of single atomic events to such a level that they may be readily observed by large systems.” p.22) So I don’t think my simple view results from ignorance of QM as it existed then.
It sure seems to me as though Huve Erett advocates objective collapse. Maybe you can explain what part of the dialog convinces you that Huve Erett can’t be talking about objective collapse.
“This happens when, way up at the macroscopic level, we ‘measure’ something.”
vs. in objective collapse, when the collapse occurs has no necessary relationship to measurement. “Measurement” is a Copenhagen thing.
“So the wavefunction knows when we ‘measure’ it. What exactly is a ‘measurement’? How does the wavefunction know we’re here? What happened before humans were around to measure things?”
Again, this describes Copenhagen (or even Conscious Collapse, which is even worse). Objective collapse depends on neither measurements nor measurers.
Much of the rest of this parody might be characterized as a preposterously unfair roast of collapse theories, objective or otherwise, but the trouble is that all the valid criticisms also apply to MWI. For example, “the only law in all of quantum mechanics that is non-linear, non-unitary, non-differentiable and discontinuous” also applies to the law that any actually scientific account of MWI needs, but that MWI people sweep under the rug with incoherent talk about “decoherence”: namely, when “worlds” “split” such that we “find ourselves” in one but not the other. AFAIK, no MWI proponent has ever proposed a linear, unitary, or differentiable function predicting such a split that is consistent with what we actually observe in QM. And they couldn’t, because “world split” is nearly isomorphic with “collapse”; it’s just a more extravagant way of saying the same thing. If MWI came up with an objective “world branch” function, it would serve equally well (or even better, given Occam’s Razor) as an objective collapse function. In both MWI and collapse, part of the wavefunction effectively disappears from the observable universe; MWI only adds a gratuitous extra mechanism by which it re-appears in another, imaginary, unobservable “world.”
BTW, the standard way that QM treats the nondeterministic event predicted probabilistically by the wavefunction and the Born probabilities (whether you choose to call such an event “collapse”, “decoherence”, “branching worlds”, or otherwise) is completely non-linear, non-unitary, non-differentiable, discontinuous, and worst of all, nondeterministic (horrors!). In the matrix formulation, the “collapse”, if you will forgive the phrase, reduces a large (often infinite) set of possible eigenvalues and corresponding eigenvectors to one, the one we actually observe, according to the Born probabilities. No matter how much “interpreters” try to sweep this under the rug, this nondeterministic disappearance of all eigenvectors (or their isomorphs in other algebras) save one is central to real-world QM math; if it weren’t, QM wouldn’t predict the quantum events we actually observe. So the dispute here is with QM itself, not with collapse theories.
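For concreteness, here is a minimal sketch of that matrix-model step (the observable and state are arbitrary illustrative choices): diagonalize a Hermitian observable, compute Born probabilities from the projection amplitudes, and keep exactly one eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(state, observable):
    """Sample one eigenvalue of `observable` with Born probabilities
    |<v_i|psi>|^2 and discard every eigenvector except the observed one."""
    vals, vecs = np.linalg.eigh(observable)   # spectral decomposition
    amps = vecs.conj().T @ state              # projection amplitudes <v_i|psi>
    probs = np.abs(amps) ** 2
    probs /= probs.sum()                      # guard against rounding error
    i = rng.choice(len(vals), p=probs)
    return vals[i], vecs[:, i]                # observed value, post-collapse state

# Example: measuring sigma_x on the state |0> yields +1 or -1, each with p = 1/2.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
value, collapsed = measure(np.array([1, 0], dtype=complex), sigma_x)
```

The discontinuity lives in the sampling line: everything before it is linear algebra, and everything after it has thrown away all but one eigenvector.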
“This happens when, way up at the macroscopic level, we ‘measure’ something.”
“vs. in objective collapse, when the collapse occurs has no necessary relationship to measurement.”
Well, I don’t agree with the “vs”, but let that pass, since then the dialog quickly continues:
Then he reaches out for the paper, scratches out “When you perform a measurement on a quantum system”, and writes in, “When a quantum superposition gets too large.”
That occurs as early as one fourth of the way through the dialog, so that leaves three fourths of the dialog addressing what you are apparently calling an objective collapse theory.
Eliezer thinks objective collapse = Copenhagen. More precisely, I’ve never seen him distinguish the two, or acknowledge the possibility of denying that the wavefunction exists.
When an object leaves our Hubble volume does it cease to exist?
“Run time is at least as important as length.”
It’s reasonable to assume run time is important, but problematic to formalize. Run time is much more dependent on the underlying computational abstraction than description length is. Is the computer sequential? parallel? non-deterministic? quantum?
Depending on the underlying computer model MWI could actually be faster than a collapse hypothesis. MWI is totally local, hence easily parallelizable. Collapse hypotheses however require non-local communication, which create severe bottlenecks for parallel simulations.
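The locality contrast can be sketched in a toy lattice model (a discretized free particle with illustrative parameters, not any particular collapse theory): the dynamical update touches only nearest neighbours, while a Born-rule collapse needs a global reduction over the whole lattice.

```python
import numpy as np

def local_step(psi, dt=0.01):
    # Explicit Euler step of a discretized free Schrodinger equation: each site
    # reads only its two neighbours, so all sites can be updated in parallel.
    # (Euler only approximately preserves the norm; a real code would use a
    # unitary integrator.)
    lap = np.roll(psi, 1) + np.roll(psi, -1) - 2 * psi
    return psi + 1j * dt * lap

def collapse_step(psi, rng):
    # Born-rule collapse: requires the *global* distribution |psi|^2, i.e.
    # a full reduction across the lattice -- the parallel bottleneck.
    p = np.abs(psi) ** 2
    p /= p.sum()
    i = rng.choice(len(psi), p=p)
    out = np.zeros_like(psi)
    out[i] = 1.0
    return out

x = np.linspace(-10, 10, 256)
psi = np.exp(-x ** 2).astype(complex)
psi /= np.linalg.norm(psi)
psi = local_step(psi)                                      # local, parallelizable
collapsed = collapse_step(psi, np.random.default_rng(0))   # global, serializing
```

On a parallel machine `local_step` scales with the number of cores, while `collapse_step` forces a synchronization across all of them.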
“Imagine a universe containing an infinite line of apples.”
If we did I would imagine it, but we don’t. In QM we don’t observe infinite anything; we observe discrete events. That some of the math used to model this involves infinities may be merely a matter of convenience for dealing with a universe that may have a very large but finite number of voxels (or something similar), as suggested by the Planck length and similar ideas.
“It’s reasonable to assume run time is important, but problematic to formalize.”
Run-time complexity theory (and memory-space complexity, which also grows at least exponentially in MWI) is much easier to apply than Kolmogorov complexity in this context. Kolmogorov complexity only makes sense up to an additive constant, because the choice of language adds an (often large) constant to program length. So from the standpoint of Kolmogorov theory it doesn’t much matter that one adds a small extra constant number of bits to one’s theory, which makes it problematic to invoke Kolmogorov theory to distinguish between interpretations and equations that each add only a small constant number of bits.
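The standard statement of this language-dependence is the invariance theorem: for any two universal description languages $U$ and $V$,

$$K_U(x) \le K_V(x) + c_{UV},$$

where the constant $c_{UV}$ (roughly, the length of an interpreter for $V$ written in $U$) does not depend on $x$. For theories whose descriptions differ by only a handful of bits, this constant can swamp the difference.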
(Besides the fact that QM is really the wavefunction plus the nondeterministic Born probability, not merely the nominally deterministic wavefunction on which MWI folks focus, and everybody needs some “collapse”/“world split” rule for when the nondeterministic event happens, so there really isn’t even any clear constant-factor description-length parsimony to MWI.)
OTOH, MWI clearly adds at least an exponential (and perhaps worse, infinitely expanding at every step!) amount of run time, and a similar amount of required memory, not merely a small constant amount. As for the ability to formalize this there’s a big literature of run-time complexity that is similar to, but older and more mature than, the literature on Kolmogorov complexity.
“OTOH, MWI clearly adds at least an exponential (and perhaps worse, infinitely expanding at every step!) amount of run time, and a similar amount of required memory, not merely a small constant amount.”
I see. I think you are making a common misunderstanding of MWI (in fact, a misunderstanding I had for years). There is no actual branching in MWI, so the amount of memory required is constant. There is just a phase space (a very large phase space), and amplitudes at each point in the phase space are constantly flowing around and changing (in a local way).
If you had a computer with as many cores as there are points in the phase space then the simulation would be very snappy. On the other hand, using the same massive computer to simulate a collapse theory would be very slow.
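The constant-memory point can be sketched directly: a fixed-dimension state vector evolved by a unitary uses the same storage no matter how many steps you run. (A random unitary stands in for the physical time step here; the dimension is an arbitrary example.)

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 256                                   # fixed state-space dimension

# A random unitary (QR of a random complex matrix) stands in for one time step.
m = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
u, _ = np.linalg.qr(m)

state = np.zeros(dim, dtype=complex)
state[0] = 1.0
bytes_before = state.nbytes

for _ in range(50):
    state = u @ state                       # amplitudes flow; nothing branches

bytes_after = state.nbytes                  # unchanged: no per-step blowup
```

The memory footprint is set once by the dimension of the space, not by the number of steps; only a simulation that explicitly enumerates branches grows over time.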
“Imagine a universe containing an infinite line of apples.”
“If we did I would imagine it, but we don’t.”
This is an answer to a question from another person’s thread. My question was “When an object leaves our Hubble volume does it cease to exist?” I’m still curious to hear your answer.
That’s an easy one—objective collapse QM predicts that with astronomically^astronomically^astronomically high probability objects large enough to be seen at that distance (or even objects merely the size of ourselves) don’t cease to exist. Like everything else they continue to follow the laws of objective collapse QM whether we observe them or not.
The hypo is radically different from believing in an infinitely expanding infinity of parallel “worlds”, none of which we have ever observed, either directly or indirectly, and none of which are necessary for a coherent and objective QM theory.
“That’s an easy one—objective collapse QM predicts that with astronomically^astronomically^astronomically high probability objects large enough to be seen at that distance (or even objects merely the size of ourselves) don’t cease to exist. Like everything else they continue to follow the laws of objective collapse QM whether we observe them or not.”
Then I can define a new hypothesis, call it objective collapse++, which is exactly your objective collapse hypothesis with the added assumption that objects cease to exist outside our Hubble volume. Collapse++ has a slightly longer description length, but a greatly reduced run time. If we care about run time, why would we not prefer Collapse++ over the original collapse hypothesis?
The hypo is radically different from believing in an infinitely expanding infinity of parallel “worlds”
See my above comment about MWI having a fixed phase space that doesn’t actually increase in size over time. The idea of an increasing number of parallel universes is incorrect.
“MWI having a fixed phase space that doesn’t actually increase in size over time.”
(1) That assumes we are already simulating the entire universe from the Big Bang forward, which is preposterously infeasible (not to mention that we don’t know the starting state).
(2) It doesn’t model the central events in QM, namely the nondeterministic events which in MWI are interpreted as which “world” we “find ourselves” in.
Of course, in real QM work, simulations are what they are, independently of interpretations: they evolve the wavefunction, or a computationally more efficient but less accurate version of it, to the desired elaboration (which differs radically between applications). For output they often either graph the whole wavefunction (relying on the viewer to understand that such a graph corresponds to the results of a very large number of repeated experiments, not to a particular observable outcome) or do a Monte Carlo or Markov simulation of the nondeterministic events that are central to QM. But I’ve never seen a Monte Carlo or Markov simulation of QM that simulates the events that supposedly occur in “other worlds” we can never observe. It would indeed be exponentially (at least) more wasteful in time and memory, yet utterly pointless, for the same reasons that the interpretation itself is wasteful and pointless. You’d think that a good interpretation, even if it can’t produce any novel experimental predictions, could at least provide ideas for more efficient modeling of the theory, but MWI suggests quite the opposite: gratuitously inefficient ways to simulate a theory that is already extraordinarily expensive to simulate.
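The two output modes just described can be sketched side by side (an arbitrary two-peak toy wavefunction on a 1-D grid, not any specific experiment):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy wavefunction on a 1-D grid: two Gaussian lobes.
x = np.linspace(-5, 5, 200)
psi = np.exp(-(x - 1.5) ** 2) + np.exp(-(x + 1.5) ** 2)
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2))

# Output mode 1: graph the whole distribution |psi|^2
# (an ensemble prediction, not any single observable outcome).
born = np.abs(psi) ** 2

# Output mode 2: Monte Carlo draws of individual detection events.
hits = rng.choice(len(x), size=50_000, p=born)
empirical = np.bincount(hits, minlength=len(x)) / 50_000
```

With enough draws the Monte Carlo histogram converges to the Born distribution; neither output mode ever needs to track events in unobservable branches.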
Objective collapse, OTOH, continually prunes the possibilities of the phase space and thus suggests exponential improvements in simulation time and memory usage. Indeed, some versions of objective collapse are bona fide new theories of QM, making experimental predictions that distinguish them from the model of perpetual elaboration of a wavefunction. Penrose, for example, bases his version on a quantum gravity theory, and several experiments have been proposed to test it.
BTW, it’s MWI that adds extra postulates. In both MWI and collapse, parts of the wavefunction effectively disappear from the observable universe (or, as MWI folks like to say, “the world I find myself in”). MWI adds the completely gratuitous postulate that this portion of the wavefunction magically re-appears in another, imaginary, completely unobservable “world”, on top of the gratuitous extra postulate that these new worlds are magically created, and all of us magically cloned, such that the copy of myself that I experience finds itself in one “world” but not another. And all that just to explain why we observe a nondeterministic event, one random eigenstate out of the infinity of eigenstates derived from the wavefunction and operator, instead of observing all of them.
Why not just admit that quantum events are objectively nondeterministic and be done with it? What’s so hard about that?
In both MWI and collapse, parts of the wavefunction effectively disappear from the observable universe (or as MWI folks like to say “the world I find myself in.”) MWI adds the extra and completely gratuitous postulate that this portion of the wave function magically re-appears in another, imaginary, completely unobservable “world”, on top of the gratuitous extra postulate that these new worlds are magically created, and all of us magically cloned, such that the copy of myself I experience finds me in one “world” but not another.
This does not correspond to the MWI as promulgated by Eliezer Yudkowsky, which is more like, “In MWI, parts of the wavefunction effectively disappear from the observable universe—full stop.” My understanding is that EY’s view is that chunks of the wavefunction decohere from one another. The “worlds” of the MWI aren’t something extra imposed on QM; they’re just a useful metaphor for decoherence.
This leaves the Born probabilities totally unexplained. This is the major problem with EY’s MWI, and has been fully acknowledged by him in posts made in years past. It’s not unreasonable that you would be unaware of this, but until you’ve read EY’s MWI posts, I think you’ll be arguing past the other posters on LW.
Upvoted, although my understanding is that there is no difference between Eliezer’s MWI and canonical MWI as originally presented by Everett. Am I mistaken?
Since I’m not familiar with Everett’s original presentation, I don’t know if you’re mistaken. Certainly popular accounts of MWI do seem to talk about “worlds” as something extra on top of QM.
Popular accounts written by journalists who don’t really understand what they are talking about may treat “worlds” as something extra on top of QM, but after reading serious accounts of MWI by advocates for over two decades, I have yet to find any informed advocate who makes that mistake. I am positive that Everett did not make that mistake.
I think that’s just a common misunderstanding most people have of MWI, unfortunately. Visualizing a giant decohering phase space is much harder than imagining parallel universes splitting off. I’m fairly certain that Eliezer’s presentation of MWI is the standard one though (excepting his discussion of timeless physics perhaps).
“This leaves the Born probabilities totally unexplained.”
Mainstream philosophy of science claims to have explained the Born probabilities; Eliezer and some others here disagree with the explanations, but it’s at least worth noting that the quoted claim is controversial among those who have thought deeply about the question.
Imagine a universe containing an infinite line of apples. You can see them getting smaller into the distance, until eventually it’s not possible to resolve individual apples. Do you want to say that we could never justify or regard-as-scientific a theory which said “this line of apples is infinite”?
You’re almost exactly playing the part of Huve Erett in this dialog:
http://lesswrong.com/lw/q7/if_manyworlds_had_come_first/
Emphasize the “almost”. I’m advocating objective collapse, not Copenhagen.
It sure seems to me as though Huve Erett advocates objective collapse. Maybe you can explain what part of the dialog convinces you that Huve Erett can’t be talking about objective collapse.
“This happens when, way up at the macroscopic level, we ‘measure’ something.”
vs. in objective collapse, when the collapse occurs has no necessary relationship to measurement. “Measurement” is a Copenhagen thing.
“So the wavefunction knows when we ‘measure’ it. What exactly is a ‘measurement’? How does the wavefunction know we’re here? What happened before humans were around to measure things?”
Again, this describes Copenhagen (or even Conscious Collapse, which is even worse). Objective collapse depends on neither measurements nor measurers.
Much of the rest of this parody might be characterized as a preposterously unfair roast of collapse theories, objective or otherwise, but the trouble is all the valid criticisms also apply to MWI. For example “the only law in all of quantum mechanics that is non-linear, non-unitary, non-differentiable and discontinuous” also applies to the law that is necessary for any actually scientific account of MWI, but that MWI people sweep under the rug with incoherent talk about “decoherence”, namely when “worlds” “split” such that we “find ourselves” in one but not the other. AFAIK, no MWI proponent has ever proposed a linear, unitary, or differentiable function that predicts such a split that is consistent with what we actually observe in QM. And they couldn’t, because “world split” is nearly isomorphic with “collapse”—it’s just an excessive way of saying the same thing. If MWI came up with an objective “world branch” function it would serve equallywell, or even better given Occam’s Razor, as an objective collapse function. In both MWI and collapse part of the wave function effectively disappears from the observable universe—MWI only adds a gratuitous extra mechanism, that it re-appears in another, imaginary, unobservable “world.”
BTW, the standard way that QM treats the nondeterministic event predicted probabilistically by the wavefunction and the Born probabilities (whether you choose to call such event “collapse”, “decoherence”, “branching worlds”, or otherwise) is completely non-linear, non-unitary, non-differentiable and discontinuous, and worst of all, nondeterminstic (horrors!). In the matrix model, the “collapse”, if you will forgive the phrase, of a large (often infinite) set of possible eigenvalues and corresponding eigenvectors to one, the one we actually observe, according to the Born probabilities. No matter how much “interpreters” try to sweep this under the rug this nondeterminstic disappearance of all eigenvectors (or their isomorphs in other algebras) save one is central to real-world QM math and if it weren’t so it wouldn’t predict the quantum events we actually observe. So the dispute here is with QM itself, not with collapse theories.
Well, I don’t agree with the “vs”, but let that pass, since then the dialog quickly continues:
That occurs as early as one fourth of the way through the dialog, so that leaves three fourths of the dialog addressing what you are apparently calling an objective collapse theory.
Eliezer thinks objective collapse = Copenhagen. More precisely, I’ve never seen him distinguish the two, or acknowledge the possibility of denying that the wavefunction exists.
When an object leaves our Hubble volume does it cease to exist?
It’s reasonable to assume run time is important, but problematic to formalize. Run time is much more dependent on the underlying computational abstraction than description length is. Is the computer sequential? parallel? non-deterministic? quantum?
Depending on the underlying computer model MWI could actually be faster than a collapse hypothesis. MWI is totally local, hence easily parallelizable. Collapse hypotheses however require non-local communication, which create severe bottlenecks for parallel simulations.
“Imagine a universe containing an infinite line of apples.”
If we did I would imagine it, but we don’t. In QM we don’t observe infinite anything, we observe discrete events. That some of the math to model this involves infinities may be merely a matter of convenience to deal with a universe that may merely have a very large but finite number of voxels (or similar), as suggested by Planck length and similar ideas.
“It’s reasonable to assume run time is important, but problematic to formalize.”
Run time complexity theory (and also memory space complexity, which also grows at least exponentially in MWI) is much easier to apply than Kolmogorov complexity in this context. Kolmogorov complexity only makes sense as an order of magnitude (i.e. O(f(x) not equal to merely a constant), because choice of language adds an (often large) constant to program length. So from Kolmogorov theory it doesn’t much matter than one adds a small extra constant amount of bits to one’s theory, making it problematic to invoke Kolmogorov theory to distinguish between different interpretations and equations that each add only a small constant amount of bits.
(Besides the fact that QM is really wavefunction + nondeterministic Born probability, not merely the nominally deterministic wave function on which MWI folks focus, and everybody needs some “collapse”/”world split” rule for when the nondeterministic event happens, so there really is not even any clear constant factor equation description length parsimony to MWI).
OTOH, MWI clearly adds at least an exponential (and perhaps worse, infinitely expanding at every step!) amount of run time, and a similar amount of required memory, not merely a small constant amount. As for the ability to formalize this there’s a big literature of run-time complexity that is similar to, but older and more mature than, the literature on Kolmogorov complexity.
I see. I think you are making a common misunderstanding of MWI (in fact, a misunderstanding I had for years). There is no actual branching in MWI, so the amount of memory required is constant. There is just a phase space (a very large phase space), and amplitudes at each point in the phase space are constantly flowing around and changing (in a local way).
If you had a computer with as many cores as there are points in the phase space then the simulation would be very snappy. On the other hand, using the same massive computer to simulate a collapse theory would be very slow.
This is an answer to a question from another person’s thread. My question was “When an object leaves our Hubble volume does it cease to exist?” I’m still curious to hear your answer.
That’s an easy one—objective collapse QM predicts that with astronomically^astronomically^astronomically high probability objects large enough to be seen at that distance (or even objects merely the size of ourselves) don’t cease to exist. Like everything else they continue to follow the laws of objective collapse QM whether we observe them or not.
The hypo is radically different from believing in an infinitely expanding infinity of parallel “worlds”, none of which we have ever observed, either directly or indirectly, and none of which are necessary for a coherent and objective QM theory.
Then I can define a new hypothesis, call it objective collapse++, which is exactly your objective collapse hypothesis with the added assumption that objects cease to exist outside of our Hubble volume. Collapse++ has a slightly longer description length, but it has a greatly reduced runtime. If we care about runtime length, why would we not prefer Collapse++ over the original collapse hypothesis?
See my above comment about MWI having a fixed phase space that doesn’t actually increase in size over time. The idea of an increasing number of parallel universes is incorrect.
“MWI having a fixed phase space that doesn’t actually increase in size over time.”
(1) That assumes we are already simulating the entire universe from the Big Bang forward, which is already preposterously infeasible (not to mention that we don’t know the starting state).
(2) It doesn’t model the central events in QM, namely the nondeterministic events which in MWI are interpreted as which “world” we “find ourselves” in.
Of course in real QM work, simulations are what they are independently of interpretations: they evolve the wavefunction, or a computationally more efficient but less accurate version of it, to the desired elaboration (which is radically different for different applications). For output they often either graph the whole wavefunction (relying on the viewer of the graph to understand that such a graph corresponds to the results of a very large number of repeated experiments, not to a particular observable outcome) or do a Monte Carlo or Markov simulation of the nondeterministic events that are central to QM. But I’ve never seen a Monte Carlo or Markov simulation of QM that simulates the events that supposedly occur in “other worlds” we can never observe. It would be exponentially (at least) more wasteful in time and memory, yet utterly pointless, for the same reasons that the interpretation itself is wasteful and pointless. You’d think that a good interpretation, even if it can’t produce any novel experimental predictions, could at least provide ideas for more efficient modeling of the theory, but MWI suggests quite the opposite: gratuitously inefficient ways to simulate a theory that is already extraordinarily expensive to simulate.
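As a minimal sketch of the Monte Carlo treatment mentioned above (a toy, not any poster’s actual code): each run samples a single measurement outcome with Born-rule weight |c_i|², rather than carrying every branch of the wavefunction forward:

```python
import random

def born_sample(amplitudes):
    """Sample one outcome index with probability |c_i|^2 (Born rule)."""
    probs = [abs(c) ** 2 for c in amplitudes]
    total = sum(probs)  # ~1.0 for a normalized state
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1  # guard against floating-point shortfall

# Equal superposition of two eigenstates, e.g. (|0> + |1>) / sqrt(2)
state = [2 ** -0.5, 2 ** -0.5]
counts = [0, 0]
for _ in range(10_000):
    counts[born_sample(state)] += 1
# Each run yields exactly one outcome; frequencies approach 50/50.
```

Each simulated run produces one definite outcome, which is exactly the sense in which a Monte Carlo simulation models the nondeterministic events and never touches the “other worlds.”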
Objective collapse, OTOH, continually prunes the possibilities of the phase space and thus suggests exponential improvements in simulation time and memory usage. Indeed, some versions of objective collapse are bona fide new theories of QM, making experimental predictions that distinguish them from the model of perpetual elaboration of a wavefunction. Penrose, for example, bases his version on a quantum gravity theory, and several experiments have been proposed to test it.
BTW, it’s MWI that adds extra postulates. In both MWI and collapse, parts of the wavefunction effectively disappear from the observable universe (or, as MWI folks like to say, “the world I find myself in”). MWI adds the extra and completely gratuitous postulate that this portion of the wavefunction magically reappears in another, imaginary, completely unobservable “world”, on top of the gratuitous extra postulate that these new worlds are magically created, and all of us magically cloned, such that the copy of myself I experience finds itself in one “world” but not another. And all that just to explain why we observe a nondeterministic event, one random eigenstate out of the infinity of eigenstates derived from the wavefunction and operator, instead of observing all of them.
Why not just admit that quantum events are objectively nondeterministic and be done with it? What’s so hard about that?
This does not correspond to the MWI as promulgated by Eliezer Yudkowsky, which is more like, “In MWI, parts of the wavefunction effectively disappear from the observable universe—full stop.” My understanding is that EY’s view is that chunks of the wavefunction decohere from one another. The “worlds” of the MWI aren’t something extra imposed on QM; they’re just a useful metaphor for decoherence.
This leaves the Born probabilities totally unexplained. This is the major problem with EY’s MWI, and has been fully acknowledged by him in posts made in years past. It’s not unreasonable that you would be unaware of this, but until you’ve read EY’s MWI posts, I think you’ll be arguing past the other posters on LW.
Upvoted, although my understanding is that there is no difference between Eliezer’s MWI and canonical MWI as originally presented by Everett. Am I mistaken?
Since I’m not familiar with Everett’s original presentation, I don’t know if you’re mistaken. Certainly popular accounts of MWI do seem to talk about “worlds” as something extra on top of QM.
Popular accounts written by journalists who don’t really understand what they are talking about may treat “worlds” as something extra on top of QM, but after reading serious accounts of MWI by advocates for over two decades, I have yet to find any informed advocate who makes that mistake. I am positive that Everett did not make that mistake.
I think that’s just a common misunderstanding most people have of MWI, unfortunately. Visualizing a giant decohering phase space is much harder than imagining parallel universes splitting off. I’m fairly certain that Eliezer’s presentation of MWI is the standard one though (excepting his discussion of timeless physics perhaps).
Mainstream philosophy of science claims to have explained the Born probabilities; Eliezer and some others here disagree with the explanations, but it’s at least worth noting that the quoted claim is controversial among those who have thought deeply about the question.
Good to know.
Imagine a universe containing an infinite line of apples. You can see them getting smaller into the distance, until eventually it’s not possible to resolve individual apples. Do you want to say that we could never justify or regard-as-scientific a theory which said “this line of apples is infinite”?