As I understand it, EY’s commitment to MWI is a bit more principled than a choice between soccer teams. MWI is the only interpretation that makes sense given Eliezer’s prior metaphysical commitments. Yes rational people can choose a different interpretation of QM, but they probably need to make other metaphysical choices to match in order to maintain consistency.
MWI distinguishes itself from Copenhagen by making testable predictions. We simply don’t have the technology yet to test them to a sufficient level of precision to distinguish which meta-theory models reality.
In the meantime, there are strong metaphysical reasons (Occam’s razor) to trust MWI over Copenhagen.
Indeed there are, but this is not the same as strong metaphysical reasons to trust MWI over all alternative explanations. In particular, EY argued quite forcefully (and rightly so) that collapse postulates are absurd, as they would be the only “nonlinear, non CPT-symmetric, acausal, FTL, discontinuous...” part of all physics. He then argued that since all single-world QM interpretations are absurd (a non-sequitur on his part, as not all single-world QM interpretations involve a collapse), many-worlds wins as the only multi-world interpretation (which is also slightly inaccurate, not that many-minds is taken that seriously around here). Ultimately, I feel that LW assigns too high a prior to MW (and too low a prior to Bohmian mechanics).
It’s not just about collapse—every single-world QM interpretation either involves extra postulates, non-locality, or other surprising alterations of physical law, or yields falsified predictions. The FAQ I linked to (http://www.hedweb.com/manworld.htm#unique) addresses these points in great detail.
MWI is simple in the Occam’s razor sense—it is what falls out of the equations of QM if you take them to represent reality at face value. Single-world meta-theories require adding additional restrictions which are, at this time, completely unjustified by the data.
Single-world decoherence is a single-world QM interpretation that avoids all those problems.
Nothing about decoherence necessitates many worlds. Decoherence was introduced as a fix to Everett’s theory, because it isn’t empirically adequate by itself. The preferred basis is another attempted fix.
So there isn’t a single many-worlds theory, and they all require additional ingredients to work.
MWI is more than one theory. There is an approach to MWI based on coherent superpositions, and a version based on decoherence. These are (for all practical purposes) incompatible opposites, but are treated as interchangeable in Yudkowsky’s writings. Decoherent branches are large, stable, non-interacting and irreversible... everything that would be intuitively expected of a “world”. But there is no empirical evidence for them (in the plural), nor are they obviously supported by the core mathematics of quantum mechanics, the Schrödinger equation. Coherent superpositions are small-scale, down to single particles, observer-dependent, reversible, and continue to interact (strictly speaking, interfere) after “splitting”. The last point is particularly problematic, because if large-scale coherent superpositions existed, they would create naked-eye, macroscopic-scale effects: e.g. ghostly traces of a world where the Nazis won.
We have evidence of small-scale coherent superposition, since a number of observed quantum effects depend on it, and we have evidence of decoherence, since complex superpositions are difficult to maintain. What we don’t have evidence of is decoherence into multiple branches. From the theoretical perspective, decoherence is a complex, entropy-like process which occurs when a complex system interacts with its environment. Decoherence isn’t simple. But without decoherence, MW doesn’t match observation. So there is no theory of MW that is both simple and empirically adequate, contra Yudkowsky and Deutsch.
The original, Everettian, or coherence-based approach is minimal, but fails to predict classical observations. (At all: it fails to predict the appearance of a broadly classical universe.) If everything is coherently superposed, so are observers... but then observers would only ever see superpositions of dead and living cats, etc. A popular but mistaken idea is that splitting happens microscopically, whenever any system (not necessarily a macroscopic observer) becomes entangled with a superposition, and only requires the additional assumption that splitting occurs in a classical basis to match observation. But that would make complex superpositions non-existent, whereas a number of instruments and technologies depend on them—so it’s empirically false.
The later, decoherence-based approach is more empirically adequate, but seems to require additional structure, placing its simplicity in doubt. In any case, without a single definitive mechanism, there is no definitive answer to “how complex is MWI?”
Coherent superpositions exist, but their components aren’t worlds in any intuitive sense. Decoherent branches would be worlds in the intuitive sense, and while there is plenty of evidence for decoherence—it is difficult to maintain complex coherent superpositions—there is no empirical evidence of decoherence causing multiple branches. There could be a theoretical justification for decoherent branching, but that is what much of the ongoing research is about—it isn’t a done deal, and therefore not a “slam dunk”. And, inasmuch as there is no agreed mechanism for decoherent branching, there is no definite fact about the simplicity of decoherent MWI.
The Yudkowsky-Deutsch claim is that there is a single MW theory, which is obviously simpler than its rivals. But there isn’t an MWI that is both known to be simple and known to imply the existence of “worlds” in the intuitive sense.
There is an approach to MWI based on coherent superpositions, and a version based on decoherence. These are (for all practical purposes) incompatible opposites, but are treated as interchangeable in Yudkowsky’s writings. <...> Coherent superpositions are small-scale, down to single particles, observer-dependent, reversible, and continue to interact (strictly speaking, interfere) after “splitting”. The last point is particularly problematic, because if large-scale coherent superpositions existed, they would create naked-eye, macroscopic-scale effects: e.g. ghostly traces of a world where the Nazis won.
I don’t believe Yudkowsky ever supported such “branches” which could change each other’s internal content? Wherever I look, I see only that they interfere with each other’s amplitudes (and thus also with probabilities). If so, I’m going to drop “coherent superpositions” altogether.
I’m quite interested in what labels you would use for the following experiment.
1. We’re talking to each other on LessWrong.
2. I invoke a quantum RNG to get a number between 1 and 8, and look at its response.
3. You do not know the generated number yet.
4. I say that the generator yielded ‘1’.
5. This random number is not very relevant for you, and you forget it over time.
I would label it so:
1. There is some region of world-branches, differing in some facts we do not know, but in each of those branches we’re talking to each other on LessWrong (and each of the concepts is meaningful).
2. The quantum RNG had some superposition of states; through measurement, this uncertainty also gets to sensors, is transmitted over the wire, and so on, until I see the result. Thereupon, my region of world-branches splits into eight branch-sections, equally sized and equally plausible, identified by the number I have seen.
3. For you, these branch-sections are not distinguishable yet; some could say that you are essentially in a single branch.
4. Now, the decoherence (or should I say information?) is again transmitted over the wire, and you land in the eight sections with different random numbers too.
5. If you forget the number, your region of world-branches essentially merges the eight sections back.
I really did use an online, presumably quantum RNG.
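As a toy sketch (my own illustration, not anything from the thread), the eight-way split in step 2 can be modelled as a uniform superposition over eight outcomes; the Born rule then gives each branch-section equal weight, matching the “equally sized and equally plausible” description above:

```python
import numpy as np

# Toy model of the 8-outcome quantum RNG: a uniform superposition
# over eight outcomes, each with amplitude 1/sqrt(8).
amplitudes = np.ones(8, dtype=complex) / np.sqrt(8)

# Born-rule probabilities: |amplitude|^2 for each branch-section.
probs = np.abs(amplitudes) ** 2

print(probs)        # eight equal probabilities of 0.125
print(probs.sum())  # 1.0 -- the state is normalised
```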
Wherever I look, I see only that they interfere with each other’s amplitudes (thus also with probabilities)
That’s what I meant. And it’s bad enough. It means that when an observer goes into a coherent superposition with themselves, they are not split into two observers who are unaware of each other’s existence, as required by the standard many-worlds account of observation, i.e. there is not even an appearance of collapse. It’s important to notice that amplitudes aren’t just the probabilities of an eventual sharp, classical-style observation; they also define the interference between the components of a superposed state, and the evolution of a coherently superposed state depends on the interference between all its components... you can’t treat them individually and separately. (You can with decoherent branches, and that’s one of the differences.)
If so, I’m going to drop “coherent superpositions” altogether.
The overall argument is that branching is either coherent or decoherent, and both are flawed. I’ve explained the problems with coherence. To restate the problems with decoherence:
There is no empirical evidence of decoherence causing multiple branches.
There could be a theoretical justification for decoherent branching, but there currently isn’t. Decoherence could be a single-world phenomenon.
And it isn’t clearly simpler, since the mechanism isn’t fully understood.
The Yudkowsky-Deutsch claim is that there is a single MW theory, which explains everything that needs explaining, and is obviously simpler than its rivals. But coherence doesn’t save appearances, and decoherence, while more workable, is not simple.
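The distinction being drawn here (components of a coherent superposition interfere; decoherent branches can be treated separately) can be illustrated with a minimal single-qubit sketch. This is my own toy example, not anything from the thread: the measurement statistics of a coherent superposition depend on the relative phase between its components, while a fully decohered mixture’s do not.

```python
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)  # the |+> basis state

def p_plus_coherent(phi):
    # Coherent superposition (|0> + e^{i*phi}|1>)/sqrt(2):
    # the outcome probability depends on the relative phase phi,
    # because the two components interfere.
    psi = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)
    return abs(np.vdot(plus, psi)) ** 2

def p_plus_decohered():
    # Fully decohered mixture rho = (|0><0| + |1><1|)/2:
    # the phase information is gone, each branch contributes
    # independently, and the probability is fixed at 1/2.
    rho = np.diag([0.5, 0.5])
    return float(np.real(plus.conj() @ rho @ plus))

print(p_plus_coherent(0.0))    # 1.0 -- constructive interference
print(p_plus_coherent(np.pi))  # ~0.0 -- destructive interference
print(p_plus_decohered())      # 0.5 -- no interference at all
```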
If we are talking to each other, we are not in decoherent branches.
If I am entangled with your eight possible observations, then I am in a quantum superposition with myself, and my previous objection applies. Why is the forgetting important? I can’t forget a single definite observation, because I haven’t made one... you haven’t introduced any collapse or decoherence. Or are you saying that remembering is decoherence?
Or are you saying that branching is subjective? Well, it is, for coherent “branching”.
If I am entangled with your eight possible observations, then I am in a quantum superposition with myself, and my previous objection applies. Why is the forgetting important? I can’t forget a single definite observation, because I haven’t made one... you haven’t introduced any collapse or decoherence. Or are you saying that remembering is decoherence?
Yes, that is a superposition! I don’t think it can leave any “ghostly traces”—there are billions or more particles which moved differently (at least several angstroms away) depending on which exact number you see in my comment, and interference—even at the amplitude level—fades. Presumably exponentially.
Remembering has to do with how many particles have different places now, and thus determines the amount of decoherence; if the random number I generated were entirely forgotten (that is, no conscious, unconscious, or any other visual memory), and produced no effects larger than thermal noise, I argue we would be exactly back to step 3, when I had not yet named the result.
Decoherence could be a single world phenomenon.
What would happen to decoherent branches which had their distinguishing features “evolve out to nothing” (become very close to “our” branch)? They will come back and influence the amplitudes and probabilities we observe once again.
Or are you saying that branching is subjective?
Not particularly. Though it is very convenient when the wavefunction factors into several world parts—say, it would be very strange if my generating a random number would influence a LW non-reader—in that sense they can subjectively just not consider that I had done anything.
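The “interference fades, presumably exponentially” intuition above can be given a back-of-the-envelope form. In standard decoherence models, each displaced particle records partial which-branch information, and the interference terms get multiplied by the overlap of the two environment states; for N roughly independent particles that overlap shrinks like c to the power N. The following is my own illustrative sketch under that independence assumption, not a derivation from the thread:

```python
# Toy decoherence model: if the two conditional states of a single
# environment particle overlap by c (with |c| < 1), then for N
# independent particles the total overlap -- the factor multiplying
# the interference terms -- is c**N: exponential fading in N.
def interference_visibility(c, n_particles):
    return abs(c) ** n_particles

for n in [1, 10, 100, 1000]:
    # Even a per-particle overlap as high as 0.99 suppresses
    # interference to practically nothing within ~1000 particles.
    print(n, interference_visibility(0.99, n))
```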
Yes, that is a superposition! I don’t think it can leave any “ghostly traces”—there are billions or more particles which moved differently (at least several angstroms away) depending on which exact number you see in my comment, and interference—even at the amplitude level—fades. Presumably exponentially.
Ok. That’s an argument for decoherence... but not an argument for multi-branch decoherence.
And minor remnants can be left, so long as they stay minor... they are not going to affect observation much. (Although the reasons for that could be cosmological... an expanding universe with slightly negative curvature is the ideal way to get rid of unwanted information.)
What would happen to decoherent branches which had their distinguishing features “evolve out to nothing” (become very close to “our” branch)? They will come back and influence the amplitudes and probabilities we observe once again.
But not much? There’s an argument that decoherent splitting can’t be completely irrevocable, because it emerges from time-reversible microphysics... but there’s also an argument that the “losing” branches spread out, and become so thinly distributed in the overall mishmash that they can no longer matter in practice... and for macrophysical reasons.
Though it is very convenient when the wavefunction factors into several world parts—say, it would be very strange if my generating a random number would influence a LW non-reader—in that sense they can subjectively just not consider that I had done anything.
That’s exactly how decoherent branching works... if it works. It’s not a causal process that leaves causal traces.
but there’s also an argument that the “losing” branches spread out
They don’t spread much faster compared to “winning” branches, I guess? The world has no particular dependence on what random number I generated above, so all the splits and merges have approximately the same shape in each of the eight branch regions.
That’s exactly how decoherent branching works... if it works. It’s not a causal process that leaves causal traces.
With a remark that “decoherent branching” and “coherent branching” are presumably just one process, differing in how much the information is contained or spreads out, and noting that should LW erase the random number from my comment above, plus each of us totally forget it, the branches would approximately merge,
yes, I agree. The contents of worlds in those branches do not causally interact with us, but the amplitudes might at some point in the future. AFAIK Eliezer referenced the latter while assigning the label “real” to each and every world (each point of the wavefunction).
They don’t spread much faster compared to “winning” branches I guess
They don’t spread faster, they spread wider. Their low-amplitude information is smeared over an environment already containing a lot of other low-amplitude information—noise, in effect. So the chances of recovering it are zero for all practical purposes.
With a remark that “decoherent branching” and “coherent branching” are presumably just one process differing in how much the information is contained or spreads out
Well, no. In a typical measurement, a single particle interacts with an apparatus containing trillions, and that brings about decoherence very quickly—so quickly it can appear like collapse. Decoherent branches, being macroscopic, stable and irreversible for all practical purposes, are the opposite of coherent ones.
It’s actually not; his commitment to MWI is rooted more in his ignorance of QM than in any metaphysical commitments. Plenty of people with actual degrees in the field disagree on what the right interpretation is—even today we still don’t know. Metaphysics doesn’t play into it; MWI is just one way to explain the math, but it has its shortcomings.
Aumann’s agreement theorem doesn’t factor in here. The simpler explanation is that EY doesn’t understand QM, which is why he assumes Many Worlds is the only one. In fact he’s often extremely confident (and stubborn) about things he not only doesn’t understand but is provably wrong about.
The truth about QM is that no one really understands what’s going on there.
Yes rational people can choose a different interpretation of QM, but they probably need to make other metaphysical choices to match in order to maintain consistency.
However, Robin Hanson has presented an argument that Bayesians who agree about the processes that gave rise to their priors (e.g., genetic and environmental influences) should, if they adhere to a certain pre-rationality condition, have common priors.
The metaphysical commitment necessary is weaker than it looks.
This theorem (valuable though it may be) strikes me as one of the most easily abused things ever. I think Ayn Rand would have liked it: if you don’t agree with me, you’re not as committed to Reason as I am.
I believe he’s saying that rational people should agree on metaphysics (or probability distributions over different systems). In other words, to disagree about MWI, you need to dispute EY’s chain of reasoning metaphysics -> evidence -> MWI, which Perplexed says is difficult, or dispute EY’s metaphysical commitments, which Perplexed implies is relatively easier.
Presumably you wouldn’t say this of actual physicists who believe in MWI?
MWI is just one of many interpretations. I might not say it of actual physicists who believe it, but if EY says it, then one can ignore it.
The same way one can ignore him on AI and most things.
But as regards which interpretation is correct, no one knows for sure, and we may never know.
Aumann’s agreement theorem.
assumes common priors, i.e., a common metaphysical commitment.
That’s interesting. The only problem now is to find a rational person to try it out on.
Except that isn’t what I said.
If MWI is wrong, I want to believe that MWI is wrong. If MWI is right, I want to believe MWI is right.
He still shouldn’t be stating it as a fact when it’s based on “commitments”.