# If Many-Worlds Had Come First

*Not that I’m claiming I could have done better, if I’d been born into that time, instead of this one…*

Macroscopic decoherence, a.k.a. many-worlds, was first proposed in a 1957 paper by Hugh Everett III. The paper was ignored. John Wheeler told Everett to see Niels Bohr. Bohr didn’t take him seriously.

Crushed, Everett left academic physics, invented the general use of Lagrange multipliers in optimization problems, and became a multimillionaire.

It wasn’t until 1970, when Bryce DeWitt (who coined the term “many-worlds”) wrote an article for *Physics Today*, that the general field was first informed of Everett’s ideas. Macroscopic decoherence has been gaining advocates ever since, and may now be the majority viewpoint (or not).

But suppose that decoherence and macroscopic decoherence had been realized immediately following the discovery of entanglement, in the 1920s. And suppose that no one had proposed collapse theories until 1957. Would decoherence now be steadily declining in popularity, while collapse theories were slowly gaining steam?

Imagine an alternate Earth, where the very first physicist to discover entanglement and superposition said, “Holy flaming monkeys, there’s a zillion other Earths out there!”

In the years since, many hypotheses have been proposed to explain the mysterious Born probabilities. But no one has *yet* suggested a collapse postulate. That possibility simply has not occurred to anyone.

One day, Huve Erett walks into the office of Biels Nohr…

“I just don’t understand,” Huve Erett said, “why no one in physics even seems *interested* in my hypothesis. Aren’t the Born statistics the greatest puzzle in modern quantum theory?”

Biels Nohr sighed. Ordinarily, he wouldn’t even bother, but something about the young man compelled him to try.

“Huve,” says Nohr, “every physicist meets dozens of people per year who think they’ve explained the Born statistics. If you go to a party and tell someone you’re a physicist, chances are at least one in ten they’ve got a new explanation for the Born statistics. It’s one of the most famous problems in modern science, and worse, it’s a problem that everyone thinks they can understand. To get attention, a new Born hypothesis has to be… pretty darn good.”

“And *this*,” Huve says, “*this* isn’t *good?*”

Huve gestures to the paper he’d brought to Biels Nohr. It is a short paper. The title reads, “The Solution to the Born Problem.” The body of the paper reads:

> When you perform a measurement on a quantum system, all parts of the wavefunction except one point vanish, with the survivor chosen non-deterministically in a way determined by the Born statistics.

“Let me make absolutely sure,” Nohr says carefully, “that I understand you. You’re saying that we’ve got this wavefunction—evolving according to the Wheeler-DeWitt equation—and, all of a sudden, the whole wavefunction, except for one part, just spontaneously goes to zero amplitude. Everywhere at once. This happens when, way up at the macroscopic level, we ‘measure’ something.”

“Right!” Huve says.

“So the wavefunction knows when we ‘measure’ it. What exactly is a ‘measurement’? How does the wavefunction know we’re here? What happened before humans were around to measure things?”

“Um…” Huve thinks for a moment. Then he reaches out for the paper, scratches out “When you perform a measurement on a quantum system,” and writes in, “When a quantum superposition gets too large.”

Huve looks up brightly. “Fixed!”

“I see,” says Nohr. “And how large is ‘too large’?”

“At the 50-micron level, maybe,” Huve says, “I hear they haven’t tested that yet.”

Suddenly a student sticks his head into the room. “Hey, did you hear? They just verified superposition at the 50-micron level.”

“Oh,” says Huve, “um, whichever level, then. Whatever makes the experimental results come out right.”

Nohr grimaces. “Look, young man, the truth here isn’t going to be comfortable. Can you hear me out on this?”

“Yes,” Huve says, “I just want to know why physicists won’t listen to me.”

“All right,” says Nohr. He sighs. “Look, if this theory of yours were actually true—if whole sections of the wavefunction just instantaneously vanished—it would be… let’s see. The only law in all of quantum mechanics that is non-linear, non-unitary, non-differentiable and discontinuous. It would prevent physics from evolving locally, with each piece only looking at its immediate neighbors. Your ‘collapse’ would be the only fundamental phenomenon in all of physics with a preferred basis and a preferred space of simultaneity. Collapse would be the only phenomenon in all of physics that violates CPT symmetry, Liouville’s Theorem, and Special Relativity. In your original version, collapse would also have been the only phenomenon in all of physics that was inherently mental. Have I left anything out?”

“Collapse is also the only acausal phenomenon,” Huve points out. “Doesn’t that make the theory more wonderful and amazing?”

“I think, Huve,” says Nohr, “that physicists may view the exceptionalism of your theory as a point not in its favor.”

“Oh,” said Huve, taken aback. “Well, I think I can fix that non-differentiability thing by postulating a second-order term in the—”

“Huve,” says Nohr, “I don’t think you’re getting my point, here. The reason physicists aren’t paying attention to you, is that your theory isn’t physics. It’s magic.”

“But the Born statistics are the greatest puzzle of modern physics, and this theory provides a mechanism for the Born statistics!” Huve protests.

“No, Huve, it doesn’t,” Nohr says wearily. “That’s like saying that you’ve ‘provided a mechanism’ for electromagnetism by saying that there are little angels pushing the charged particles around in accordance with Maxwell’s equations. Instead of saying, ‘Here are Maxwell’s equations, which tell the angels where to push the electrons,’ we just say, ‘Here are Maxwell’s equations’ and are left with a strictly simpler theory. Now, we don’t know *why* the Born statistics happen. But you haven’t given the slightest reason why your ‘collapse postulate’ should eliminate worlds in accordance with the Born statistics, rather than something else. You’re not even making use of the fact that quantum evolution is unitary—”

“That’s because it’s not,” interjects Huve.

“—which everyone pretty much knows has got to be the key to the Born statistics, somehow. Instead you’re merely saying, ‘Here are the Born statistics, which tell the collapser how to eliminate worlds,’ and it’s strictly simpler to just say ‘Here are the Born statistics.’ ”

“But—” says Huve.

“*Also*,” says Nohr, raising his voice, “you’ve given no justification for why there’s only *one* surviving world left by the collapse, or why the collapse happens before any *humans* get superposed, which makes your theory *really suspicious* to a modern physicist. This is exactly the sort of untestable hypothesis that the ‘One Christ’ crowd uses to argue that we should ‘teach the controversy’ when we tell high school students about other Earths.”

“I’m not a One-Christer!” protests Huve.

“Fine,” Nohr says, “then *why* do you just assume there’s only one world left? And that’s not the only problem with your theory. Which part of the wavefunction gets eliminated, exactly? And in which basis? It’s clear that the whole wavefunction isn’t being compressed down to a delta, or ordinary quantum computers couldn’t stay in superposition when any collapse occurred anywhere—heck, ordinary molecular chemistry might start failing—”

Huve quickly crosses out “one point” on his paper, writes in “one part,” and then says, “Collapse doesn’t compress the wavefunction down to one point. It eliminates all the amplitude *except* one world, but leaves *all* the amplitude in that world.”

“Why?” says Nohr. “In principle, once you postulate ‘collapse,’ then ‘collapse’ could eliminate any part of the wavefunction, anywhere—why just one neat world left? Does the collapser *know we’re in here?*”

Huve says, “It leaves one whole world because that’s what fits our experiments.”

“Huve,” Nohr says patiently, “the term for that is ‘post hoc.’ Furthermore, decoherence is a continuous process. If you partition by whole brains with distinct neurons firing, the partitions have almost zero mutual interference within the wavefunction. But plenty of other processes overlap a great deal. There’s no possible way you can point to ‘one world’ and eliminate everything else without making completely arbitrary choices, including an arbitrary choice of basis—”

“But—” Huve says.

“And *above all*,” Nohr says, “the *reason* you can’t tell me which part of the wavefunction vanishes, or exactly when it happens, or exactly what triggers it, is that if we did adopt this theory of yours, it would be *the only informally specified, qualitative fundamental law* taught in all of physics. Soon no two physicists anywhere would agree on the exact details! Why? Because it would be the *only fundamental law in all of modern physics that was believed without experimental evidence to nail down exactly how it worked*.”

“What, really?” says Huve. “I thought a lot of physics was more informal than that. I mean, weren’t you just talking about how it’s impossible to point to ‘one world’?”

“That’s because worlds aren’t *fundamental*, Huve! We have massive experimental evidence underpinning the fundamental law, the Wheeler-DeWitt equation, that we use to describe the evolution of the wavefunction. We just apply exactly the same equation to get our description of macroscopic decoherence. But for difficulties of calculation, the equation would, in principle, tell us *exactly* when macroscopic decoherence occurred. We don’t know where the Born statistics come from, but we have massive evidence for what the Born statistics *are*. But when I ask you when, or where, collapse occurs, you don’t know—*because there’s no experimental evidence whatsoever to pin it down*. Huve, even if this ‘collapse postulate’ worked the way you say it does, *there’s no possible way you could* know *it!* Why not a gazillion other equally magical possibilities?”

Huve raises his hands defensively. “I’m not saying my theory should be taught in the universities as accepted truth! I just want it experimentally tested! Is that so wrong?”

“You haven’t specified when collapse happens, so I can’t construct a test that falsifies your theory,” says Nohr. “Now with that said, we’re already looking experimentally for any part of the quantum laws that change at increasingly macroscopic levels. Both on general principles, in case there’s something in the 20th decimal point that only shows up in macroscopic systems, and also in the hopes we’ll discover something that sheds light on the Born statistics. We check decoherence times as a matter of course. But we keep a *broad* outlook on what might be different. Nobody’s going to privilege your non-linear, non-unitary, non-differentiable, non-local, non-CPT-symmetric, non-relativistic, a-frikkin’-causal, faster-than-light, *in-bloody-formal* ‘collapse’ when it comes to looking for clues. Not until they see absolutely unmistakable evidence. And believe me, Huve, it’s going to take a hell of a lot of evidence to unmistake *this*. Even if we did find anomalous decoherence times, and I don’t think we will, it wouldn’t force your ‘collapse’ as the explanation.”

“What?” says Huve. “Why not?”

“Because there’s got to be a billion more explanations that are more plausible than violating Special Relativity,” says Nohr. “Do you realize that if this really happened, there would only be a *single* outcome when you measured a photon’s polarization? Measuring one photon in an entangled pair would influence the other photon a light-year away. Einstein would have a heart attack.”

“It doesn’t *really* violate Special Relativity,” says Huve. “The collapse occurs in exactly the right way to prevent you from ever actually *detecting* the faster-than-light influence.”

“That’s not a point in your theory’s favor,” says Nohr. “Also, Einstein would still have a heart attack.”

“Oh,” says Huve. “Well, we’ll say that the relevant aspects of the particle *don’t exist* until the collapse occurs. If something doesn’t exist, influencing it doesn’t violate Special Relativity—”

“You’re just digging yourself deeper. Look, Huve, as a general principle, theories that are actually *correct* don’t generate this level of confusion. But above all, there isn’t any evidence for it. You have no logical way of knowing that collapse occurs, and no reason to believe it. You made a mistake. Just say ‘oops’ and get on with your life.”

“But they *could* find the evidence someday,” says Huve.

“I can’t think of what evidence could determine *this particular* one-world hypothesis as an explanation, but in any case, right now we *haven’t* found any such evidence,” says Nohr. “We haven’t found anything even vaguely suggestive of it! You can’t update on evidence that could theoretically arrive someday but hasn’t arrived! Right now, today, there’s no reason to spend valuable time thinking about this rather than a billion other equally magical theories. There’s absolutely nothing that justifies your belief in ‘collapse theory’ any *more* than believing that someday we’ll learn to transmit faster-than-light messages by tapping into the acausal effects of praying to the Flying Spaghetti Monster!”

Huve draws himself up with wounded dignity. “You know, if my theory is wrong—and I do admit it might be wrong—”

“*If?*” says Nohr. “*Might?*”

“If, I say, my theory is wrong,” Huve continues, “then somewhere out there is another world where *I* am the famous physicist and *you* are the lone outcast!”

Nohr buries his head in his hands. “Oh, not this again. Haven’t you heard the saying, ‘Live in your own world’? And *you* of all people—”

“Somewhere out there is a world where the vast majority of physicists believe in collapse theory, and no one has even *suggested* macroscopic decoherence over the last thirty years!”

Nohr raises his head, and begins to laugh.

“What’s so funny?” Huve says suspiciously.

Nohr just laughs harder. “Oh, my! Oh, my! You really think, Huve, that there’s a world out there where they’ve known about quantum physics for thirty years, and nobody has even *thought* there might be more than one world?”

“Yes,” Huve says, “that’s exactly what I think.”

“Oh my! So you’re saying, Huve, that physicists detect superposition in microscopic systems, and work out quantitative equations that govern superposition in every single instance they can test. And for thirty years, not *one person* says, ‘Hey, I wonder if these laws happen to be universal.’ ”

“Why should they?” says Huve. “Physical models sometimes turn out to be wrong when you examine new regimes.”

“But to not even *think* of it?” Nohr says incredulously. “You see apples falling, work out the law of gravity for all the planets in the solar system except Jupiter, and it doesn’t even *occur* to you to apply it to Jupiter because Jupiter is too large? That’s like, like some kind of comedy routine where the guy opens a box, and it contains a spring-loaded pie, so the guy opens another box, and it contains another spring-loaded pie, and the guy just keeps doing this without even *thinking* of the possibility that the next box contains a pie too. You think John von Neumann, who may have been the highest-*g* human in history, wouldn’t think of it?”

“That’s right,” Huve says, “He wouldn’t. Ponder that.”

“This is the world where my good friend Ernest formulates his Schrödinger’s Cat thought experiment, and in this world, the thought experiment goes: ‘Hey, suppose we have a radioactive particle that enters a superposition of decaying and not decaying. Then the particle interacts with a sensor, and the sensor goes into a superposition of going off and not going off. The sensor interacts with an explosive, that goes into a superposition of exploding and not exploding; which interacts with the cat, so the cat goes into a superposition of being alive and dead. Then a human looks at the cat,’ and at this point Schrödinger stops, and goes, ‘gee, I just can’t imagine what could happen next.’ So Schrödinger shows this to everyone else, and they’re also like ‘Wow, I got no idea what could happen at this point, what an amazing paradox.’ Until finally *you* hear about it, and you’re like, ‘hey, maybe at *that* point half of the superposition just vanishes, at random, faster than light,’ and everyone else is like, ‘Wow, what a great idea!’ ”

“That’s right,” Huve says again. “It’s got to have happened somewhere.”

“Huve, this is a world where every single physicist, and probably the whole damn human species, is too dumb to sign up for cryonics! We’re talking about the Earth where George W. Bush is President.”


Or both at the same time?

I hope the following isn’t completely off-topic:

What exactly does a hypothetical scenario where “person X was born Y years earlier” even look like? I could see a somewhat plausible interpretation of that description in periods of extremely slow scientific and technological progress, but the twentieth century doesn’t qualify. In the 1920s:

1. The concept of a Turing machine hadn’t been formulated yet.
2. There were no electronic computers.
3. ARPANET wasn’t even an idea yet, and wouldn’t be for decades.
4. Television was a novelty, years away from being used by a significant number of people.
5. WW1 was recent history.

Two persons with the same DNA and, except for results of global changes, very similar local environments during their childhood, would most likely turn into completely different adult humans if one of them was born in the 1920s and the other at some point in the last 30 years (roughly chosen to guarantee exposure to the idea of the internet as a teenager), and they both grew up in industrialized countries. The scientific and technological level one is born into is critical for mind development. What does it mean to consider a hypothetical world where a specific person was born into an environment very different in those respects? Why is this worth thinking about?

What if it had seemed that there was no way to get the Born rule with just simple decoherence—what if that seemed to clearly imply a uniform probability rule? Would the random collapse view seem more plausible then?

> What if it had seemed that there was no way to get the Born rule with just simple decoherence—what if that seemed to clearly imply a uniform probability rule? Would the random collapse view seem more plausible then?

No. Eight strikes and it’s out. There is no possible reason for adopting a theory that unphysical, or even spending more than thirty seconds thinking about it, without crushingly unmistakable experimental evidence that nails it down.

If you’re postulating new fundamental physics, things that don’t show up microscopically but do show up macroscopically, to explain the Born statistics, there would be a hundred better possibilities that *don’t* violate Special Relativity. One thing you’re currently having trouble explaining is not an excuse to import magic out of nowhere and say, “Oh, *that* must be the explanation.” Doesn’t work for intelligent design and it doesn’t work for collapse either.

If the Born rule comes from decoherence, and if decoherence comes from the SWE, then the Born rule comes from what you would class as acceptable physics. In fact, since the Born rule is part of what makes QM work, any MWI-type theory must justify it. You replied as though you read “the Born rule” as “mysterious nonlocal collapse process”. The Born rule is just a piece of maths.
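To make that last point concrete, here is a generic numerical illustration (not from the thread; the state and amplitudes are made up for the example): the Born rule, as a piece of maths, just sends each amplitude to its squared modulus.

```python
import numpy as np

# A qubit state written in the measurement basis: amplitudes for |up>, |down>.
# These particular numbers are arbitrary, chosen only for the illustration.
amplitudes = np.array([3 + 4j, 0 - 2j])
amplitudes = amplitudes / np.linalg.norm(amplitudes)  # normalize the state

# Born rule: the probability of each outcome is the squared modulus
# of its amplitude.
probs = np.abs(amplitudes) ** 2
print(probs)        # [25/29, 4/29] ~ [0.862, 0.138]
print(probs.sum())  # 1 (up to floating point), as probabilities must
```

Nothing in this calculation says *why* the squared modulus is the right map from amplitudes to observed frequencies; that "why" is exactly the open question both interpretations face.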

If MWI has no observable consequences, does it matter other than as a point of principle? Or are you going to get to ethical consequences, like the spaceship that doesn’t disappear when it passes the horizon?

I’m surprised by the last sentence. Politics is the mind-killer, and all that.

Correct me if I am wrong, but MWI does have noticeable consequences, or at least implications: for example, interference at all length-scales and proper evaluation of the waveform equations implying the Born probabilities. Neither of these is implicit in the Copenhagen interpretation—in fact, the first is contradicted.

03:16 was me—curse you, Typepad!

If there really are consequences of one of the hypotheses that differ from the consequences of others, that is extremely important to know.

I don’t see how decoherence is an automatic win for MWI. Decoherence has been used in several different interpretations of quantum mechanics, notably in consistent histories and in certain hidden variable interpretations. Why should we choose MWI before those, particularly since it seems less parsimonious than consistent histories? For that matter, the language of Rovelli and Smolin’s relational quantum mechanics very nearly turns decoherence into its own interpretation (if you compare papers on decoherence which shirk the metaphysical interpretation to the interpretation put forward by Rovelli, they’re almost identical). Relational quantum mechanics requires much less in the way of grand assertions than MWI and is a natural framework for decoherence, so why pick MWI over relational quantum mechanics?

As far as I can tell, the only possible coherent state of affairs corresponding to RQM—the only reality in which you can embed these systems relating to each other—is MWI. To this is added some bad amateur incoherent epistemology trying to dance around the issue without addressing it.

You can quote me on the following:

RQM is MWI in denial.

Any time you might uncharitably get the impression that RQM is merely playing semantic word-games with the notion of reality, RQM is, in fact, merely playing semantic word-games with the notion of reality.

RQM’s epistemology is drunk and needs to go home and sleep it off.

Some people consider it a good form to back up your accusations with examples, facts and proofs, even when discussing topics dear to their hearts. Give it a try some time.

Okay. Name a state of affairs that could correspond to RQM without being MWI.

PS: Whenever you say that something is ‘true relative to’ B, please replace it with a state of affairs and a description of B’s truth-predicate over possible states of affairs.

First, the onus is on you to show that the above is both relevant to your claim of “bad amateur incoherent epistemology” and that there is no such state of affairs, since it’s your claim that RQM is just a word game.

But, to indulge you, here is one example:

Whereas in MWI, unless I misunderstand it, each interaction (after the decoherence has run its course) irrevocably splits the world into “eigenworlds” of the interaction, and there is no observer for which the world is as yet unsplit.

P.S. Just to make it clear, I’m not an adherent of RQM, not until and unless it gives new testable predictions not available without it. Same applies to all other interpretations. I’m simply pointing out that MWI is not the only game in town.

So in MWI, this presumably arises when e.g. you’ve got 3 possible states of X, and version A of you decoheres with state 1 while version B is entangled with the superposition of 2+3. In RQM this is presumably described sagely as X being definitely-1 relative to A while X is 2+3 relative to B. Then if you ask them whether or not this statement itself is a true, objective state of affairs (where a ‘yes’ answer immediately yields MWI) there’s a bunch of hemming and hawing.

Ignoring your unhelpful sarcastic derision… You should know better, really.

Take an EPR experiment with spatially separated observers A and B. If A measures a state of a singlet and the world is split into Aup and Adown, when does B split in this world, according to MWI?

In RQM, it does not until it measures its own half of the singlet, which can be before or after A in a given frame. B’s model of A is a superposition until A and B meet up and compare results (another interaction). The outcome depends on whether A actually measured anything and, if so, in which basis. None of this is known until A and B interact.

I know I’m late to the party, but I couldn’t help but notice that this interesting question hadn’t been answered (here, at least). So here it is: as far as I know, B ‘splits’ immediately, but this is an unphysical question.

In MWI we would have observers A and B, who could observe Aup or Adown and Bup or Bdown respectively (and who start in |Aunknown> and |Bunknown> before measuring). If we write |PAup> and |PAdown> for the wavefunctions corresponding to the particle near observer A being in the up resp. down states, and introduce similar notation for the particle near observer B, then the initial configuration is:

|Aunknown> |Bunknown> (|PAup>|PBdown> − |PAdown>|PBup>) / √2

Now if we let person A measure the particle, the complete wavefunction changes to:

|Bunknown> (|Aup>|PAup>|PBdown> − |Adown>|PAdown>|PBup>) / √2

The important point is that this is a local change to the wavefunction: what happened here is merely that A measured the particle near A. Since observer A is a macroscopic object we would expect the two branches of the wavefunction above (separated by the minus sign) to be quite far apart in configuration space, so the worlds have definitely split here. But B still isn’t correlated to any particular branch: from the point of view of A, person B is now in a superposition. In particular observer B doesn’t notice anything from this splitting—as we would expect (splitting being a local process and observers A and B being far apart). This is also why I called the question as to when B splits ‘unphysical’ above, since it is a property known only locally at A, and in fact the answer to this question wouldn’t change any of B’s anticipations.
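The claim that B notices nothing can be checked numerically. The following is a toy sketch of my own (the three-level "memory" for observer A and the permutation "measurement" unitary are illustrative assumptions, not anything from the comment above): A's measurement is modelled as a unitary acting only on A's side, and particle B's reduced density matrix is verified to be unchanged by it.

```python
import numpy as np

# Observer A's "memory" is a 3-level system: |unknown>, |saw_up>, |saw_down>.
# The two particles are qubits. Joint ordering: memory (x) particle_A (x) particle_B.
up, down = np.eye(2)
unknown, saw_up, saw_down = np.eye(3)

# Singlet state of the particle pair: (|ud> - |du>) / sqrt(2)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Initial configuration: |A_unknown> (x) singlet
psi0 = np.kron(unknown, singlet)

# Model A's measurement as a unitary acting only on (memory (x) particle_A):
# |unknown, up> -> |saw_up, up>,  |unknown, down> -> |saw_down, down>.
# As a permutation of basis states this is manifestly unitary.
U6 = np.eye(6)                 # basis index = 2 * memory + particle_A
for i, j in [(0, 2), (1, 5)]:  # the two swaps that correlate memory with particle_A
    U6[[i, j]] = U6[[j, i]]
U = np.kron(U6, np.eye(2))     # identity on the distant particle_B

psi1 = U @ psi0                # state after A's purely local measurement


def rho_B(psi):
    """Reduced density matrix of particle B (trace out memory and particle_A)."""
    M = psi.reshape(6, 2)      # rows: (memory, particle_A); columns: particle_B
    return M.T @ M.conj()


# B's local state is completely unchanged by A's measurement: no signalling,
# and nothing for B to "notice" about whether or when the worlds split.
print(np.allclose(rho_B(psi0), rho_B(psi1)))    # True
print(np.allclose(rho_B(psi1), np.eye(2) / 2))  # True: maximally mixed either way
```

The second check shows B's local state is the maximally mixed state both before and after A measures, which is exactly the sense in which "when does B split?" has no locally observable answer.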

This might seem a lot like RQM, and that is because RQM happens to get the answer to this question right. The problem with RQM (at least, the problem I ran into while reading the paper) was that the author claims that measurements are ontologically fundamental, and wavefunctions are only a mathematical tool. This seems to confuse the map with the territory: if wavefunctions are only part of our maps, what are they maps of? Also, if wavefunctions aren’t part of the territory, an explanation is needed for the observation that different observers can get the same results when measuring a system, i.e. an explanation is needed for the fact that all observations are consistent. It seems unnecessarily complicated to demand that wavefunctions aren’t real, and then separately explain why all observations are consistent as they would have been if the wavefunction were real. I think this is what Eliezer might have meant with his remark that “RQM is MWI in denial.”

RQM seems to assert precisely what MWI asserts, except that it denies the existence of objective reality, and then needs a completely new and different explanation for the consistency between measurements by different observers. I found the insults hurled at RQM by Eliezer disrespectful but, on close inspection, well-deserved. Denying reality doesn’t seem like a good property for a theory of physics to have.

Denying reality, and denying the reality of the WF, aren’t the same thing.

Suppose RQM is only doing the latter. Then, if you have observers who are observing a consistent objective reality, and mapping it accurately with WFs, their maps will agree. But that doesn’t mean the terrain has all the features of the map. Accuracy is a weaker condition than identity.

Consider an analogy with relativity. There is an objective terrain of objects with locations and momenta, but to represent it an observer must supply a coordinate system, which is not part of the territory.

I am starting to get confused by RQM, I really did not get the impression that this is what was claimed. But suppose it is.

To stick with the analogy of relativity, great efforts have been made there to ensure that all important physical formulas are Lorentz-invariant, i.e. do not depend on these artificial coordinate systems. In an important sense the system does not depend on your coordinates, although for actual calculations (on a computer or something) such coordinates are needed. So while (General) Relativity indeed satisfies the last line you gave, it also explains exactly how (un)necessary such coordinate systems are, and explains exactly what can be expected to be shown without choosing a coordinate system.

Back to RQM. Here this important explanation of which observables are still independent of the observer (/inertial frame) and which formulas are universal is painfully absent. It seems that RQM as stated above is more of an anti-prediction: we accept that each observer can accurately describe his experimental outcomes using QM, and different observers agree with each other because they are looking at the same territory, hence they should get matching maps; and finally we reject the idea that these observer-dependent representations can be combined into one global representation.

Again I struggle to combine this method of thought with the fact that humans themselves are made of atoms. If we assume that wavefunctions are only very useful tools for predicting the outcomes of experiments, but the actual territory is not made of something that would be accurately represented by a wavefunction, I run into two immediate problems:

1) In order to make this belief pay rent I would like to know what sort of thing an accurate description of the universe would look like, according to RQM. In other words, where should we begin searching for maps of a territory containing observers that make accurate maps with QM that cannot be combined to a global map?

2) What experiment could we do to distinguish between RQM and, for example, MWI? If indeed multiple observers automatically get agreeing QM maps by virtue of looking at the same territory, then what experiment will distinguish between a set of knitted-together QM maps and an RQM map as proposed by my first question? Mind you, such experiments might well exist (QM has trumped non-mathy philosophy without much trouble in the past), I just have a hard time thinking of one. And if there is no observable difference, then why would we favour RQM over the stitched-together map (which is claiming that QM is universal, which should make it simpler than having local partial QM with some other way of extending this beyond our observations)?

My apologies for creating such long replies; summarizing the above is hard. For what it’s worth I’d like to remark that your comment has made me update in favour of RQM by quite a bit (although I still find it unlikely) - before your comment I thought that RQM was some stubborn refusal to admit that QM might be universal, thereby violating Occam’s Razor, but when seen as an anti-prediction it seems sorta-plausible (although useless?).

By the way, your complaint here…

…is echoed by no less than Jaynes:

http://arxiv.org/abs/1206.6024

RQM may not end in an I, but it is still an interpretation.

What the I in MWI means is that it is an interpretation, not a theory, and therefore neither offers new mathematical apparatus, nor testable predictions.

Not exactly. RQM objects to observer-independent state. You can have global state, providing it is from the perspective of a Test Observer, and you can presumably stitch multiple maps into such a picture.

Or perhaps you mean that if you could write state in a manifestly basis-free way, you would no longer need to insist on an observer? I’m not sure. A lot of people are concerned about the apparent disappearance of the world in RQM. There seems to be a realistic and a non-realistic version of RQM. Rovelli’s version was not realistic, but some have added an ontology of relations.

It’s more of a “should not” than a “cannot”.

Well, we can’t distinguish between MWI and CI, either.

Just because something is called an ‘interpretation’ does not mean it doesn’t have testable predictions. For example, macroscopic superposition distinguishes between CI and MWI (although CI keeps changing its definition of ‘macroscopic’).

I notice that I am getting confused again. Is RQM trying to say that, via some unknown process, the universe produces results to measurements, and we use wavefunctions as something like an interpolation tool to account for those observations, but different observations lead to different inferences and hence to different wavefunctions?

There is nothing in Copenhagen that forbids macroscopic superposition. The experimental results of macroscopic superposition in SQUIDs are usually calculated in terms of Copenhagen (as are almost all experimental results).

That’s mainly because Copenhagen never specified “macroscopic”... but the idea of an unequivocal “cut” was at the back of a lot of Copenhagenists’ minds, and it has been eaten away by various things over the years.

So there are obviously a lot of different things you could mean by “Copenhagen” or “at the back of a lot of Copenhagenist minds”, but the way it’s usually used by physicists nowadays is to mean “the von Neumann axioms”, because that is what is in 90+% of the textbooks.

The von Neumann axioms aren’t self-interpreting.

Physicists are trained to understand things in terms of mathematical formalisms and experimental results, but that falls over when dealing with interpretation. Interpretations cannot be settled empirically, by definition, and formulae are not self-interpreting.

My point was only that nothing in the axioms prevents macroscopic superposition.

For some values of “wavefunction”, you are going to have different observers writing different wavefunctions just because they are using different bases... that’s a practical issue that’s still true if you believe in, but cannot access, the One True Basis, like a many-worlder.

How are you defining territory here? If the territory is ‘reality’ the only place where quantum mechanics connects to reality is when it tells us the outcome of measurements. We don’t observe the wavefunction directly, we measure observables.

I think the challenge of MWI is to make the probabilities a natural result of the theory, and there has been a fair amount of active research trying and failing to do this. RQM side steps this by saying “the observables are the thing, the wavefunction is just a map, not territory.”

See my reply to TheAncientGeek, I think it covers most of my thoughts on this matter. I don’t think that your second paragraph captures the difference between RQM and MWI—the probabilities seem to be just as arbitrary in RQM as they are in any other interpretation. RQM gets some points by saying “Of course it’s partially arbitrary, they’re just maps people made that overfit to reality!”, but it then fails to explain exactly which parts are overfitting, or where/if we would expect this process to go wrong.

To my very limited understanding, most of QM in general is completely unnatural as a theory from a purely mathematical point of view. If that is actually so, what precisely do you mean by “natural result of the theory”?

Actually most of it is quite natural. QM is the most obvious extension you get when you try to extend the concept of ‘probability’ to complex numbers, and there are some suggestions for why you would want to do this. I think the most famous/commonly found explanation is that we want ‘smooth’ operators: for example, if turning around is an operator, there should also be an operator describing ‘half of turning around’, and another for ‘1/3 of turning around’, etc., which for mathematical reasons immediately gives you complex numbers (try flipping a sign in two identical steps: each step amounts to multiplying by i).
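The “two identical steps that flip a sign” remark can be made concrete in a few lines. This is a toy sketch of my own, using Python’s built-in complex numbers: no real multiplier applied twice gives -1, but i does.

```python
# Toy sketch (not from the thread): "half of flipping a sign".
# No real number r satisfies r * r == -1, but the complex unit does:
# applying the same step twice multiplies the amplitude by -1.
def half_flip(amplitude):
    return 1j * amplitude

once = half_flip(1.0)    # purely imaginary intermediate value
twice = half_flip(once)  # the sign is now flipped: -1
```

This is exactly the sense in which demanding a “halfway” version of every operation forces amplitudes to be complex rather than real.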

To my best knowledge the question of why we use wavefunctions is a chicken-and-egg type question: we want square-integrable wavefunctions because those are the solutions of Schrödinger’s equation; we want Schrödinger’s equation because it is (almost) the most general evolution generated by a Hermitian Hamiltonian; the generator should be Hermitian because that is the only way to make the time evolution unitary; and unitarity should be preserved so that the two-norm of the wavefunction can be interpreted as a probability. We’ve made a full circle.
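One link in that circle, “Hermitian generator gives unitary evolution gives conserved two-norm”, can be illustrated with a diagonal Hamiltonian. A minimal toy of my own; the energies and state are made up:

```python
import cmath

# Toy sketch: H = diag(E1, E2) with real E1, E2 (Hermiticity) generates
# U(t) = exp(-iHt), which is unitary, so the two-norm of the state is
# conserved and |psi|^2 can keep its probability interpretation.
E1, E2 = 1.0, 2.5      # real eigenenergies (made up for illustration)
t = 0.7
psi = [0.6, 0.8j]      # normalized state: 0.36 + 0.64 = 1

U = [cmath.exp(-1j * E1 * t), cmath.exp(-1j * E2 * t)]  # diagonal of U(t)
psi_t = [u * c for u, c in zip(U, psi)]                 # evolved state

norm = sum(abs(c) ** 2 for c in psi_t)  # still 1, for any t
```

Make either energy complex (a non-Hermitian generator) and the norm drifts away from 1, which is the point where the probability interpretation breaks.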

As for your second question: I think a ‘natural part of the theory’ is something that Occam doesn’t frown upon - i.e. if the theory with the extra part takes a far shorter description than the description of the initial theory plus the description of the extra part. Informally, something is ‘a natural result of the theory’ if the description of the added result is already partly specified by the theory.

Again my apologies for writing such long answers to short questions.

Thank you, that was certainly insightful. I see now that it is some kind of natural extension of relevant concepts.

I have been told, however, that from a formal point of view a lot of QM (maybe they were talking only about QED) makes no sense whatsoever, and the only reason the theory works is that many of the objects coming up have been redefined so as to make the theory work. I don’t really know to what extent this is true, but if so I would still consider it a somewhat unnatural theory.

I’ve since decided not to argue about what is and isn’t in the territory, given that I no longer believe in the territory.

I confess I’m not quite clear on your question. Local processes proceed locally with invariant states of distant entanglement. Just suppose that the global wavefunction is an objective fact which entails all of RQM’s statements via the obvious truth-condition, and there you go.

I confess I’m not quite clear on your answer.

Not sure what this means, at least not past “local processes proceed locally”, which is certainly uncontroversial, if you mean to say that interaction is limited to light speed.

“an objective fact”? As in a map from something to C? If so, what is that something? Some branching multiverse? Or what do you mean by an objective fact?

You lost me here, sorry.

Tell me what the basis is, and where it comes from, and I will...

What’s B? A many-worlds counterpart of A? Another observer entirely?

In rQM, if one observer measures X to be in state 1, no other observer can disagree (how many times do I have to point that out?). But they can be uninformed as to what state it is, i.e. it is superposed for them.

By definition, interpretations don’t give testable predictions. Theories give testable predictions.

EDIT: having said that, rQM ontology, where information is in relations, not in relata, predicts a feature of the formalism—that when you combine Hilbert spaces, what you have is a product not a sum. That is important for understanding the advantages of quantum computation.

Definitions can be wrong.

I understand that a well-meaning physics professor may have once told you that. However, the various quantum mechanics interpretations do in fact presuppose different underlying mechanisms, and therefore result in different predictions in obscure corner cases. For example, reversible measurement of a quantum phenomenon results in different probabilities on the return path in many-worlds vs. the Copenhagen interpretation. (Unfortunately we lack the capability at this time to make a fully reversible experimental apparatus at this scale.)

A real testable difference between QM interpretations is a Nobel-worthy Big Deal. I doubt it will be coming.

Actually, Nobel does not begin to cover it, whether it would be awarded or not (even J.S. Bell didn’t get one, though he was nominated the year he died). Showing experimentally that, say, there is an objective collapse mechanism of some sort would probably be the biggest deal since the invention of QM.

And even just formally applying all the complexity stuff that is alluded to in the sequences, to the question of QM interpretation, would be a rather notable deed.

There are real testable differences:

http://www.hedweb.com/manworld.htm#unique

That page lists three ways in which MWI differs from the Copenhagen interpretation.

One has to do with further constraints that MWI puts on the grand unified theory: namely, that gravity must be quantized. If it turns out that gravity is not quantized, that would be strong evidence against the basic MWI explanation.

The second has to do with testable predictions which could be made if it turns out that linearity is violated. Linearity is highly verified, but perhaps it does break down at high energies, in which case it could be used to communicate between or simply observe other Everett branches.

Finally, there’s an actual testable prediction: make a reversible device to measure electron spin. Measure one axis to prepare the electron. Measure an orthogonal axis, then reverse that measurement. Finally measure again on the first axis. You’ve lost your recording of the 2nd measurement, but in Copenhagen the 1st and 3rd should agree 50% of the time by random chance, because there was an intermediate collapse, whereas in MWI they agree 100% of the time, because the physical process was fully reversed, bringing the branches back into coherence.

We just lack the capability to make such a device, unfortunately. But feel free to do so and win that Nobel prize.
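The experiment described above can be simulated directly. This is a toy two-qubit sketch of my own (electron tensored with a one-bit “apparatus”), not a description of a realizable device: in the unitary (MWI-style) picture, undoing the intermediate measurement restores perfect agreement, while inserting a collapse after the intermediate measurement leaves only 50% agreement.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard: swaps z- and x-bases
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])  # copy electron bit to apparatus
I2 = np.eye(2)

# "Reversible x-measurement": rotate the electron into the x basis, record
# its value on the apparatus qubit, rotate back. V is unitary (its own inverse).
V = np.kron(H, I2) @ CNOT @ np.kron(H, I2)

psi0 = np.kron([1.0, 0.0], [1.0, 0.0])  # electron prepared in |z+>, apparatus |0>

# Unitary (MWI-style) picture: measure, then exactly reverse the measurement.
psi = V.conj().T @ (V @ psi0)
p_unitary = abs(psi[0])**2 + abs(psi[1])**2  # P(final z-measurement gives +)

# Collapse-style comparison: project the apparatus onto |0> after V, then "reverse".
phi = V @ psi0
branch = phi.copy()
branch[1] = branch[3] = 0.0                  # apparatus collapsed to |0>
branch /= np.linalg.norm(branch)
phi_back = V.conj().T @ branch
p_collapse = abs(phi_back[0])**2 + abs(phi_back[1])**2
```

Here `p_unitary` comes out 1 (the branches are brought back into coherence) while `p_collapse` comes out 0.5, matching the 100% vs. 50% agreement claimed above. The hard part, of course, is doing the apparatus step reversibly in a real lab, which is the whole dispute in the replies below.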

But such a device is not physically realizable, as it would involve reversing the thermodynamic arrow of time.

? What aspect of measuring an electron’s spin is not reversible? Physics at this scale is entirely reversible.

You can reversibly entangle an electron’s spin to the state of some other small quantum system, that’s not questioned by any interpretation of QM, but unless this entanglement propagates to the point of producing a macroscopic effect, it is not considered a measurement.

It’s even worse than that. Zurek’s einselection relies on decoherence to get rid of non-eigenstates, and reversibility is necessarily lost in this (MWI-compatible) model of measurement. There is no size restriction, but the measurement apparatus (including the observer looking at it) must necessarily leak information to the environment to work as a detector. Thus a reversible computation would not be classically detectable.

Which is why the experiment as described in the link I provided requires an artificial intelligence running on a reversible computing substrate to perform the experiment in order to provide the macroscopic effect.

That is, it would require inverting the thermodynamic arrow of time.

If you define a measurement as the creation of a (FAPP) irreversible record... then, no.

Indeed. Truly reversing the measurement would involve also forgetting what the result of the measurement was, and Copenhagenists would claim this forgotten intermediate result does not count as a “measurement” in the sense of something that (supposedly) collapses the wave function.

Easy: no observer-independent state. No contradictory observations. No basis problem.

(Of course that isn’t an empirical expectation-predicting difference, and of course there is no reason it should be, since interpretations are not theories).

“Quantum state is in the territory” versus “state is just model”

“Universal quantum state is a coherent notion” versus “universal quantum state cannot be correctly defined”

“We need to get a universal basis from somewhere” versus “we don’t”

Etc, etc.

That is not a state of affairs, it is a list of questions you aren’t trying to answer. I am asking for a concrete description of how the universe could possibly be that would correspond to RQM being true and MWI being false.

It isn’t a list of questions, it is a list of assertions about the state of the universe made by rQM, paired with differing ones made by MWI. If you can spot the MWI ones, you can figure out the rQM ones. If you can’t, I’ll pull out the rQM ones:

There is no universal state.

There is no universal basis.

State is an observer’s map.

“Collapse” is receipt of information by an observer, not an objective process.

There is an ontology of relations.

Observers cannot disagree about information, but can have different levels of information.

“There is no universal state.” is barely an assertion about the state of the universe. Okay, there’s no “universal state”. What is there instead? I can’t write a simulation of a universe with “no universal state” without further information.

Are you having trouble understanding the published materials as well?

I am disappointed that this move was validated with compliance.

I thought it had enough justice to comply with.

To be fair, I should have pointed out what I meant, and I didn’t:

That’s three adjectives in a row with a negative connotation. In a reasonably rational discourse one would expect some comparative discussion of epistemology in both interpretations, pointing out the relative strengths and weaknesses of each.

This requires showing that RQM is a subset of MWI, so it’s a repetition of the original statement, only with some extra derision.

How would you phrase it in a neutral way?

That’s just insults, surely not the best way to get your point across.

To be fair, my reply had some of the same faults:

This was quite unfair of me. Most of your writings do have a good number of “examples, facts and proofs”, as well as eloquence and lucidity. The problem arises when you get annoyed or frustrated, which is only human.

No, I understood what you meant. Otherwise I wouldn’t have taken a shot at complying. Really RQM deserves its own post carefully dissecting it, but I may not have time to write it.

A very quick but sufficient refutation is that the same math taken as a description of an objectively existing causal process gives us MWI, hence there is no reason to complicate our epistemology beyond this to try to represent RQM, even if RQM could somehow be made coherent within a more complicated ontology that ascribed primitive descriptiveness to ideas like ‘true relative to’. MWI works, and RQM doesn’t add anything over MWI (not even Born probabilities).

rQM subtracts objective state and therefore does not have MWI’s basis problem.

I tend to agree with you. As I said before, to me RQM is to MWI what “shut up and calculate” is to Copenhagen. Unfortunately, I have a feeling that I am missing some important point Eliezer is making (he tends to make important points, in my experience). For example, in the statement

I do not understand where, in his opinion, RQM adds a complication to (what?) epistemology.

Instead of having causal processes which are real, we now need causal processes which are ‘real relative to’ other causal processes. To prevent the other worlds from being real enough to have people inside them, we need to insist very loudly that this whole diagram of what is ‘real relative to’ other things, is not itself real. I am not clear on how this loud insistence can be accomplished. Also, since only individual points in configuration space allow one particle to say that another particle is in an exact position and have this be ‘real’, if you take a blob of amplitude large enough to contain a person’s causal process, you will find that elements of a person disagree about what is real relative to them...

...and all these complications are just pointless; there’s no need for our ontology to have a notion like ‘real relative to’ instead of just talking about causes and effects. RQM doesn’t even get any closer to explaining the Born probabilities, so why bother? It’s exactly like a version of Special Relativity that insists on talking about ‘real lengths relative to’ instead of observer-invariant Minkowskian spacetime.

My best guess at the lack of agreement here is the difference between your ontology and mine at a rather basic level. Specifically, your ontology seems to be

whereas mine does not have “the thingy that determines my experimental results” and treats these results as primitive instead. As a consequence, everything is a model (“belief”), and good models predict experimental results better. So there is no need to use the term “real” except maybe as a shorthand for the territory in the map-territory model (which is an often-useful model, but only a model).

You can probably appreciate that this ontological difference makes statements like

where the term “real” is repeated multiple times, lose meaning if one only cares about making accurate models.

Now, I cannot rule out that your ontology is better than my ontology in some sense of the term “better” acceptable to me, but that would be a discussion to be had first, before going into the interpretational problems of Quantum Mechanics. I can certainly see how adopting your ontology of objective reality may lead one to dislike RQM, which evades pinning down what reality is in the RQM view. On the other hand, you can probably agree that removing objective reality from one’s ontology would make MWI an unnecessary addition to a perfectly good model called relational quantum mechanics.

This sounds like ‘shut up and calculate’ to me. After applying “shut up and calculate” to RQM the results are identical to the results of applying “shut up and calculate” to MWI, so there’s no reason to claim that you’re shutting up about RQM instead of shutting up about MWI or rather just shutting up about quantum mechanics in general, unless you’re not really shutting up. To put it another way, there is no such thing as shutting up about RQM or MWI, only shutting up about QM without any attempt to say what underlying state of affairs you are shutting up about.

If that’s not what you mean by denying that you intend to talk about a thingy that generates your experimental results and treating the results as primitive, please explain what that was supposed to say.

First, I think that we agree that ‘shut up and calculate’ reflects the current unfortunate state of affairs, where no other approach is more accurate despite nearly a century of trying. It postulates the Born rule (measurement results in projection onto an eigenstate), something each interpretation also postulates in one form or another, where the term “measurement” is generally understood as an interaction of a simple transparent (= quantum) system with a complex opaque (= classical) one. The term decoherence describes how this simple system becomes a part of the complex one it interacts with (and separates from it once the two stop interacting).
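The Born-rule postulate described above fits in a few lines. A toy two-level example of my own, not taken from any particular interpretation:

```python
import math

# Toy sketch: measuring in the {|0>, |1>} eigenbasis on psi = a|0> + b|1>
# yields outcome k with probability |psi_k|^2, after which the state is
# projected onto that eigenstate and renormalized.
psi = [0.6, 0.8j]                   # made-up normalized state
probs = [abs(c) ** 2 for c in psi]  # Born probabilities: 0.36 and 0.64

outcome = 1                         # suppose the measurement gave eigenvalue 1
post = [0.0, psi[1] / math.sqrt(probs[1])]  # projected, renormalized state
```

Every interpretation has to reproduce this projection rule one way or another; the disagreement in the thread is about what, if anything, physically underlies it.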

Now, I agree that

And indeed I’m not shutting up, because the quantum-classical transition is a mystery to be solved, in a sense that one can hopefully construct a more accurate model (one that predicts new experimental results, not available in “shut up and calculate”).

The question is, which are the more promising avenues to build such a model on. RQM suggests a minimal step one has to take, while MWI boldly goes much further, postulating an uncountable (unless limited by the Planck scale) number of invisible new worlds appearing all the time everywhere, without explaining the mysterious splitting process in its own ontology (how does world splitting propagate? how do two spacelike-separated splits interact?).

Now, I am willing to concede that some day some extension of MWI may give a useful new testable prediction and thus will stop being an ‘I’. My point is that, unless you postulate reality as ontologically fundamental, MWI is not the smallest increment in modeling the observed phenomenon of the quantum-classical transition.

No approach is ever more accurate than ‘shut up and calculate’. The ‘Shut up and calculate’ version of Special Relativity, wherein we claim that Minkowski’s equations give us classical lengths but refuse to speculate about how this mysterious transition from Minkowski intervals to classical lengths is achieved, is just as accurate as Special Relativity. It’s just, well, frankly in denial about how the undermining of your intuition of a classical length is not a good reason to stick your fingers in your ears and go “Nah nah nah I’m not listening” with respect to Minkowski’s equations representing physical reality, the way they actually do.

You believe this with respect to Special Relativity, and General Relativity, and every other “shut up and calculate” version of every physical theory from chemistry to nuclear engineering: that there’s no reason to shut up with respect to these other disciplines. I just believe it with respect to quantum mechanics too.

So do I, and have stated as much. Not sure where the misunderstanding is coming from.

You ought to, however, agree that QM is special: no other physical model has several dozens of interpretations, seriously discussed by physicists and philosophers alike. This is an undisputed experimental fact (about humans, not about QM).

What is so special about QM that inspires interpretations? Many other scientific models are just as counter-intuitive, yet there is little arguing about the underlying meaning of the equations in General Relativity (not anymore, anyway) or in any other model. To use your own meta-trick: what is so special about quantum theory (not about quantum reality, if you believe in such) that inspires people to search for interpretations? Maybe if we answer this reasonably easy cognitive-science question first, we can then proceed to productively discuss the merits of various interpretations.

Perhaps you mean the sheer quantity is so great. But there have been, and are, disputes about classical physics and relativity. Some of them have been resolved by just believing the theory and abandoning contrary intuitions. At one time, atoms were dismissed as a “mere calculational device”. Sound familiar?

Sure, every new theory is like that initially. But it only takes a short time for the experts to integrate the new weird ideas, like relative spacetime, or event horizons, or what have you. There is no agreement among the experts about the ontology of QM (beyond the undisputed assertion that head-in-the-sand “shut up and calculate” works just fine), and it’s been an unusually long time. Most agree that the wave function is, in some sense, “real”, but that’s as far as it goes. So the difference is qualitative, not just quantitative. Simply “trusting the SE” gives you nothing useful, as far as the measurement is concerned.

It doesn’t work “fine”, or at all, as an interpretation. It’s silent as to what it means.

There are slowly emerging themes, such as the uselessness of trying to recover classical physics at the fundamental level, and the importance of decoherence.

I don’t see what you mean by that. An interpretation that says “trust the SE” (I suppose you mean “reify the evolution of the WF according to the SE”) won’t give you anything results-wise, because it’s an interpretation.

Uh, no. It’s not an interpretation (i.e. “explanation”), it’s an explicit refusal to interpret the laws.

Anyway, time to disengage, we are not converging.

Yeah. Note also that if you are observing a probability distribution, that doesn’t imply that something computed the probability density function. E.g. if you observe random dots whose positions follow a Gaussian distribution, that could be a count of heads in a long string of coin tosses, rather than the Universe Machine really squaring some real number, negating the result, and calculating an exponent.
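That point is easy to demonstrate: a process that only ever counts coin flips produces samples matching the Gaussian density, even though nothing anywhere evaluates exp(-x²/2). A quick sketch of my own:

```python
import random

random.seed(0)  # deterministic for illustration

# Count heads in n fair coin tosses - no Gaussian formula anywhere.
def heads(n):
    return sum(random.randint(0, 1) for _ in range(n))

n, trials = 100, 5000
samples = [heads(n) for _ in range(trials)]

mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
# De Moivre-Laplace: the histogram approaches a Gaussian with
# mean n/2 = 50 and variance n/4 = 25, though nothing "computed" one.
```

So an observed Gaussian (or Born) statistic doesn’t tell you that the underlying machinery evaluates that density function; it may only tell you what the aggregate of many simpler events looks like.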

There’s certainly one obvious explanation which occurs to me. There being a copy of you in another universe seems more counterintuitive than needing to give up on measuring distances, so it’s getting more like the backlash and excuses that natural selection got, or that was wielded to preserve vitalism, as opposed to the case of Special Relativity. Also the simple answer seems to have been very hard to think of due to some wrong turns taken at the beginning, which would require a more complex account of human cognitive difficulty. But either way it doesn’t seem at all unnatural compared to backlash against the old Earth, natural selection, or other things that somebody thought was counterintuitive.

You need to realize that the “simple answer” isn’t so simple: no one has been able to use the axioms of many worlds to make an actual calculation of anything. By kicking away the Born amplitudes, they’ve kicked away the entire predictive structure of the theory. You are advocating that physicists give up the ability to make predictions!

It’s even worse when you go to quantum field theories and try to make many-worlds work: the bulk of the amplitude will be centered on “worlds” with undefined particle number.

You mean that “simple answer” that still can’t make predictions?

Cat neither dead nor alive until you open the box?

Yeah, that’s pretty special, but why?

On a related note, in MWI there is an uncountable number of worlds with the cat in various stages of decay once the box is opened. Is that weird or what.

You’re asking exactly what it is about a theory which speaks of unobserved cats as dwelling in existential limbo, that would inspire people to seek alternatives?

Read Eliezer’s sequence on quantum mechanics. The cat does not collapse into a dead or alive state; the cat is dead, and another cat is alive. One of the many worlds has a dead cat, another has a live cat.

Read this thread where that idea is shown not to be a “slam dunk”.

You have to remember that ‘interpretations’ of quantum mechanics are actually reformulations of quantum mechanics. Just as classical mechanics can be described by Newton’s laws or by one of several action principles (Hamilton-Jacobi, Maupertuis’ principle, etc.), quantum mechanics has many formulations, each with their own axioms; there is nothing unique about quantum mechanics in this sense.

What IS unique about quantum mechanics is that so many interpretations are incomplete. Copenhagen is circular (to make sense of the measurement axiom, you need the correspondence-principle axiom, but the classical needs to be a limit of the quantum mechanical). The measurement problem is a formal problem with the axioms of the theory.

Of course, many-worlds is in an even worse position. No one has yet managed to derive the Born amplitudes, which means the interpretation is broken: there is no recipe to extract information about measurements from the theory.

Bohm might be an actual complete interpretation, but it’s nearly impossible to extend the formalism to quantum field theories. Consistent histories is where I put my money: the homogeneous history class operator seems potentially like the missing piece.

Hmm, I could never make sense of the formalism of CH (it seems to rely on time-ordering and density matrices, neither of which inspires confidence, given that one expects a relativistically invariant evolution of a pure state), and the popular write-ups sound like advocacy.

Why would you expect relativistic invariance? The Schrödinger equation isn’t even Galilean-invariant (the mass comes through as a central charge; the probabilities are Galilean- but not Lorentz-invariant).

The best reference for consistent histories is Bob Griffiths’ excellent text (not to be confused with the other Griffiths).

Because I would expect a model that has a hope in hell of getting deeper toward the measurement problem than “shut up and calculate” to give a relativistically invariant account of the EPR, and because I expect such a model to be built on top of some form of QFT (as I mentioned in another reply, the number of particles is not conserved during the measurement, so the Hilbert space doesn’t cut it, you need something like a Fock space, second quantization etc.).

But the only way you are going to get relativistic invariance is to throw out the Schrödinger equation. The hope is that an interpretation makes it easier to move to QFT, NOT that a given interpretation will be Lorentz-invariant (which is impossible, given the Schrödinger equation).

So far none of the interpretations of quantum mechanics are built on top of QFT, mostly because QFT isn’t yet formalized; it’s a hodgepodge of heuristics that gets the right answer. The handful of axiomatic field theories don’t actually describe physical systems. Some people have a pipe dream that finding better quantum axioms will point the way toward better QFT axioms, but I’m not in that camp.

The SE should be a non-relativistic limit of whatever model is the next step. Not sure if it requires a formalization of QFT, it just needs to make decent predictions. Physicists are not overly picky. As long as it’s reasonably self-consistent. Or not even. As long as it helps you calculate something new and interesting unambiguously.

QFT IS the obvious next-step, but the reason people play with standard quantum formulations instead of trying to work in the context of QFT and ‘push the interpretation down’ is that QFT isn’t yet on firm footing.

::does some reading on Wikipedia::

Hmmm… apparently making QM play nice with Special Relativity isn’t quite as simple as using the Dirac equation instead of the Schrodinger equation, because the Dirac equation has negative energy solutions, and making it impossible for electrons to “decay” into these negative energy states requires kludges.

(Why is it that, the more I learn about QM, the more it seems like one kludge after another?)

Quantizing the wave function solves the negative energy problem, at the expense of introducing a bunch of infinities, some of which are easier to work around (renormalize) than others. For example, there is no way to usefully quantize the gravitational field.

This isn’t quite true. What solves the problem isn’t quantizing wave functions; it’s insisting that positive energy propagate forward in time, i.e. picking the Feynman propagator (instead of the retarded or advanced propagator, etc.). You still have to make a division between the positive-energy and negative-energy poles in the propagator. (Unfortunately, all observers can’t agree on which states have positive and which states have negative energy, which is the basis of the Unruh effect: two observers accelerating relative to each other cannot agree on particle number.)

Also, it’s a misconception that you can’t simply quantize the gravitational field. If you treat GR as an effective theory, you can make calculations of arbitrary accuracy with a finite number of measured parameters, with just canonical quantization. The Standard Model is ALSO not a renormalizable field theory (not since the addition of neutrino masses). Weinberg has recently tried to make the argument that maybe GR + canonical quantization is all we need (i.e. that gravity is asymptotically safe).

Thanks for the corrections, my area is mostly classical GR, not Standard Model physics. And a good point on the Unruh effect. As for quantizing GR, note the “useful” disclaimer. I am deeply suspicious of any technique that treats GR as an effective field theory on some background spacetime, as it throws away the whole reason why GR is unlike any other field theory. Weinberg is especially prone to doing that, so, while I respect anything he does in HEP, I don’t put much stock into his GR-related efforts. If anything, I expect the progress to come from the entropic gravity crowd, with nothing to quantize.

When I worked in physics I did perturbative QCD stuff in graduate school and then effective theories for medium energy scattering, and finally axiomatic quantum field theories as a postdoc before I left physics for a field with actual employment opportunities (statistics/big data stuff).

But why shouldn’t GR be treated as just another field theory? It certainly has the structure of a field theory. Feynman and then Weinberg managed to show that GR is THE self-consistent massless spin-2 field theory, so to that extent it IS just another field theory.

Treating GR as an effective theory works. I doubt that the theory is asymptotically safe, but for an effective theory, who cares? Why should we treat the matter part of the action any differently than we treat the spacetime piece of the action?

That’s a separate discussion, but let me just note that the action would have to be summed not just over all paths (in which spacetime?), but also over all possible (and maybe impossible) topologies, as well.

No mechanism is required; you just get that from the SWE (taken realistically, as it isn’t in rQM). Are you a physicist?

Not sure what SWE stands for.

Schrodinger (Wave) Equation.

Oh. The Schroedinger equation says nothing about the measurement. In all likelihood, a theory of the quantum-to-classical transition would require at least some elements of QFT, as the measurement, being an irreversible process, results in the emission of photons, phonons, or some other real or quasi-particles. Thus you have to go from the Hilbert space to some sort of Fock space, since the number of particles is not conserved.

Measurement of what? I was responding to your comment that MWI does not explain splitting ontologically. In fact the ontology is just “the territory is just what a SWE of the universe says it is”.

This should screen off the title/profession “physicist” entirely, I think. If that’s what you meant in the first place, then it wasn’t quite clear.

It seems at first like you’re asking about academic degrees and titles and tribal levels of authority.

I was surprised at the mistake.

This isn’t actually correct: there is no “shut-up-and-calculate” version of many worlds; without the Born probabilities you can’t calculate anything. Maybe someday Deutsch, Wallace, or some other enterprising many-worlds advocate will show us a way to do calculations without the measurement postulate. That hasn’t happened yet, so many worlds does not let us calculate. As far as I know, this inability to calculate is the primary reason physicists reject it.

I’m very curious as to why I’m being downvoted for expressing this sentiment; if any downvoter cares to explain, I’d be much obliged.

FYI, “territory” means “territory”, not map.

Model of what? If you subtract the ontology from an interpretation, what are you left with knowledge of?

A basis to build a testable model on.

In this and your previous comment, you write as though rQM is a different formalism, a different theory, leading to different results. It isn’t.

Feel free to quote the statement that led you to such a strange conclusion.

and

In principle rQM could suggest a different mental picture, and one better capable of inspiring further models that will make successful predictions. (Assuming shminux’s bizarre positivist-like approach admits the existence of mental pictures.) The “better capable” part seems unlikely to this layman. Feynman’s path integrals have a very MWI-like feel to me, and Feynman himself shared that impression when he wrote the book with Hibbs. But since paths that go back in time seem to pose a problem for Eliezer’s causality-based approach, perhaps shminux has some reason for preferring rQM that I don’t see. I’m still betting against it.

In RQM, there are no other worlds in the MWI sense. MWI allows observers to make contradictory measurements, such as |up> and |down> and then tries to remove the contradiction by indexing each measurement to its own world. rQM does not allow observers to make contradictory measurements, so there is no need to wish away worlds, because there was never a need to introduce them.

“However, the comparison does not lead to contradiction because the comparison is itself a physical process that must be understood in the context of quantum mechanics. Indeed, O′ can physically interact with the electron and then with the l.e.d. (or, equivalently, the other way around). If, for instance, he finds the spin of the electron up, quantum mechanics predicts that he will then consistently find the l.e.d. on (because in the first measurement the state of the composite system collapses on its [spin up/l.e.d. on] component). That is, the multiplicity of accounts leads to no contradiction precisely because the comparison between different accounts can only be a physical quantum interaction. This internal self-consistency of the quantum formalism is general, and it is perhaps its most remarkable aspect. This self consistency is taken in relational quantum mechanics as a strong indication of the relational nature of the world.”—SEP

rQM has an ontology. It’s an ontology of relations. rQM denies state—non-relational information. rQM does not need to say anything is real relative to anything else—only that some information is not available to some systems.

I have no idea what that means.

Maybe he’s counting the lack of an objective state as additional information?

Basic question I probably should’ve asked earlier: Does shminux::RQM entail not-MWI?

If the answer is “no” then shminux::RQM is indeed plausibly shutting up, since by adding further information we can arrive at MWI. I plead guilty to failing to ask this question, note that shminux failed to volunteer the information, and finally plead that I think most RQMers would claim that theirs is an alternative to MWI.

MWI=universal state

Rovelli-rQM=no universal state

Can you describe in more detail what you mean by ‘no universal state’?

By “state” I mean information physically embodied in a non relational way.

By “universal” I mean the maximal ensemble: universe, multiverse, cosmos, whatever.

(I think you might have been hearing “the universe does not have a state” as “nothing is real” or “nothing is out there”. There is *something* out there, but it is not anything that can even be *conceived* as existing in a classical view-from-nowhere style. “Following the idea of relational networks above, an RQM-oriented cosmology would have to account for the universe as a set of partial systems providing descriptions of one another. The exact nature of such a construction remains an open question.”—WP)

To the extent that this seems to be meaningful at all, this would seem to imply that not only is the universe mysterious and ineffable, it’s also uncomputable—since anything you can calculate in a Turing machine (or even a few kinds of hypercomputers) can be “conceived of as existing in a classical view-from-nowhere style” (it’s just a list of memory states, together with the program). That’s a lot of complexity just to be able to deny the idea of objective reality!
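The “list of memory states, together with the program” point can be made concrete. Below is a minimal sketch (the `run_tm` helper is my own, hypothetical) of a Turing machine simulator whose entire run is exactly such a list: a rule table plus a sequence of (state, tape, head) snapshots, i.e. a classical, view-from-nowhere description of the computation.

```python
def run_tm(program, tape, state="A", steps=20):
    """Run a Turing machine, returning a list of (state, tape, head) snapshots.

    program maps (state, symbol) -> (new_symbol, head_move, new_state).
    The returned history is a complete classical description of the run.
    """
    tape = dict(enumerate(tape))          # sparse tape; missing cells read as 0
    head = 0
    history = [(state, dict(tape), head)]
    for _ in range(steps):
        key = (state, tape.get(head, 0))
        if key not in program:            # halt when no rule applies
            break
        symbol, move, state = program[key]
        tape[head] = symbol
        head += move
        history.append((state, dict(tape), head))
    return history

# A one-state machine that zeroes out 1s moving right, halting at the first blank.
flipper = {("A", 1): (0, +1, "A")}
history = run_tm(flipper, [1, 1, 1])
# history holds 4 snapshots: the initial tape plus one per rewritten cell.
```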

Well, general relativity, while descriptively very simple, is *awfully* complex if you measure complexity by the length of a simulator program, so perhaps in the interest of consistency you should join the anti-Einsteinian crank camp first.

Those incredibly successful theories were based entirely on the notion of complexity in a more abstract language, where things like having no outside view and no absolute spacetime are simpler than having an outside view.

Nice non sequitur you’ve got there. Newtonian mechanics *is* simpler than general relativity. It also happens to be wrong, so there’s no point going back to it. But GR is not even that complex relative to a theory that claims that the cosmos is an ineffable mystery—GR has well-defined equations, and takes place in a fixed Riemannian manifold. You can in fact freely talk about the objective spacetime location of events in GR, using whatever coordinate system you like. This is because it is a good theory.

Actually GR shows the advantage of having an outside view and being able to fit things into a comprehensive picture. If my graduate GR course had refused to talk about manifolds and tensors, and insisted that you could only measure “lengths relative to specific observers”, and shown us a bunch of arcane equations for converting measurements between different observers’ realities, I imagine it wouldn’t have been half as fun.

(Although the fact that certain solutions to the GR equations allow closed timelike curves and thereby certain kinds of hypercomputation is less than ideal—hopefully future unified theories will conspire to eliminate such shenanigans.)

The point is that absence of the absolute time really gets in the way of implementing a naive simulator, the sort that just updates per timestep. Furthermore, there is no preferred coordinate frame in GR, but there is a preferred coordinate frame in a simulator.

Ultimately, a Turing machine is highly arbitrary and comes with a complex structure, privileging implementations that fit into that structure, over conceptually simpler theories which do not.

But it’s no problem for a simulator that derives a proof of the solution to the equations, such as a SAT solver. Linear time is not necessary for simulation, just easier for humans to grasp.

Even if this is true, if the simulation is correct, the existence of such a preferred reference frame is unobservable to any observer inside the simulation, and therefore makes no difference. A simulation that does GR calculations in a particular coordinate system, still does GR calculations.

How are you even going to do those calculations exactly? If you approximate, it’ll be measurable.

Ultimately there is this minimal descriptive complexity approach that yields things like GR based on assumptions of as few absolutes as possible, and then theres this minimal complexity of implementation on a very specific machine approach, which would yield a lot of false predictions had anybody bothered to try to use it as the measurements improved.

Edit: also, under an ontology where invariants and relationals with no absolutes are not simpler, it’s awfully strange to find oneself in a universe which looks like ours. The way I see it, there are better and worse ways to assign priors, and if you keep making observations with very low priors under one assignment but not the other, you should consider the priors scheme where you keep predicting wrong to be worse.

You seem to think I find GR and quantum mechanics strange, or something. No, it’s perfectly normal to live in a universe with no newtonian ideas of “fixed distance”. GR does not have “no absolutes”, it has plenty of absolutes. It has a fixed riemannian manifold with a fixed metric tensor (that can be decomposed into components in any coordinate system you like).

A model like GR’s is exactly the kind that I would like to see for quantum mechanics—one where it’s perfectly clear what the universe is and what equations apply to it, and ideally an explanation of how we *observers* arise within it. For this position, MWI seems to be the only serious contender, followed perhaps by objective collapse, although the latter seems unlikely.

But wouldn’t GR still fall prey to the same ‘hard to implement on a TM’ argument? Also, one could define a relational model of computation which does not permit an outside view (indeed relational QM is such a thing), by the way. It’s not clear which model of computation would be more complex.

With regards to objective collapse, I recall reading some fairly recent paper regarding the impact of slight non-linearities in QFT on MWI-like superpositions, with the conclusion that slight non-linearities would lead to objective collapse occurring when the superposition is too massive. Collapse does seem unlikely on its own—if you view it as some nasty inelegant addition—but if it arises as a product of a slight non-linearity, it seems entirely reasonable, especially if the non-linearity exists as a part of quantum gravity. It has been historically common for a linear relationship to be found non-linear as measurements improve. (The linear model is simplest, but the non-linear models are *many*—one specific non-linear model is a priori less likely than a linear one, but the totality of non-linear models is not.)

Without collapse you still have the open question of Born’s law, by the way. There has been a suggestion to count the distinct observers somehow, but it seems to me that this wouldn’t work right if part of the wavefunction is beamed into space (and thus doesn’t participate in decoherence of an observer), albeit I have never seen a concrete proposal as to how the observers should be counted...

And back to the Turing machines, they can’t do true real numbers, so any physics as we know it can only be approximated, and it’s not at all clear what an approximate MWI should look like.

QM is computable. rQM doesn’t change that. If an observer wants to do quantum cosmology, they can observe the universe, not from nowhere, but from their perspective, store observations, and compute with them. Map-wise, nothing much has changed.

Territory-wise, it looks like the universe can’t be a (classical) computer. Is that a problem?

As I understand it, any quantum computer can be modeled on a classical one, possibly with exponential slowdown.
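For concreteness, here is a minimal sketch of that classical modeling (assuming NumPy; the `hadamard_on` helper is hypothetical). An n-qubit state needs 2**n complex amplitudes, which is exactly where the exponential cost comes from: every added qubit doubles the memory, before any gate is even applied.

```python
import numpy as np

def hadamard_on(state, k, n):
    """Apply a Hadamard gate to qubit k of an n-qubit state vector."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    # Reshape the 2**n vector so axis k indexes qubit k, apply H, flatten back.
    state = state.reshape([2] * n)
    state = np.tensordot(H, state, axes=([1], [k]))
    state = np.moveaxis(state, 0, k)
    return state.reshape(-1)

n = 10
state = np.zeros(2**n, dtype=complex)   # 2**n amplitudes: exponential in n
state[0] = 1.0                          # start in |00...0>
for k in range(n):                      # Hadamard on every qubit
    state = hadamard_on(state, k, n)

# Result: the uniform superposition, each basis state with probability 1/2**n.
probs = np.abs(state) ** 2
```

Simulating 40 qubits this way would already take terabytes of amplitudes, which is the "exponential slowdown" (and blow-up) in question.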

Be modeled doesn’t mean be.

I guess that’s the root of our disagreement about instrumentalism.

The dictionary seems to be on my side.

I can see how your conclusion follows from that assumption, but the assumption is as strange as the conclusion. Ideally, an argument should proceed from plausible premises.

Disengaging due to lack of convergence.

Well, that’s one way of avoiding update.

“The universe is not anything that can even be conceived as existing in a classical view-from-nowhere style” also means that the universe can’t be *modeled* on a computer (classical or otherwise). From a complexity theory point of view, this makes the rQM cosmology an exceptionally bad one, since you must have to add something uncomputable to QM to make this true (if there is even any logical model that makes this true at all).

The fact that you can still computably model a specific observer’s subjective perspective isn’t really relevant.

Out of the box, a classical computer doesn’t represent the ontology of rQM, because all information has an observer-independent representation; but a software layer can hide literal representations in the way a LISP gensym does. Uncomputability is not required.

In any case, classical computability isn’t a good index of complexity. It’s an index of how close something is to a classical computer. Problems are harder or easier to solve according to the technology used to solve them. That’s why people don’t write device drivers in LISP.

Um, computability has very little to do with “classical” computers. It’s a very general idea relating to the existence of any algorithm at all.

Uncomputability isn’t needed to model the ontology of rQM,

I *think* what EY is saying is that rQM entails MWI, and only an extra layer of epistemological interpretation denies the reality of the worlds. I.e., he thinks MWI says “QM implies many worlds” whereas rQM says “QM implies many worlds, but we should just ignore that”. (One man’s ontological minimalism is another man’s epistemological maximalism.)

But that’s all based on a sequence of misunderstandings. rQM doesn’t allow observers to make contradictory observations AND there is no observer-independent world-state in rQM, so there are no multiple world-states in rQM.

So true.

Or MWI could be said to be complicating the ontology unnecessarily. To be sure, rQM answers epistemologically some questions that MWI answers ontologically, but that isn’t obviously a Bad Thing. A realistic interpretation of the WF is a positive metaphysical assumption, not some neutral default. A realistic quantum state of the universe is a further assumption that buys problems other interpretations don’t have.

To the extent that it is worth replying to such things, it is worth replying *well*. A terse reply will tend to (in my personal experience), and in this case seems to have, set you up for further sniping, and will typically result in undermining your position independently of actual merit. I expect the net effect of this engagement to be (admittedly trivial) undermining of the credibility of your MWI lesson.

My own interest is not in the QM, but rather that several (relatively) subtle rhetorical techniques related to intellectual and moral high-ground manipulation and asymmetric application of norms, which I would like to see less of, are in fact being rewarded with success and approval. I.e., you are feeding the troll.

Or here’s another way of looking at it:

MWI = Minkowskian spacetime. Clear objective state of affairs, observer-invariant intervals separating events.

Single-world QM = Pre-Minkowski mysterious “Lorentz contractions” as a result of moving through the ether. The ether seems mysteriously unobservable and it’s really odd that the Lorentz contractions just happen to be exactly right to make motion undetectable, when in principle the ether could be doing anything (just like it’s mysterious that the worldeater eats off parts of the wavefunction according to the Born probabilities rather than something else, and only leaves one world behind). Also, since you don’t know about the Lorentz transformation for time at this point in the history of physics, your equations will yield the wrong answers for extreme circumstances (just as a large enough quantum computer could contain observers who still wouldn’t collapse).

“Shut up and calculate” = Use Minkowskian spacetime but refuse to admit that your equations might refer to something.

RQM = Relational Special Relativity = You repeatedly talk about how “motion” can only be defined relative to an observer, and it’s impossible for the universe as a whole to move because it would have to be moving relative to something; you use this to insist that every observer has their private reality in which objects *really are* moving at a certain rate relative to them, and time *really is* progressing at a certain rate, and there’s no conflict with other observers and their observed rates of motion because reality is not objective. If anyone shows you Minkowskian spacetime and asks why they should adopt your weird epistemology when there’s all these perfectly natural invariants to use, or asks you what it would even mean for everyone to have a private reality, yell at them that the universe as a whole clearly can’t have an objective state of motion because there’s nothing else it could be moving relative to. Basically, Special Relativity, only you’d rather give up the attempt to describe a coherent state of affairs than give up on talking separately about space and time the way you’re accustomed to.

(If that didn’t make sense, check SEP or Wikipedia on RQM.)

Reversing the direction of the analogy, what are the “invariants” of MWI? A natural, emergent multiversal basis? nah. A natural, emergent Born’s law? Nah...

That’s actually a perfectly reasonable argument.

rQM is coherent, observers can’t make contradictory observations. It just isn’t objective. It also isn’t anything-goes philosophical subjectivism. It is an interpretation that agrees with all the results of the formalism, like any interpretation properly so called, so it does not break anything or make anything unscientific.

Why is there so much effort spent on philosophical interpretations of QM, when there probably will be more fundamental levels of description such as string theory?

Is it to be expected that the least complex interpretation of QM will also apply to the one-day victorious string theory model?

It would be unlikely for any more fundamental theory not to be subject to the same set of evasions as QM. Roughly, we have people claiming that atoms are just theoretical figments of the imagination which merely yield good predictions, discovering neutrons isn’t going to change their arguments. String theory in particular doesn’t help.

I once asked a QM person (who shall remain nameless) why people argue about interpretations despite their untestability, and (s)he conjectured that what they are *really* arguing about is the ramifications of these interpretations for “hard problems” (e.g. consciousness), which was an answer that surprised me.

It is written: a physicist does not live on instrumentalism alone.

The way that we currently build theories in physics is to write down a classical theory, and then ‘quantize’ it (which involves replacing classical numbers with operators and enforcing some non-commutation relations, or promoting the idea that the action is extremized to a path integral over the action). String theory is no exception; you typically start with a classical string action.

Because of this, most of the underlying structure of quantum mechanics comes along for the ride. Unfortunately, this usually leads to formal problems (no one has yet developed a satisfying axiomatic quantum field theory, and the situation in string theory is even worse), but physicists ignore these issues, because such theories, while not formally developed, make the right predictions.

Any interpretation could be *called* a semantic word game, since the whole point is to *interpret* a mathematical formalism. To do that you have to use *words* (shock!) and discuss what things might really mean (horror!).

If you take decoherence realistically, you get something like MWI. CH is different because, like CI, it is less realistic.

There’s no form of decoherence which is equivalent to ontological collapse, to actually snipping off branches. (Penrose has a nice discussion of this somewhere.) So decoherence as an interpretation can’t be saying anything different from what MWI says as an interpretation. Decoherence just gives a criterion—albeit a fuzzy and subjective one—for world-formation.

Hear, hear.

That sure is far beyond my current educational horizon but I would love to see Eliezer answer that comment. Until now I haven’t even heard of Relational Quantum Mechanics. I searched LW and that comment by Dustin2 seems to be one of two comments that mention it.

Bravo, Eliezer, bravo. Have you sold the screen rights yet?

Inspired by this post, I was reading some of the history today, and I learned something that surprised me: in all of his writings, Bohr apparently never once talked about the “collapse of the wavefunction,” or the disappearance of all but one measurement outcome, or any similar formulation. Indeed, Huve Erett’s theory would have struck the historical Bohr as complete nonsense, since Bohr didn’t believe that wavefunctions were real in the first place—there was nothing to collapse!

So it might be that MWI proponents (and Bohmians, for that matter) underestimate just how non-realist Bohr really was. They ask themselves: “what would the world have to *be like* if Copenhagenism were true?”—and the answer they come up with involves wavefunction collapse, which strikes them as absurd, so then that’s what they criticize. But the whole point of Bohr’s philosophy was that you don’t even *ask* such questions. (Needless to say, this is not a ringing endorsement of his philosophy.)

Incidentally, I’m skeptical of the idea that MWI never even *occurred* to Bohr, Heisenberg, Schrödinger, or von Neumann. I conjecture that something like it *must* have occurred to them, as an obvious *reductio ad absurdum*—further underscoring (in their minds) why one shouldn’t regard the wavefunction as “real”. Does anyone have any historical evidence either way?

Yes, the real CI is rather minimal and non-committal. That, not idiocy, explains its widespread adoption. Objective Reduction is a different and later theory.

I think you’re being a bit hard on Schrödinger here. I thought the whole point of Schrödinger’s cat was to point out that the “observers cause collapse” idea was kind of stupid.

The “One Christers” are a nice SF touch.

Nice one, Eli. I haven’t been able to read OB for about a month, and with your breakneck pace it was tough to catch up, but this has been good. I enjoyed this post in particular!

Schrodinger killed his cat.

First, W Bush was just 11 in 1957. However, that does make me wonder over what fraction of the many-worlds he ended up being an idiotic asshole—much less President now… And, wow, imagine the possible alternate world where he was a good President!

Second, though I generally liked your post, I feel it was a bit disingenuous not to mention the hidden-variable hypothesis in regard to the Copenhagen interpretation. Early 20th-century physicists weren’t thinking collapse was an extraordinary violation of known physics—they thought it was a temporarily opaque—and deceptively random in appearance—layer on an underlying deterministic physics.

It wasn’t until 1964 that the traditional interpretation started to really fall apart. And the modern split is, I suspect, largely down to a deterministic/stochastic universe preference. The CI survivors are waiting for a workable replacement for hidden variables. The growth in the MWI camp is because they haven’t come up with anything in the last four decades.

I too hadn’t been reading OB for months. Then I went to Yahoo Pipes, and created a feed that stripped out EY. It’s amazing how much better the blog is in my absence.

What do pink cup cakes mean??

What the hell are the Born statistics?

Jeeves: whaaaa?

Smedly: the Born rule… the whole probability of what you seem to experience observing is proportional to the squared magnitude thing. I.e., if you had a two-state system, say a qubit, in a superposition of, say, 2/3|0> + sqrt(5)i/3|1>, then if you take a measurement of a bunch of qubits that are independently in that state, you’d expect about 4/9 of them to be 0, and 5/9 of them to be 1.

Given that QM is linear, you can see why the existence of such a rule may be a bit confusing. And given the many-worlds perspective, the question of “probability of… what, exactly?” is a question too. It seems hard to even phrase the rule without invoking consciousness. Thus we, or at least I (did everyone else solve it and simply keep me out of the loop? :)), am confused on this matter.
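Smedly’s numbers can be checked directly. A minimal sketch (assuming NumPy): square the amplitude magnitudes of the state 2/3|0> + sqrt(5)i/3|1> to get the Born probabilities 4/9 and 5/9, then sample a batch of independently prepared qubits.

```python
import numpy as np

# The qubit state from the example: (2/3)|0> + (sqrt(5) i / 3)|1>
amps = np.array([2 / 3, 1j * np.sqrt(5) / 3])
assert np.isclose(np.vdot(amps, amps).real, 1.0)  # state is normalized

# Born rule: outcome probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(amps) ** 2                          # [4/9, 5/9]

# Measure many independently prepared qubits; the observed frequencies
# approach 4/9 for outcome 0 and 5/9 for outcome 1.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=100_000, p=probs)
freq_one = outcomes.mean()
```

Note that nothing in the linear evolution itself forces the squared-magnitude weighting; the sampling line is where the Born rule gets put in by hand, which is exactly the puzzle being discussed.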

Not really; anticipation seems easy enough to define without consciousness.

Nick: anticipation of… what? Don’t misunderstand, I’m not saying “oooh, Born probabilities transcend understanding” sort of thing, I just mean that I’m unsure how, in the context of many worlds, to state it. Robin’s Mangled Worlds idea, if it pans out, would certainly help, but until then, I’m stumped about how to really say it in any way other than “anticipation of experience”

Anticipation of input, which at least doesn’t seem like it immediately implies conscious experience—does a minimalist Bayesian decision system feel anything?

Nick: But… what do you mean? I.e., if you have some sort of decoherence event so that one can meaningfully distinguish between the world with input A occurring and the world with input B occurring...

What are we anticipating? I.e., both input A occurs *and* input B occurs.

If they have different amplitudes, so we square the magnitudes to figure out the anticipation… anticipation of… what? I.e., *both* outcomes occur.

Yet in some sense they will be “weighted” differently. What do we mean by that other than “how much do we anticipate experiencing one or the other?”

Again, presumably there’s some way to clear this all up, but right now “anticipation of input” doesn’t really seem to be reducing my confusion on the subject.

“Anticipation of input” is the same as “anticipation of experience”, but without any reference to consciousness—a non-conscious Bayesian decision system should also derive the squared-modulus law, and “anticipate” (in a qualia-free way) future “observations” (again qualia-free) to follow it. (Shouldn’t it?) IMO, this is actually more confusing.

Nick: presumably in same way it would… but I don’t really see how. Remember, this is indexical uncertainty. It doesn’t correspond to uncertainty about what actually happened so much as uncertainty about which branch of reality this version of you is in.

So… There’s a version of you in A, and a version of you in B.

In A, all the computations that happen are more or less analogous to those in B, except that B uses slightly larger numbers to represent the computations...

So exactly why would that change any anticipation of anything? I’d be unsure what a nonconscious Bayesian decision system would be computing/anticipating, unless the Born rule was already hard coded into it.

Yes, presumably there’s *some* actual physical reason which gives rise to the Born statistics, and once we know that, that may even help us talk about it better. But right now, I don’t really even see any obvious way to state the rule without invoking anticipation of experience.

Since both branch A and B are real… what exactly are we weighing other than something along the lines of “where is more of our consciousness experience flowing?”

And it’s really annoying to have to phrase it like that. I know I’m confused on this. But right now, I don’t see any obvious way to state the rule without saying something to that effect.

Meanwhile, imagine yet another alternate Earth, where the very first physicists to notice nonlocality, said, “Holy brachiating orangutans, there’s a non-local force in Nature!”

In the years since, the theory has been successfully extended to encompass every observed phenomenon. The biggest mystery in physics is the relationship between nonlocality and relativity. The basic equations have a preferred reference frame, but it’s undetectable. Everyone thinks that there must be a relativistic way to write the equations, but no-one knows how to do it.

One day, Bavid Dohm walks into the office of Huve Erett...

Bavid gestures to the paper he’d brought to Huve Erett. It is a short paper. The title reads, “The Solution to the Relativity Problem”. The body of the paper reads:

“There is no classical trajectory. The pilot wave already contains the world that we see, along with infinitely many others.”

“Let me make absolutely sure,” Erett says carefully, “that I understand you. You’re saying that there is no space-time, as we know it, separate from Hilbert space. There’s just the pilot wave, evolving according to the Schrodinger equation. But the pilot wave actually contains space-time—infinitely many space-times.”

“Right!” says Bavid.

“Where?” says Erett.

“Everywhere throughout configuration space!” says Bavid. “The configurations *are* the worlds.”

“But if every possible configuration exists, how do you predict anything?” asks Erett.

“Er, well, it’s not the configurations which are the worlds, then”, says Bavid. “It’s the blobs of amplitude hovering *over* the configurations.”

“I still don’t see how you make predictions. Or eliminate a universal time coordinate”, says Erett.

“Decoherence!” says Bavid. “If you don’t count the blobs where the amplitude really thins out, then the numbers come out correctly.”

“But the blobs are still there?” asks Erett.

“Yes… they’re just… thinner”, says Bavid.

“Why shouldn’t I count them, then?” asks Erett.

“Because the numbers won’t come out right otherwise!” says Bavid.

“I *see*”, says Erett. “And relativity? You did say this is a relativistic theory.”

“Yes, well, my idea is to get rid of time entirely”, says Bavid.

“Ah yes, the old ‘H=0’ approach. The pilot wave is a *standing* wave. But how is that relativistic? Relativity mingles space and time. H=0 just abolishes time and leaves space”, says Erett.

“Er...” says Bavid.

At which point Erett politely but firmly shows Mr Dohm out of his office.

In Bohmian mechanics, the particles are purely epiphenomenal. Everything is indeed contained in the pilot wave. (If this weren’t true, it would not be a mere interpretation—it would have different predictions.)

I’m a chemist; we actually have to use quantum physics on a routine basis. The main reason many-worlds never got traction is that it doesn’t make a testable prediction. Most physicists realize that making a model of reality that predicts experiment (as far as possible) is, well, science; BSing about what the implications are is more of a late-night-and-beer thing.

In other words, if the model implies that there may be other worlds, but they can’t conceivably be detected, then who cares?

One last thing: there’s some pretty good evidence of nonlocal physics these days. It’s inconsistent with general relativity, but no biggie. We already knew that general relativity and quantum physics were incompatible. The current situation in physics (for the last 30 years or so) is considerable confusion at the level of fundamental theory, but extremely robust models for every actual physical situation that we can probe. The robustness of the models is exactly what has halted progress.

I am not sure that it is possible to interpret this sentence without admitting to what amounts to Eliezer’s position. In other words, for this to be either right *or* wrong, Eliezer has to be right.

This sentence is most plausibly unpacked as assuming that the Copenhagen Interpretation and MWI are consistent with all findings, and that pride of place is naturally given to the first interpretation that makes predictions no other interpretation has. Science may not be wrong to, in general and as a heuristic, only accept new theories that make *better* predictions than the old. After all, even creationism or magic faerieism can be molded to be *consistent* with all known observations, whatever they are.

Eliezer simply asserts that MWI is simpler. He appeals to the Occam’s razor heuristic, not the “new testable predictions” one, as reason for the reader to accept MWI. (If you caught it, MWI is making a prediction—that no quantum superposition will be too small to cause a result interpreted as a collapse under CI—but that’s relatively small potatoes, since MWI is succeeding where CI is agnostic. However, that testable prediction isn’t the point here; the non-social, scientific criterion of theoretical simplicity is.)

Eliezer says: MWI is better than CI under Occam’s razor. You say: scientists care about subsequent theories having *superior* testable predictions, not their being simpler under Occam’s razor.

Eliezer may reply: OK, there is good reason for science to work like that, since theoretically more complicated theories can always be just as predictive as previously discovered simpler ones by cheating and stealing their results, plus adding complexity, while never being more predictive. However, there is good reason to believe in the theoretically superior theory. (Perhaps he might add: also, if you look closely, CI is doing the cheating by looking at MWI to see when to declare a superposition.)

Ultimately, you have failed to dispute that MWI is simpler or that it is superior, and your offering a sociological explanation (CI’s coming before MWI) for why CI may be more broadly accepted despite theoretical inferiority does not engage Eliezer’s points in opposition; it shows the strength of one particular argument that assumes his point: CI is accepted only because it came before MWI.

What are you referring to? The kind of non-locality exhibited in the EPR paradox is consistent with special relativity—or at least there’s an elegant way of looking at this in which it is consistent. So are you talking about something totally different? Something incompatible with GR but *not* SR? Or both?

If resources (including mental ones) are being spent fighting for a less plausible theory, isn’t that enough?

It sounds like you are mocking the post, not expressing genuine amusement.

I imagine that wasn’t your actual intent, though; judging tone over the internet is notoriously difficult. But your comment follows the same format as other mocking posts—opening with “haha” and “scream with laughter”—so it comes off as sarcasm; I’d avoid that.

In addition to that, my comment was badly written and not really insightful.

Hilarious and 100% true! Thank you! The only thing I might add to this is that in Huve’s theory, information is created out of nowhere.

Eliezer, don’t you have a whole post about why you shouldn’t use examples from politics if you can possibly avoid it?

This is a very nice essay attacking the Copenhagen interpretation, and other objective collapse models. But I think that the way it is written seems to imply that if I don’t believe in objective collapse, the only alternative is to believe the equally insane idea that I live in a world that is constantly locally branching into millions of alternative worlds, only one of which is, even in principle, observable to me.

There are many alternatives. I think QBism is a mix of a genuine solution and a kind of sleight-of-hand that hides the problem. Maybe future iterations of it will be better.

Personally I think “negative probability” is an option worthy of more exploration. Quantum physics can be viewed as a classical probability theory that admits some negative (quasi-)probabilities. Many of the problems of negative probability are dealt with by the limited measurement precision (the theory states that there is a negative probability of some specific position/momentum combination, but you cannot measure both, so no observable outcome occurs with negative frequency). A reasonable interpretation of negative quasi-probability, I think, would constitute a solution.

At the very least, the negative probability issue lets us frame the problem like this: does normal (classical) probability theory make you want to accept a branching worlds view of the universe? Does adding negative quasiprobabilities change your answer?

Wigner sorted out how to do everything the wavefunction does with negative quasi-probability in the 1930s.
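For concreteness, here is a minimal sketch of the point being made, using the standard Wigner distribution of the n = 1 harmonic-oscillator state as the illustrative example (dimensionless units, ℏ = 1): the quasi-probability goes negative in phase space, yet every measurable marginal stays non-negative.

```python
import numpy as np

def wigner_fock1(x, p):
    """Wigner quasi-probability of the first excited (n = 1) harmonic-oscillator
    state, in dimensionless units with hbar = 1:
        W_1(x, p) = -(1/pi) * exp(-(x^2 + p^2)) * (1 - 2*(x^2 + p^2)),
    i.e. the standard formula ((-1)^n / pi) * exp(-r^2) * L_n(2 r^2),
    with the Laguerre polynomial L_1(z) = 1 - z."""
    r2 = x ** 2 + p ** 2
    return (-1.0 / np.pi) * np.exp(-r2) * (1.0 - 2.0 * r2)

# The quasi-probability is genuinely negative near the phase-space origin:
print(wigner_fock1(0.0, 0.0))  # -1/pi, about -0.3183

# ...yet the marginal over p (the measurable position distribution) is an
# ordinary non-negative density -- no *observable* outcome has negative
# frequency, matching the comment above.
dp = 1e-3
p = np.arange(-10.0, 10.0, dp)
marginal_at_1 = wigner_fock1(1.0, p).sum() * dp  # numerically ≈ |psi_1(1)|^2
print(marginal_at_1 > 0)  # True
```

Integrating out p analytically gives (2/√π)·x²·e^(−x²), the usual |ψ₁(x)|² — positive everywhere even though W itself is not.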

Why is it called Many-Worlds? AIUI (which may be completely wrong, as I know nothing of quantum mechanics) there is a single, deterministically evolving wave function. No splitting. The puzzle is why no-one ever perceives a system to be in a superposition. Suppose that a particle is flying towards a pair of detectors, in a superposition of states that will trigger one of them or the other. I will only perceive one of the detectors firing, never a superposition of both of them firing. From the point of view of the universal wave function, I am in a superposition along with the particle and the detectors, but such superpositions are never part of my experience. The part of the wave function representing my consciousness has a part with the experience of seeing one detector fire, and a part with an experience of seeing the other fire.

Perhaps it should be called the Many-Minds theory.

Many minds is a thing

https://en.wikipedia.org/wiki/Many-minds_interpretation

...But appealing to special properties of observers is one of the main things many-worlders are trying to get away from.

Many-worlders are pointing at something in the physics and saying “that’s a world”… but whether it qualifies as a world is a separate question, and a separate kind of question, from whether it is really there in the physics. A successful MWI needs to jump both hurdles.

The issue is not whether superpositions exist, but whether they qualify as worlds. It’s a conceptual issue.

There is an approach to MWI based on coherent superpositions, and a version based on decoherence. These are incompatible opposites, but are treated as interchangeable in Yudkowsky’s writings.

Deutsch uses the coherence based approach, while most other many worlders use the decoherence based approach. He absolutely does establish that quantum computing is superior to classical computing, and therefore that underlying reality is not classical. But showing that reality is not a single classical world is not the same as showing that it consists of a number of quasi classical worlds in superposition.

Superposed states lack a number of features that one would typically associate with a world: objectivity, size, causal isolation, and permanence.

Decoherence has the opposite problem: decoherent worlds can be large, can be objective, can be permanent for all practical purposes, and are causally isolated by definition.

But there is no obvious mechanism for decoherence within core QM. A coherent quantum state that evolves according to the Schrodinger wave equation remains coherent. Some additional mechanism for decoherence is required, and the complexity of that mechanism must be factored into an assessment of which theory is simplest. (Why assume a coherent starting state? Well, maybe the universe started in a decohered state… that’s actually a popular suggestion… but it requires its own explanation and adds its own complexities.)

They’re not opposites, they’re two different ways of analyzing the same situation. Examining the local density matrices at various places, we may find decoherence has occurred, even while the global state is in a coherent superposition.
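For concreteness, that situation can be shown in a toy calculation (a numpy sketch using a Bell pair as the illustrative global state, not part of the original exchange): the global two-qubit state is a pure, coherent superposition, yet each local density matrix, obtained by tracing out the other qubit, is maximally mixed — locally “decohered”.

```python
import numpy as np

# Global state: the Bell pair (|00> + |11>)/sqrt(2), as a 4-component vector.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_global = np.outer(bell, bell.conj())

# The global state is pure (coherent): Tr(rho^2) = 1.
print(np.trace(rho_global @ rho_global).real)  # ≈ 1.0

# Partial trace over qubit B gives qubit A's local density matrix.
rho4 = rho_global.reshape(2, 2, 2, 2)  # indices (a, b, a', b')
rho_A = np.einsum('abcb->ac', rho4)    # sum over the shared index b

# The local state is maximally mixed: rho_A = I/2, Tr(rho_A^2) = 1/2,
# the minimum possible purity for a qubit.
print(rho_A.real)                      # ≈ [[0.5 0. ] [0.  0.5]]
print(np.trace(rho_A @ rho_A).real)    # ≈ 0.5
```

So “local decoherence with global coherence” is not a contradiction; it is exactly what entanglement produces when you only look at a subsystem.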

Local decoherence with global coherence is hardly many worlds. Global decoherence with local coherence would be a much better fit.

No, he didn’t call it “many worlds”, and he didn’t base it on decoherence.

About a few of the violations of the collapse postulate: this wouldn’t be the only phenomenon with a preferred reference frame of simultaneity—the CMB also has that. Maybe a little less fundamental, but nonetheless a seemingly general property of our universe. This next part I’m less sure about, but locality implies that Nature also has a preferred basis for wavefunctions, i.e. the position basis (as opposed to, say, momentum). Acausal—since nothing here states that the future affects the past, I assume it’s a rehash of the special relativity violation. Not that I’m a fan of collapse, but we shouldn’t double-count the evidence.

Also, to quote you, models that are surprised by facts do not gain points for it—and neither does Mr. Nohr as he fails to imagine the parallel world that actually is.