The Dilemma: Science or Bayes?
“Eli: You are writing a lot about physics recently. Why?”
—Shane Legg (and several other people)

“In light of your QM explanation, which to me sounds perfectly logical, it seems obvious and normal that many worlds is overwhelmingly likely. It just seems almost too good to be true that I now get what plenty of genius quantum physicists still can’t. [...] Sure I can explain all that away, and I still think you’re right, I’m just suspicious of myself for believing the first believable explanation I met.”
—Recovering irrationalist
RI, you’ve got no idea how glad I was to see you post that comment.
Of course I had more than just one reason for spending all that time posting about quantum physics. I like having lots of hidden motives, it’s the closest I can ethically get to being a supervillain.
But to give an example of a purpose I could only accomplish by discussing quantum physics...
In physics, you can get absolutely clear-cut issues. Not in the sense that the issues are trivial to explain. But if you try to apply Bayes to healthcare, or economics, you may not be able to formally lay out what is the simplest hypothesis, or what the evidence supports. But when I say “macroscopic decoherence is simpler than collapse” it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code. Nor is the evidence itself in dispute.
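(A toy sketch of that comparison, purely illustrative: the two “programs” below are stand-in text rather than real physics, helper names like unitary_step are invented, and only the relative lengths matter.)

```python
# Toy illustration of "write the hypotheses as programs and compare lengths".
# The program texts are placeholders; the point is that "collapse" is
# "decoherence plus an extra postulate", so its description is strictly longer.

decoherence = """
state = initial_wavefunction()
for t in timesteps:
    state = unitary_step(state)          # Schrodinger evolution only
"""

collapse = """
state = initial_wavefunction()
for t in timesteps:
    state = unitary_step(state)          # Schrodinger evolution
    if observation_occurred(t):          # extra postulate: collapse
        state = project_onto_outcome(state)
"""

def description_length(program: str) -> int:
    """Crude stand-in for program length: bytes of source text."""
    return len(program.encode("utf-8"))

print(description_length(decoherence) < description_length(collapse))  # True
```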
I wanted a very clear example—Bayes says “zig”, this is a zag—when it came time to break your allegiance to Science.
“Oh, sure,” you say, “the physicists messed up the many-worlds thing, but give them a break, Eliezer! No one ever claimed that the social process of science was perfect. People are human; they make mistakes.”
But the physicists who refuse to adopt many-worlds aren’t disobeying the rules of Science. They’re obeying the rules of Science.
The tradition handed down through the generations says that a new physics theory comes up with new experimental predictions that distinguish it from the old theory. You perform the test, and the new theory is confirmed or falsified. If it’s confirmed, you hold a huge celebration, call the newspapers, and hand out Nobel Prizes for everyone; any doddering old emeritus professors who refuse to convert are quietly humored. If the theory is disconfirmed, the lead proponent publicly recants, and gains a reputation for honesty.
This is not how things do work in science; rather it is how things are supposed to work in Science. It’s the ideal to which all good scientists aspire.
Now many-worlds comes along, and it doesn’t seem to make any new predictions relative to the old theory. That’s suspicious. And there’s all these other worlds, but you can’t see them. That’s really suspicious. It just doesn’t seem scientific.
If you got as far as RI—so that many-worlds now seems perfectly logical, obvious and normal—and you also started out as a Traditional Rationalist, then you should be able to switch back and forth between the Scientific view and the Bayesian view, like a Necker Cube.
So now put on your Science Goggles—you’ve still got them around somewhere, right? Forget everything you know about Kolmogorov complexity, Solomonoff induction or Minimum Message Lengths. That’s not part of the traditional training. You just eyeball something to see how “simple” it looks. The word “testable” doesn’t conjure up a mental image of Bayes’s Theorem governing probability flows; it conjures up a mental image of being in a lab, performing an experiment, and having the celebration (or public recantation) afterward.
Science-Goggles on: The current quantum theory has passed all experimental tests so far. Many-Worlds doesn’t make any new testable predictions—the amazing new phenomena it predicts are all hidden away where we can’t see them. You can get along fine without supposing the other worlds, and that’s just what you should do. The whole thing smacks of science fiction. But it must be admitted that quantum physics is a very deep and very confusing issue, and who knows what discoveries might be in store? Call me when Many-Worlds makes a testable prediction.
Science-Goggles off, Bayes-Goggles back on:
Bayes-Goggles on: The simplest quantum equations that cover all known evidence don’t have a special exception for human-sized masses. There isn’t even any reason to ask that particular question. Next!
Okay, so is this a problem we can fix in five minutes with some duct tape and superglue?
No.
Huh? Why not just teach new graduating classes of scientists about Solomonoff induction and Bayes’s Rule?
Centuries ago, there was a widespread idea that the Wise could unravel the secrets of the universe just by thinking about them, while to go out and look at things was lesser, inferior, naive, and would just delude you in the end. You couldn’t trust the way things looked—only thought could be your guide.
Science began as a rebellion against this Deep Wisdom. At the core is the pragmatic belief that human beings, sitting around in their armchairs trying to be Deeply Wise, just drift off into never-never land. You couldn’t trust your thoughts. You had to make advance experimental predictions—predictions that no one else had made before—run the test, and confirm the result. That was evidence. Sitting in your armchair, thinking about what seemed reasonable… would not be taken to prejudice your theory, because Science wasn’t an idealistic belief about pragmatism, or getting your hands dirty. It was, rather, the dictum that experiment alone would decide. Only experiments could judge your theory—not your nationality, or your religious professions, or the fact that you’d invented the theory in your armchair. Only experiments! If you sat in your armchair and came up with a theory that made a novel prediction, and experiment confirmed the prediction, then we would care about the result of the experiment, not where your hypothesis came from.
That’s Science. And if you say that Many-Worlds should replace the immensely successful Copenhagen Interpretation, adding on all these twin Earths that can’t be observed, just because it sounds more reasonable and elegant—not because it crushed the old theory with a superior experimental prediction—then you’re undoing the core scientific rule that prevents people from running out and putting angels into all the theories, because angels are more reasonable and elegant.
You think teaching a few people about Solomonoff induction is going to solve that problem? Nobel laureate Robert Aumann—who first proved that Bayesian agents with similar priors cannot agree to disagree—is a believing Orthodox Jew. Aumann helped a project to test the Torah for “Bible codes”, hidden prophecies from God—and concluded that the project had failed to confirm the codes’ existence. Do you want Aumann thinking that once you’ve got Solomonoff induction, you can forget about the experimental method? Do you think that’s going to help him? And most scientists out there will not rise to the level of Robert Aumann.
Okay, Bayes-Goggles back on. Are you really going to believe that large parts of the wavefunction disappear when you can no longer see them? As a result of the only non-linear non-unitary non-differentiable non-CPT-symmetric acausal faster-than-light informally-specified phenomenon in all of physics? Just because, by sheer historical contingency, the stupid version of the theory was proposed first?
Are you going to make a major modification to a scientific model, and believe in zillions of other worlds you can’t see, without a defining moment of experimental triumph over the old model?
Or are you going to reject probability theory?
Will you give your allegiance to Science, or to Bayes?
Michael Vassar once observed (tongue-in-cheek) that it was a good thing that a majority of the human species believed in God, because otherwise, he would have a very hard time rejecting majoritarianism. But since the majority opinion that God exists is simply unbelievable, we have no choice but to reject the extremely strong philosophical arguments for majoritarianism.
You can see (one of the reasons) why I went to such lengths to explain quantum theory. Those who are good at math should now be able to visualize both macroscopic decoherence, and the probability theory of simplicity and testability—get the insanity of a global single world on a gut level.
I wanted to present you with a nice, sharp dilemma between rejecting the scientific method, or embracing insanity.
Why? I’ll give you a hint: It’s not just because I’m evil. If you would guess my motives here, think beyond the first obvious answer.
PS: If you try to come up with clever ways to wriggle out of the dilemma, you’re just going to get shot down in future posts. You have been warned.
According to http://www.hedweb.com/everett/everett.htm#believes, most “leading cosmologists and other quantum field theorists” thought that the “Many-Worlds Interpretation” was correct 10 years ago.
Supporters tend to cite not Solomonoff induction, but simply Occam’s razor.
Solomonoff induction is simply an attempt to formalise Occam’s razor around an impractical theoretical model of serial computation.
Collapse theories can do something many worlds can’t do: they can make the predictions! As can Bohmian theories.
Many worlds, like at least one other prominent interpretation (temporal zigzag), is all promise and no performance. Maybe Robin Hanson’s idea will make it work? Well, maybe Mark Hadley’s idea will make the zigzag work. Hadley’s picture is relativistic, too.
Many worlds deserves its place in the gallery of possible explanations of quantum theory, but that is all.
Re: “Collapse theories can do something many worlds can’t do: they can make the predictions”.
Uh, the MWI is a “shut-up-and-calculate” interpretation. It mostly makes the same predictions as the other QP interpretations—except when it comes to interference patterns involving interfering observers and the like.
Whichever helps me win in the situation I am currently in. Since I don’t currently need to create more advanced physics formulas, and they have the same dollar value otherwise, it doesn’t make much difference.
If I believe in collapse rather than decoherence is Inspector Darwin going to come and declare me bankrupt?
In fact, worrying about the difference is a negative point for me anyway: I should be doing other things, and devoting time, memory, and processing power to this reduces the amount I have to apply to my problems.
Tim, I thought there was only one “shut up and calculate” interpretation, and that’s the one where you shut up and calculate—rather than talking about many worlds. Perhaps you mean it’s a “talk rather than calculate” interpretation?
Suppose I announce the Turtles-All-The-Way-Down “interpretation” of quantum mechanics, is it fair to say that the TATWDI “makes the same predictions” if I can’t actually show how to get a number or two out of this postulated tower of turtles, but just say it’s a way of thinking about QM? If MWI makes predictions, show me how it does it.
If one uses these metrics to judge the simplicity of hypotheses, any probability judgements based on them will ultimately depend strongly on this parameter choice. Given that, what’s the best way to choose these parameters? The only two obvious ways I see are to either 1) Make an intuitive judgement, which means the resulting complexity ratings might not turn out any more reliable than if you intuitively judged the simplicity of each individual hypothesis, or 2) Figure out which of the resulting choices can be implemented cheaper in this universe; i.e. try to build the smallest/least-energy-using computer for each reasonably-seeming language, and see which one turns out cheapest. Since resource use at runtime doesn’t matter for kolmogorov complexity, it would probably be appropriate to consider how well the designs would work if scaled up to include immense amounts of working memory, even if they’re never actually built at that scale.
Neither of those is particularly elegant. I think 2) might work out, but unfortunately is quite sensitive to parameter choice, itself.
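(A concrete illustration of that worry, as a sketch: the same object gets noticeably different “description lengths” under different off-the-shelf encoders, and nothing in the formalism itself picks out which encoder is the right one. The encoders here are arbitrary stand-ins for reference machines.)

```python
import json
import pickle
import zlib

# A toy "dataset": the first 32 square numbers.
data = [n * n for n in range(32)]

# Three different "description languages" (encoders) for the same object.
encodings = {
    "json": json.dumps(data).encode("utf-8"),
    "pickle": pickle.dumps(data),
    "json+zlib": zlib.compress(json.dumps(data).encode("utf-8")),
}

for name, blob in encodings.items():
    print(f"{name:10s} {len(blob):4d} bytes")

# The absolute lengths differ substantially across encoders; the invariance
# theorem only guarantees agreement up to an additive constant, and that
# constant depends on the language chosen.
```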
Eliezer,
I think you are too harsh with the Science-goggles.
I was taught that, when first proposed, the Copernican theory did not explain the then available data any better than the Ptolemaic system.
Its main attraction (to Science-goggles-wearing types, though not to Bible-goggles-wearing ones) was simplicity: it just had to be true!
I don’t know if Copernicus ever invoked Ockham’s name in defense of his theory, but the latter triumphed long before Rev. Bayes’s (or Solomonoff’s) birth.
So maybe “simplicity”—like many other concepts—has always been one element of the Science-goggles, even before a formal mathematical definition of it was available.
Um… there really aren’t any extremely strong arguments for majoritarianism. That position confuses conclusions with evidence.
Just as there really aren’t any good reasons to abandon the scientific methodology just because you’ve declared ‘Bayesianism’ to diverge from it. Given that the scientific methodology has been extremely successful and is extraordinarily widely adopted among people who count, if we accept your contention that Bayesian thinking diverges from its requirements, shouldn’t that cause us to be suspicious of Bayesianism?
Eli: Nice post. I think your dichotomy between “rejecting scientific method” or “embracing insanity” is a bit excessive. I can see how some people feel that having all these multiple worlds around doesn’t seem like the “simplest” explanation. They accept Bayesian reasoning and Occam’s razor, but the notion of simplicity that they use is intuitive. Thus, I would view the essence of this post to be: if one views complexity in terms of minimum effective description length then MWI is a better explanation than Copenhagen.
I would also note that asking physicists to be strict Solomonoff/Bayesian/Occamists is asking for rather a lot considering that something like half the statisticians in the world are not Bayesian, and of those who are relatively few know of Solomonoff induction.
Finally, while this went part of the way to answering my question, the connection to AGI safety isn’t yet obvious to me.
Tim: “impractical theoretical model of serial computation”. Just because a theory isn’t practical doesn’t make it wrong. For example, should we define randomness in a way that is easy to test for? No, if we did it would break the very concept of what randomness means. Also, what does “serial” have to do with it? There is no concept of time in Kolmogorov complexity and a serial machine can emulate a parallel one, thus this distinction isn’t relevant.
I love this. Science is a bias :-)
Hi Eliezer,
Have you ever read about the so-called Bayesian approach to quantum mechanics promoted by Caves, Fuchs, and Schack? These three are the most radical Bayesians I know, and they all reject many worlds. If you really care about overcoming bias, you should seek out their papers and give them a read.
“Comparisons have also been made between QBism and the relational quantum mechanics espoused by Carlo Rovelli and others” (WP)
;-)
I’d like to know what you’re implying with this post, but I’m unable to make a confident guess. Are you claiming that this WP quotation has something to do with many worlds?
Eliezer_Yudkowsky: You discuss whether training in the art of Bayes would produce scientists who don’t make these errors. What do you make of (as per Robin_Hanson’s account) how in the movie Expelled, Richard_Dawkins places a >1% probability on earth life having been designed? Is this an instance of a major figure not “getting” Bayesian inference, since he doesn’t also advocate diverting research funds to that idea?
(Incidentally, when I corrected, here, Richard_Dawkins’s definition of a “good theory” on edge.org, his entry there was shortly thereafter changed. If you passed on my correction to him, I would be interested in knowing why you didn’t tell him it was me.)
Eliezer, I guess the answer you want is that “science” as we know it has at least one bias: a bias to cling to pragmatic pre-existing explanations, even when they embody confused thinking and unnecessary complications. This bias appears to produce major inefficiencies in the process.
Viewing science as a search algorithm, it follows multiple alternate paths but it only prunes branches when the sheer bulk of experimental evidence clearly favours another branch, not when an alternate path provides a lower cost explanation for the same evidence. For efficiency, science should instead prune (or at least allocate resources) based on a fair comparison of current competing explanations.
Science has a nostalgic bias.
The science world, as much as the rest of the “worlds” made up of people who share something which everybody cherishes, has to have the status quo bias. (The enigmatic add-on: one cannot escape the feeling that there is such a thing as time.)
This may be nitpicking and I agree with your overarching point, but I think you’re drawing a false dichotomy between Science and Bayes. Science is the process of constructing theories to explain data. The theory must optimize a tradeoff between two terms:
1) ability to explain data 2) compactness of the theory
If one is willing to ignore or gloss over the second requirement, the process becomes nonsense. One can easily construct a theory of astrology which explains the motion of the planets, the weather, the fates of lovers, and violence in the Middle East. It just won’t be a compact theory. So Science and Bayes are one and the same.
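(One standard way to write that tradeoff down, for reference: the two-part minimum description length score, which under a simplicity prior of $2^{-L(H)}$ is the same thing as maximizing the Bayesian posterior.)

$$
\arg\min_{H}\;\bigl[\,L(H) + L(D \mid H)\,\bigr]
\;=\;
\arg\max_{H}\;2^{-L(H)}\,\Pr(D \mid H)
\;=\;
\arg\max_{H}\;\Pr(H \mid D),
\qquad
L(D \mid H) = -\log_2 \Pr(D \mid H).
$$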
Eli—As you said in an earlier post, it is not the testability part of MWI that poses a problem for most people with a scientific viewpoint, it is the fact that MWI came after Collapse. So the core part of the scientific method—testability/falsifiability—gives no more weight to Collapse than to MWI.
As to the “Bayesian vs. Science” question (which is really a “Metaphysics vs. Science” question), I’ll go with Science every time. The scientific method has trounced logical argument time and time again.
Even if there turn out to be cases where the “logical” answer to a problem is correct, who cares if it does not make any predictions? If it is not testable, then it also follows that you can’t do anything useful with it, like cure cancer, or make better heroin.
I also think you are taking the MWI vs. Copenhagen debate too literally. The reason why they are called interpretations is that they don’t literally say anything about the actual underlying wave function. Perhaps, like Goofus in your earlier posts, some physicists have gotten confused and started to think of the interpretations as reality. But the idea that the wave function “collapses” only makes sense as a metaphor to help us understand its behavior. That is all that a theory that makes no predictions can be—a metaphor.
MWI and Copenhagen are different perspectives on the same process. Copenhagen looks at the past behavior of the wave function from the present, and in such cases the wave function behaves AS IF it had previously collapsed. MWI looks at the future behavior of the wave function, where it behaves AS IF it is going to branch. If you look at it that way, the simplest explanation depends on what you are describing: if you are trying to talk about YOUR past history in the wave function, you have no choice but to add in information about each individual branch that was taken from t_0 to t, but if you are talking about the future in general, it is simplest to just include ALL the possible branches.
If there is a “very convincing philosophical argument” that we should go with the majority, and yet we see the majority holding countless silly beliefs that even a little bit of primary evidence and some cursory examination show as being invalid, what does that tell us?
It tells us that the very convincing argument has at least one fatal error. It tells us that our ability to be convinced is fallible. And it tells us that our argumental-validity-checking has some bugs.
This dilemma feels forced. I see where you’re coming from, and I do feel that a waveform disappearing spontaneously is a massive, unwarranted detail, but I don’t see how this sets up a contradiction.
The further a scientific prediction feels from our intuitive human experience, the harder it is to internalise. Physicists wanted an explanation for why we only see one world. They postulated that the waveform collapses into the world we see. And fair enough, it’s not difficult, on the face of it, to feel that that must be true, even if it isn’t. But how is that any different from saying the Sun goes round the Earth, because that’s what we ‘see’? The former is no more ‘science’ than the latter—it’s just wearing a flashier lab coat. Learnt that reading this blog.
Eliezer, you’ve spent so much time showing us that only experiment is admissible in science. Well then collapse is not science as you would define the term, right? Sure, you can demonstrate that our single world is there. But if our (consistently verified) theory predicts extra worlds as well, saying ‘they must disappear, since we only see one’ is adding a Cosmological Constant (Quantum Constant?). Collapse now sets the ‘Anthropocentrism, Not Science!’ warning light off in my head, for this reason. Hence I don’t feel your dilemma.
Am I getting this wrong somewhere?
P.S. Dave—dangerous attitude. It’s impossible to know whether or not your theory will make a prediction at some point. Better to work out what’s correct and bear it in mind as you go than consign it to the dustbin of irrelevance if it doesn’t prove itself right away.
Surely “science” as a method is indifferent to interpretations with no observable differences.
Your point seems to be that “science” as a social phenomenon resists new untestable interpretations. Scientists will wander all over the place in unmappable territory (despite your assertion that “science” rejects MWI, it doesn’t look like that to me).
If Bayesianism trumps science only in circumstances where there are no possible testable consequences, that’s a pretty weak reason to care, and a very long tortured argument to achieve so little.
Rational agents should WIN. Not lose scientifically, or socially acceptably, WIN. :-)
I hope you talk about normative implications eventually, and address bambi’s point, so we know just why this mistake matters. (Well, actually, implications of multiverse theories generally, so MWI doesn’t practically matter if we live in a multiverse for some other reason.)
Humans need empiricism as a check because we’re, in absolute terms, pretty bad reasoners. Eliezer’s “Science” (which is a bit of a strawman, but excusable) goes too far in the right direction from overconfident pure rationalism. (I believe this is the point of the Aumann example, maybe even of the whole post.) This should diminish confidence in pure logical argument, even where experiment is silent, but the case for MWI still looks strong to this non-physicist.
This confuses me as well.
I’m trying to comprehend how this is a dilemma… Science supposedly teaches that for any two theories that explain the same data, the simplest one is correct. Bayes can’t talk about explaining data without invoking the science that collected the data… Can he?
It would seem that the theory of science includes Bayesian theory.
On the other hand, the practice of science requires either exhibiting evidence for theories or testing falsifiable theories. Many Worlds can trivially be falsified by actually finding a collapse, while its main distinguishing feature cannot be directly demonstrated. Thus, science focuses on searching for a collapse.
So… I still don’t see the contradiction.
I also have to speak up in favor of metaphysics—one poster claimed he’d take Science over Metaphysics anytime. Does he realize that that statement is itself metaphysical? Science cannot determine whether Science has priority over other things, and metaphysics by definition has priority over physics.
“Computer programs in which language? The kolmogorov complexity of a given string depends on the choice of description language (or programming language, or UTM) used.”
They only depend to within a constant factor. That’s not the problem; the REAL problem is that K-complexity is uncomputable, meaning that you cannot in any way prove that the program you’re proposing is, or is NOT, the shortest possible program to express the law.
Well, you can obviously prove that it isn’t the shortest program by exhibiting another, shorter program, but I suppose you mean that there is no shortcut to finding this shorter one.
I don’t believe you.
I don’t believe most scientists would make such huge mistakes. I don’t believe you have shown all the evidence. This is the only explanation of QM I’ve been able to understand—I would have a hard time checking. Either you are lying for some higher purpose or you’re honestly mistaken, since you’re not a physicist.
Now, if you have really presented all the relevant evidence, and you have not explained QM in a way which makes some interpretation sound more reasonable than it is (what is an amplitude exactly?), then the idea of a single world is preposterous, and I really need to work out the implications.
Tim Tyler: According to Hedweb, most “leading cosmologists and other quantum field theorists” thought that the “Many-Worlds Interpretation” was correct 10 years ago.
Ah, but did they have to depart from the scientific method in order to believe it? The question isn’t what scientists believe; scientists don’t always follow the scientific method. The physicists who embrace MWI are acting rationally; the physicists who reject it are acting scientifically—that’s the theme of this post.
Manon: I don’t believe you.
Then you certainly understood me. This is another comment that makes me want to cheer, because it means you really got it.
This is the only explanation of QM I’ve been able to understand—I would have a hard time checking.
Go back and look at other explanations of QM and see if they make sense now. Check a textbook. Alternatively, check Feynman’s QED. Find a physicist you trust, ask them if I got it wrong, and if I did, post a comment. Bear in mind that a lot of physicists do believe MWI.
I myself am mostly relying on the fact that neither Scott Aaronson nor Robin Hanson nor any of several thousand readers have said anything like “Wrong physics” or “Well, that’s sort of right, but wrong in the details...”
(what is an amplitude exactly?)
It’s always treated as a complex number with a real and imaginary part, though I prefer Feynman’s calling it a “little arrow”, since that makes it clear there’s no preferred direction of the little arrows.
then the idea of a single world is preposterous, and I really need to work out the implications.
The most important implication is that the scientific method can break down. There are some minor ethical implications of many-worlds itself (e.g., average utilitarianism suddenly becomes a lot more appealing) but mostly, it all adds up to normality.
Maybe it’s because I was hung-over during most of my undergrad math lectures, but one of the things I’m having trouble coming to terms with is the fact that imaginary (rather, complex) numbers turn out to be so real. I can deal with the idea of amplitudes having two dimensions and rotating through them, and with amplitudes canceling each other out because of this, but it’s not clear to me, if there is “no preferred direction” for amplitudes, why the square root of minus one is involved.
Are there other mathematical formulations possible? Or a good source for this that could re-align my intuitions with reality?
Matrices of the form ((a, -b), (b, a))---that is, compositions of a scaling and a rotation in the plane—are isomorphic to complex numbers, which makes sense because when we multiply complex numbers in polar form, we multiply their magnitudes and add their arguments.
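(A quick numerical spot-check of that isomorphism, as a sketch using NumPy:)

```python
import numpy as np

def as_matrix(z: complex) -> np.ndarray:
    """Represent a + bi as the 2x2 real matrix ((a, -b), (b, a))."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z1, z2 = 2 + 3j, -1 + 0.5j

# Multiplying the matrices corresponds exactly to multiplying the complex numbers.
print(np.allclose(as_matrix(z1) @ as_matrix(z2), as_matrix(z1 * z2)))  # True

# Each such matrix is a rotation composed with a scaling, so "no preferred
# direction" just means nothing singles out a particular angle in the plane.
```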
Re: “Tim, I thought there was only one “shut up and calculate” interpretation, and that’s the one where you shut up and calculate—rather than talking about many worlds. Perhaps you mean it’s a “talk rather than calculate” interpretation?”.
No, I mean those interpretations are functionally equivalent—in that they make the same predictions. That is not true of CI, or other collapse theories—e.g. see: http://www.hedweb.com/everett/everett.htm#detect
“Um… there really aren’t any extremely strong arguments for majoritarianism. That position confuses conclusions with evidence.”
What’s more, it implies that human beliefs are normally distributed. I posit they are not, with extra weight being given to concepts that are exciting/emotional or arousing. We have a built-in bias in the direction of things that are evolutionarily important (i.e., babies, scary stuff).
“I’m trying to comprehend how this is a dilemma… Science supposedly teaches that for any two theories that explain the same data, the simplest one is correct. Bayes can’t talk about explaining data without invoking the science that collected the data… Can he?”
That’s Occam’s razor, not Science. The scientific method >is taken to suggest< that an untestable theory is of no use. This isn’t the case, since every theory starts out untestable, until someone devises a test for it. What’s more, Occam’s razor isn’t some immutable natural law: it’s just a probability—the simplest explanation is >usually< the right one, and so why not start there and move up the ladder of complexity as required: that way, you can cover the most likely (all other aspects being equal) explanations with the minimum amount of work.
I call false dichotomy on this Bayes vs Science lark. It’s perfectly reasonable to work with untestable theories, even ones that remain implicitly untestable, and even ones that go against observed phenomena, as long as one recognizes that, somewhere, there is a hole in the grand equation. “Spooky action at a distance”, anyone?
The uncomputability is unfortunate, but hardly fatal. You can just spend some finite effort trying to find the shortest program that produces each string, using the best heuristics available for this job, and use that as an approximation and upper bound. If you wanted to turn this into a social process, you could reward people for discovering shorter programs than the shortest-currently-known for existing theories (proving that they were simpler than known up to that point), as well as for collecting new evidence to discriminate between them.
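(A minimal sketch of the “approximation and upper bound” idea, with an off-the-shelf compressor standing in for the best currently known description; the data and the shorter “generator” string are toy examples.)

```python
import zlib

def description_length_upper_bound(data: bytes) -> int:
    """A computable upper bound on the (uncomputable) shortest description:
    the length of whatever encoding we have actually managed to find."""
    return len(zlib.compress(data, 9))

observations = bytes(range(256)) * 64  # highly regular "evidence"

bound = description_length_upper_bound(observations)
print("current best upper bound:", bound, "bytes")

# If someone later exhibits a shorter description (here, the text of a
# generating expression), the bound simply improves; nothing ever certifies
# that the true minimum has been reached.
generator = b"bytes(range(256)) * 64"
bound = min(bound, len(generator))
print("improved upper bound:    ", bound, "bytes")
```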
That doesn’t logically follow. Neither acceptance nor rejection implies correct comprehension of your claims.
“The most important implication is that the scientific method can break down.”
I don’t understand how this is a consequence of MW. We’ve always known that the scientific community can and does break down. The scientific method breaks down even theoretically (if you use K-complexity to assess it). And I’m not even sure that the MWI situation is a breakdown… It seems there are more than two interpretations (it’s not just collapse versus many-worlds).
“There are some minor ethical implications of many-worlds itself (e.g., average utilitarianism suddenly becomes a lot more appealing) but mostly, it all adds up to normality.”
Question: Does every event with two possible/plausible outcomes result in two distinct worlds? I don’t think that’s the case—it seems that multiple plausible outcomes also result from an ambiguous problem description (even if the situation is actually completely deterministic). It seems that the primary source of multiple outcomes in ethics is not the same as the source of multiple worlds in quantum theory—therefore you can’t sum across the multiple worlds of quantum theory to get an ethical probability of 1.
-Wm
If the exact physical state is underdetermined by the problem description, then there will be separate branches of the wavefunction for each possible state, although they might have diverged arbitrarily long ago. So, yes.
“I disagree; I think the underspecification is a more serious issue than the uncomputability. There are constant factors that outweigh, by a massive margin, all evidence ever collected by our species.”
Agreed. The constant factors really are a problem. If one has taken a few information theory courses, it’s easy to disregard this, as one usually uses Kolmogorov complexity on e.g. symbol sequences in the infinite limit. When comparing two theories, though, they have finite size and thus the constants do matter. It is probably possible to find two Turing machines such that two competing models have equal length on their respective best machines even if they differ greatly when tested on one of them.
It may be possible to construct an argument that favors an interpreter over all others, Sebastian Hagen gave a few ideas above, but it is highly non-trivial.
I’m not a physicist, I’m a programmer. If I tried to simulate the Many-Worlds Interpretation on a computer, I would rapidly run out of memory keeping track of all of the different possible worlds. How does the universe (or universe of universes) keep track of all of the many worlds without violating a law of conservation of some sort?
This comment is old, but I think it indicates a misunderstanding about quantum theory and the MWI, so I deemed it worth replying to. I believe the confusion lies in what “World” means, and to whom. In my opinion Everett’s original “Relative-State Formalism” is a much better descriptor of the interpretation, but no matter.
The distinct worlds which are present after a quantum-conditional operation are only distinct worlds according to the perspective of an observer who has engaged in the superposition. To an external observer, the system is still in a single state, albeit a state which is a superposition of “classical” states. For example, consider Schrodinger’s cat. What MWI suggests is that quantum superposition extends even to the macroscopic level of an entire cat. However, the evil scientist standing outside the box considers the cat to be in state (Dead + Alive) / sqrt(2), which is a single pure state of the Cat System. Now consider the wavefunction of the universe, which I suppose must exist if we take MWI to its logical end. The universe has many subsystems, each of which may be in superpositions of states according to external observers. But no matter how subsystems might divide into superpositions of states, the overall state of the universe is a single pure state.
In sum: for the universe to “keep track of worlds” requires no more work than for there to exist a wavefunction which describes the state of the universe.
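(Writing the example out in the usual notation may make the point sharper.) Before anyone looks, the cat subsystem, relative to the outside, is the single pure state
$$\lvert \text{cat} \rangle = \tfrac{1}{\sqrt{2}}\bigl(\lvert \text{dead} \rangle + \lvert \text{alive} \rangle\bigr),$$
and after the scientist opens the box, the joint cat-plus-scientist system is still a single pure state,
$$\lvert \Psi \rangle = \tfrac{1}{\sqrt{2}}\bigl(\lvert \text{dead} \rangle \lvert \text{sees dead} \rangle + \lvert \text{alive} \rangle \lvert \text{sees alive} \rangle\bigr);$$
the two branches are “worlds” only relative to an observer entangled inside the superposition.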
And you can simulate the single worlds interpretation on a computer without running out of resources?
Infinity squared = Infinity, and if the universe is continuous, it can be said (in a mathematical sense) that it takes no more processing power to do one universe than multiples. Besides, you have to calculate all of the worlds anyway just to get a single world.
I am interested in the answer to John Maxwell’s question as well.
In that vein, let me re-ask a question I had in a previous post but was not answered:
How does MWI not violate no-faster-than-light-travel itself?
That is, if a decoherence happens with a particle/amplitude, requiring at that point a split universe in order to process everything so both possibilities actually happen, how do all particles across the entire universe know that at that point they must duplicate/superposition/whatever, in order to maintain the integrity of two worlds where both possibilities happen?
Eliezer,
This is the main doubt I was expressing in my comment you quoted. I withdraw it.
Physicists are susceptible to irrational thinking too, but I went and stuck a “High Arcane Knowledge” label on QM. So while I didn’t mind understanding things many doctors don’t about mammographies, or things many biologists don’t about evolution, thinking I knew anything fundamental about QM many physicists hadn’t figured out set off a big “Who do you think you are?” alarm.
I hereby acknowledge quantum physicists as human beings with part-lizard-inherited, spaghetti-coded-hacky brains just like me, and will try to be more rational about my sources of doubt in future.
William_Tanksley et al: Correct me if I’m wrong, but I had immediately assumed that the framing “science or Bayes” means “the scientific world as it exists today, or Bayes”, not “ideal scientific research vs. Bayes”. Eliezer_Yudkowsky presumably equates ideal scientific research with following Bayes.
Even if we think of MWI as ‘everything duplicating’, Wiseman, it doesn’t have to happen faster than information about the event’s outcome travels, which is at the speed of light.
So if an electron on Earth causes a branching to take place, Alpha Centauri B wouldn’t have to split for about four years—because that’s how long it would take the information about the electron’s behavior to reach the star.
Re: “a serial machine can emulate a parallel one, thus this distinction isn’t relevant.”
Kolmogorov complexity/Solomonoff induction are language-specific. Not all languages are equivalent, and descriptions in different languages may be totally different lengths. It is true that any universal machine can simulate any other—but it takes a description of that simulator to do so, and that takes up space, which is a big deal, if the simulation is not tiny.
Re: They only depend to within a constant factor. That’s not the problem [...]
That /is/ a problem, when the “constant factor” is of the same order of magnitude as the things you are measuring. How complex is pi? “Print PI;” is not very complex, but its complexity grows if you have to implement a whole interpreter first.
The issue of what language to use /is/ a real issue. IMO, the case for using Turing machines is poor, because they are crappy one-dimensional serial computers, which were never intended for actual use.
My position is that we ought to use real small machines to measure complexity if we are doing practical things, such as judging scientific laws. But this is a moving target—and it means that we don’t know how to measure complexity properly yet, because we cannot yet construct molecular computers.
...but even in our current state of ignorance, we can do better than use a Turing machine. Almost anything is better than using a Turing machine.
Re: “I don’t believe most scientists would make such huge mistakes”—the CI really is pretty stupid, retrospectively, IMHO.
“That’s Occam’s razor, not Science. The scientific method >is taken to suggest< that an untestable theory is of no use.”
Watch that passive voice—unless you’re going to actually claim that the scientific method suggests that, I don’t care what someone somewhere took it to suggest.
The scientific method doesn’t suggest anything. It’s a method, not a philosophy. As a method, it gives you steps to follow. A hypothesis is untestable; a theory’s been tested. A model integrates theories. MW is a model.
“What’s more, Occam’s razor isn’t some unmutable natural law: it’s just a probability—the simplest explanation is >usually< the right one, and so why not start there and move up the ladder of complexity as required: that way, you can cover the most likely (all other aspects being equal) explanations with the minimum amount of work.”
Occam’s razor is part of science, not to be distinguished from the rest. Without it, there’s absolutely no way to distinguish experimental results from lab noise—without it the “best explanation” for an unexpected but reproduced experimental result might be “sorry, I must have messed something up, and the guy attempting to reproduce my results must have messed the same thing up in the same way to get the same result.”
You’re right that it has to be applied as a rule of thumb, but it’s also fundamental to science as a reductionist pursuit.
Sebastian_Hagen: Specifying a language with all the data already specified as one of the symbols doesn’t help, because with the MML standard, you’d have to include that, AND the data you’re explaining, which makes it longer than any theory that can find regularity.
William_Tanksley: The fact that K-complexity isn’t computable doesn’t matter for determining which scientific theory is superior; you only need to know the maximum K-complexity across all known algorithms. Then, if our theories are equally good at predicting, but your max. K-complexity is longer, we don’t throw up our arms and say, “hey, I guess I can’t prove mine is better”; rather, we see that with the current state of knowledge mine is best. If at a later point you discover an algorithm that can generate your data and theory with less code, THEN yours becomes better—but to do that, you had to find a regularity we didn’t see before! which itself advances our knowledge!
Well, the ideal simplicity prior you should use for Solomonoff computation is the simplicity prior our own universe was drawn from.
Since we have no idea, at this present time, why the universe is simple to begin with, we have no idea what Solomonoff prior we should be using. We are left with reflective renormalization—learning about things like, “The human prior says that mental properties seem as simple as physical ones, and that math is complicated; but actually it seems better to use a prior that’s simpler than the human-brain-as-interpreter, so that Maxwell’s Equations come out simpler than Thor.” We look for simple explanations of what kinds of “simplicity” our universe prefers; that’s renormalization.
Does the underspecification of the Solomonoff prior bother me? Yes, but it simply manifests the problem of induction in another form—there is no evasion of this issue, anyone who thinks they’re avoiding induction is simply hiding it somewhere else. And the good answer probably depends on answering the wrong question, “Why does anything exist in the first place?” or “Why is our universe simple rather than complicated?” Until then, as said, we’re left with renormalization.
Silas, in this post I’m contrasting ideal but traditional Science—the idealized version of what Max Planck might have believed in, long before Solomonoff induction—with Bayes. (Also, I’ve never communicated with Dawkins.)
RI, be suspicious if you think you understand something most evolutionary biologists don’t know about evolution. (I don’t know about biologists who just sit around looking at cells all day.)
“If the exact physical state is underdetermined by the problem description, then there will be separate branches of the wavefunction for each possible state, although they might have diverged arbitrarily long ago. So, yes.”
Are you seriously proposing that my use of ambiguous language splits the universe? This is unbelievable. I understand how incoherency would split the universe, but how can ambiguous language do that? How about false information—if my bank tells me that my paycheck came in, is there an alternate world where my paycheck in fact DIDN’T come in?
I just don’t buy this. I think I get the quantum MW model; it makes a certain kind of sense. What I don’t get is how it enables you to claim that there are any number of worlds that you want! I think you can only claim a quantum split where there is actually decoherence, and that the splits will contain only events which had a nonzero “quantum probability” in that decoherence.
There may be a world in which my paycheck didn’t actually come in to my bank—but the explanation for that lack is NOT “because I just imagined it”, or “because it’s the negation of something that did happen”; rather, it’s because of some specific quantum decoherence which could eventually result EITHER in my paycheck arriving or NOT arriving.
What am I missing?
But you still need to pick a language to express (language+data) in. Infinite regress.
It doesn’t split the universe; your language doesn’t have any effect on the world; as I said, it means that you have to consider a larger set of microstates, which means a larger set of outcomes. The better specified the initial conditions are, the better you can predict.
Yes, there is such a world. Yes, there is a causal explanation for it, based on some decoherent split that occurred a long time ago and caused a neurotransmitter to jump one way (or the other) in some secretary’s brain, causing ver to get distracted (or not), and to forget to note one deposit (or not). All asymmetry in the universe results from quantum ‘randomness’ at some point, even if it was back at the Big Bang, so any physically possible world is in the wavefunction somewhere (as are physically impossible but logically possible worlds in simulations).
So, in a sense, I was wrong to affirm “Does every event with two possible/plausible outcomes result in two distinct worlds?”, because only subjectively does the event itself split the world; the objective split might have occurred billions of years ago. But, yes, every outcome is real in some world.
Let me pull the money quote from the article:
“The tradition handed down through the generations says that a new physics theory comes up with new experimental predictions that distinguish it from the old theory.”
This is superficially correct, but I think it’s irrelevant. Quantum theory is already a theory with well-established laws. None of the contending interpretations of those laws—many-worlds, collapse, hidden-variables, and so on—are theories, and none of them propose new laws (suggesting that there might be a law we don’t know doesn’t count). They’re all attempts at models, and they all suck. Models should have explanatory power; none of these add any explanatory power.
The real reason some people don’t care about Many Worlds isn’t that they’re irrationally wedded to Copenhagen (although some people are). It’s that both Copenhagen and MW suck so badly that the only way to stick to one is to be irrationally wedded to it. Back when all we had was Lorentz’ equations there were tons of possible ways to explain them; as soon as Einstein proposed a fruitful model all of the other explanations vanished (well, as soon as the fruitfulness became obvious).
I feel that I’m equivocating, though. I used the term ‘fruitful’, which hides the meaning “producing experimental results”. I suppose that makes me a devotee of Scientism as opposed to Bayesianism. I have much reading to do on this site, and I’m thrilled to have the opportunity. Thanks for interacting with us.
“Of course I had more than just one reason for spending all that time posting about quantum physics. I like having lots of hidden motives, it’s the closest I can ethically get to being a supervillain.”
Your work on FAI is still pretty supervillain-esque to most SL0 and SL1 people. You are, essentially, talking about a human-engineered end to all of civilization.
“I wanted to present you with a nice, sharp dilemma between rejecting the scientific method, or embracing insanity. Why? I’ll give you a hint: It’s not just because I’m evil. If you would guess my motives here, think beyond the first obvious answer.”
The obvious answer is that the scientific method is simply an imperfect approximation of ideal rationality, and it was developed before Bayes’ Theorem was even proposed, so we should expect it to have some errors. So far as I know, it was never even defined mathematically. I haven’t thought of any non-obvious answers yet.
“I don’t believe you. I don’t believe most scientists would make such huge mistakes.”
It took thirty years between the original publication of Maxwell’s laws (1865) and Einstein’s discovery of their inconsistency with classical mechanics (~1895). It took another ten years before he published (1905). In the meantime, so far as I know, nobody else realized the fundamental incompatibility of the two main theories of classical physics.
I can think of a few reasons why you would do this, although I’m not sure which one you had in mind.
Primarily, it’s to evaluate the extent to which we commenters accept what you say on face value, particularly when we’re not well informed to begin with. I don’t mean picking at the specifics of examples, but whether we’re evaluating what you’re saying for internal consistency between posts.
For instance, the ‘many worlds’ argument you’ve presented DOES seem more plausible than collapse, but it certainly still seems mysterious. Having universes sprouting in all directions is bad enough, but something like ‘mangled worlds’ whereby there are arbitrary cutoffs that make a world disappear is even worse. It may be an improvement, but it sure doesn’t feel like the final word, even though it’s presented as such.
I think this in part gets to the heart of why the mistake went unspotted for so long. Because Bohr and Schrödinger and the rest said that collapse was what was going on, and people tend to take these things at face value. Who wants to be the first guy to publicly disagree with Bohr? We didn’t have 30 years of physicists forming bad judgments, we had a couple of early physicists with bad judgments and 30 years of people taking their word at face value because they didn’t understand the problem exactly themselves.
Various people:
The reference machine is chosen to be simple, usually by limiting its state x symbol complexity. That’s not perfect and various people have tried to come up with something better but as yet none of these efforts have succeeded. In terms of prediction it’s not a problem as Solomonoff’s predictor converges so amazingly fast—faster than 1/n where n is the number of bits of input data (this isn’t quite true and Li and Vitanyi don’t quite get it right either, see Hutter: On universal prediction and Bayesian confirmation. Theoretical Computer Science, 384, 33-48, 2007 for the gory details). Anyway, the convergence is so fast that the compiler constant for a simple UTM (and we know they can be just a few hundred bits) quickly becomes irrelevant.
In terms of determining the generating semi-measure (“true hypothesis”) things are a bit more difficult. If two hypotheses assign the same probability to the observed data (same likelihood value) then the data will not cause the relative probability of these in the posterior to change, they will simply be scaled by the evidence factor. Of course this is no problem for prediction as they agree, but it does mean that Solomonoff induction is somewhat crude in deciding between different “interpretations”. In other words, if the data can’t help you decide then all you have left is the prior. The best we can do is to tie down the reference machine complexity.
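(For reference, the standard objects behind this paragraph: the Solomonoff prior over a universal machine $U$, and the posterior-odds form of Bayes’s theorem, which makes the equal-likelihood point explicit.)

$$
M(x) \;=\; \sum_{p\,:\,U(p)\,=\,x*} 2^{-\ell(p)},
\qquad
\frac{\Pr(H_1 \mid D)}{\Pr(H_2 \mid D)}
\;=\;
\frac{\Pr(D \mid H_1)}{\Pr(D \mid H_2)} \cdot \frac{\Pr(H_1)}{\Pr(H_2)}.
$$

When the likelihood ratio is 1, the data leave the odds at their prior value, which in this setting is fixed by the (reference-machine-dependent) description lengths of the two hypotheses.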
In any case I think these things are a bit of a side issue. In my view, for anybody who doesn’t have a “we humans are special” bias, MWI is simpler in terms of description complexity because we don’t need to add a bunch of special cases and exceptions. Common sense be damned!
Change, not end. (The general point still stands.)
Nick, thank you for the post. It almost answered my question—I just need to make sure I’m not totally misreading it.
“But, yes, every outcome is real in some world.”
When you say “outcome” do you mean “every outcome of quantum processes”, or do you mean “every event”? Do you mean “every possible result of physical processes” or do you mean “every configuration regardless of physical antecedents”?
As a specific example, is there a world where one human was born of a virgin, performed miraculous healings, prophesied his own death and resurrection, rose from the dead after being buried in a tomb for three days and three nights, was seen by 500 people, and ascended some 40 days thereafter—and do you attribute the existence of this world entirely to quantum decoherences splitting the worlds (i.e. if you believed in God you’d have to answer ‘no’)?
The latter, since there are no Garden of Eden patterns in physics. (I’m not sure how the Big Bang fits in.)
There is such a world, with unfathomably tiny measure (and a tiny portion of your and my measure is in that world*). Or, rather, there is an ensemble of such worlds, that came into being in different ways. I suspect the overwhelmingly vast majority (by measure) involve local “gods”, but a small portion do arise through pure coincidence.
*If you consider “William Tanksley” to refer to “the set of all processes implementing William Tanksley’s conscious experience”, as I do.
“The latter, since there are no Garden of Eden patterns in physics.”
Thank you for your excellent job of communicating (and the GoE link decreased possible ambiguities, too).
How do we know that there are no Garden of Eden patterns? That is a very interesting claim. In attempting to reverse-engineer it, I remembered that according to quantum theory, each wavefunction is nowhere zero. Thus, any collection of particles could tunnel into place over any distance in any organization you could possibly specify. Is that the key to the proof?
At any rate, I’m “sticking to my guns” about many-worlds not affecting the desirability of average utilitarianism. The sum over all direct results of my actions over all worlds is still overwhelmingly determined by already determined macroscopic causes not known to me, NOT by as-yet undetermined quantum decoherences splitting worlds; my probability computations are dominated by unknowns, not by unknowables.
I’m not arguing that average utilitarianism is wrong; I’m just saying that MW doesn’t seem to appreciably affect its desirability.
Among all these comments, I see no appreciation of the fact that the version of many worlds we have just been given CANNOT MAKE PREDICTIONS, whereas “collapse theories” DO.
Yes, Schrödinger evolution plus collapse is more complicated than just Schrödinger evolution. But the former makes the predictions, and the latter does not. We have been given the optimistic assertion that maybe the predictions are already somewhere inside the theory without collapse, but this remains to be shown. That’s what the meaning of this whole “quest for the Born probabilities” is about! It is, precisely, the quest to restore the predictive capacity of quantum mechanics after you’ve taken collapse away. And the fact that it’s a quest tells you that this is a research program whose goal is not accomplished.
Give me a single example of a successful use of Bayesianism that is not predicated on its being an explanation of the success of the scientific method and maybe then I’ll consider choosing Bayes over science.
“Among all these comments, I see no appreciation of the fact that the version of many worlds we have just been given CANNOT MAKE PREDICTIONS, whereas “collapse theories” DO.”
So far as I know, MWI and collapse both make the exact same predictions, although Eliezer has demonstrated that MWI is much cleaner in theoretical terms. If there’s any feasible experiment which can distinguish between the two, I’m sure quantum physicists would already have tried it.
William, I’ve replied off-site.
As has been pointed out before, a feature of MWI that produces the Born probabilities is a priori no less likely than a collapse postulate that produces the Born probabilities. I think.
Tom, Nick, MWI does not make predictions! Well, there is a version of MWI that does, but it is not the one being advocated here.
What makes predictions is a calculational procedure, like sum-over-histories. That procedure has an interpretation in a collapse theory: the theory explains why the procedure works. The version of MWI that Eliezer has expounded cannot do that. He has said so himself, repeatedly—that the recuperation of the Born probabilities is a hope, not an existing achievement.
Is that clear? I feel like I had better say it again. The bare minimum that all quantum physicists share is an algorithm for making predictions. An objective collapse theory offers an explanation as to why that algorithm works. It is a theory about what is actually there, and how it actually behaves, from which that algorithm can be derived.
Many worlds is also a theory (or a class of theories) about what is actually there. But when you count the worlds, the numbers come out wrong, badly wrong. So something has to change. Robin Hanson has suggested a different approach to the problem; but as I have objected, it remains vague on the crucial detail of exactly when the transition between one world and many worlds takes place. In any case, this brand of many worlds simply cannot yet offer an exact justification of the predictive algorithm in the way that a collapse theory can. It’s not true that MWI and collapse make the same predictions; rather, the hope is that MWI will predict what collapse already predicts, once we understand it properly.
Ah, but Mitchell, the collapse interpretation doesn’t explain why the Born probabilities are what they are.
So the version of many-worlds that I believe in, as a predictive theory, is:
(1) The wavefunction is real and evolves unitarily.
+
(2) For some unknown reason, experimental statistics match the Born probabilities.
In combination, these statements constitute a predictive theory.
As for the objection that (2) hasn’t been explained, collapse “explains” it by tacking on, “And the reason for (2) is that parts of the wavefunction spontaneously vanish faster than light for some unknown reason, leaving only one survivor because we like it that way, and in the lone survivor, for some unknown reason, experimental statistics match the Born probabilities.” If you look closely, this explains (2) by strictly containing it.
This is a stupid analogy, but:
Suppose we have a software package, UnitaryQM, of predefined functions. There is a competition, the Kolmogorov Challenge, in which you have to implement a new function, Born(). There are two development teams, Collapse and MWI. Collapse does the job by handcoding a new primitive function, collapse(), and adding it to the library. The MWI team really wants to use just the existing functions, but MWI 1.0 actually gives the wrong answers. The current hope for MWI 2.0 is a function called mangle(), but mangle only exists as pseudocode. The MWI team know how they want it to behave in certain limits, but to completely specify it requires an arbitrary parameter, objectiveDecoherenceThreshold.
Now interestingly, there have been two generations of Collapse as well. In Collapse 1.0, collapse has to be called directly by the user, from a command line. In Collapse 2.0, collapse can be called from within another function; but of course, now you can’t rely on the user to determine when it happens. So instead there is an adjustable parameter, objectiveCollapseFrequency.
The Collapse 2.0 team are at peace with the fact that their implementation of Born requires a free parameter, and they have consequently submitted some actual code to the Kolmogorov Challenge. The MWI 2.0 team are not. As a result, they haven’t submitted any code, just pseudocode. Obviously Collapse 2.0 wins by default.
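If it helps, the same analogy can be rendered as a toy Python sketch (all names hypothetical, following the pseudocode above); the only point it makes is that the challenge scores entries that actually run.

```python
# Toy rendering of the analogy (all names hypothetical): the challenge only
# scores submissions that run; elegant pseudocode scores nothing.
def collapse_v2(amplitude, objective_collapse_frequency=1e-3):
    # Complete but inelegant: carries an arbitrary free parameter.
    return abs(amplitude) ** 2

def mangle(amplitude):
    # MWI 2.0 candidate: behaviour known only in certain limits, nothing runnable yet.
    raise NotImplementedError("objectiveDecoherenceThreshold unspecified")

for name, entry in [("Collapse 2.0", collapse_v2), ("MWI 2.0", mangle)]:
    try:
        entry(0.6 + 0.8j)
        print(name, "submits working code")
    except NotImplementedError as err:
        print(name, "submits only pseudocode:", err)
```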
That’s a rather good explanation of the issue at hand.
Tim: “Solomonoff induction is simply an attempt to formalise Occam’s razor around an impractical theoretical model of serial computation.”
This is distinct from a Universal Turing Machine how?
Something just clicked for me. I mean, regarding the subject of the original post. There is a true dilemma, and in that dilemma, the choices of a pure Bayesian will look crazy to a Scientist, and vice versa.
The hard difference between Science and Bayes is that Bayes does not require a model; Science does. Bayes simply predicts probabilities; Science attempts to establish a model that explains the probabilities.
Thus, a Bayesian won’t care about the quality of the model he’s given, EXCEPT that it must not be complex (a nonexistent model will work just fine).
MW (like all the others I’ve seen) is a lousy model, so science is not satisfied with it; but to a Bayesian, the quality of the model is irrelevant, so a Bayesian can accept the model or ignore it and not even ask for something better.
I’m definitely sounding like I disapprove of this pure Bayesian thinking. I’m starting to see that Science plus Bayes is more complex than Bayes alone (which is a win for pure Bayesian thought), but I’m still not sure that not being able to make models is a good tradeoff for pure simplicity.
Two other examples of physicists messing up by not employing Occam’s razor are provided by Fredkin: CPT symmetry should be just T symmetry—and mass, length and time should be bits, length and time—where “bits” have the units of angular momentum.
Re:”So far as I know, MWI and collapse both make the exact same predictions”—nope:
“Many worlds is often referred to as a theory, rather than just an interpretation, by those who propose that many worlds can make testable predictions (such as David Deutsch) or is falsifiable (such as Everett) [...]”
http://en.wikipedia.org/wiki/Many-worlds_interpretation
The issue is laid out in the FAQ:
Q36 What unique predictions does many-worlds make?
http://www.hedweb.com/manworld.htm#unique
Q37 Could we detect other Everett-worlds?
http://www.hedweb.com/manworld.htm#detect
Tim: Could you elaborate on that? How is it that CPT ought just be T?
And how should mass be in units of angular momentum?
Thanks.
Re: Fredkin, for the units issue, see:
“Chapter 20: Units B, L, T, P, D, R, A & I”
http://web.archive.org/web/20060925023734/http://digitalphilosophy.org/digital_philosophy/20_units.htm
For temporal reversibility, see:
“Chapter 30: DM and CPT”
http://web.archive.org/web/20040410145311/www.digitalphilosophy.org/digital_philosophy/30_dm_cpt.htm
I also have a page on the subject:
http://finitenature.com/cpt/
There’s also a synopsis of Fredkin’s ideas (incl. these two) in:
“Five big questions with pretty simple answers”
http://www.research.ibm.com/journal/rd/481/fredkin.html
People who want to get fundamental physics out of cellular automata could be a lot more imaginative than they are. What about small-world networks? Maybe you could get quantum nonlocality. What about networks which are only statistically regular? Maybe you could get rotational symmetry in the continuum limit. And how about trying to do without a universal time coordinate? What about creation and destruction of cells, not just alteration of cell states? Euclidean, gridlike CAs like Fredkin’s should only be a training ground for the intuition, not the template for modeling the real world.
With respect to the topic of this article, though I’ve flamed many-worlds for not really delivering on its promises, cellular-automata physics is not remotely comparable. Even billiard-ball physics is better empirically—at least it can reproduce Newtonian gravity! CAs haven’t even done that. You can’t say “Occam’s razor favors X” if you haven’t actually got X to work yet.
IMHO, Fredkin picked cellular automata for good reason. Regularity helps explain how light travels in straight lines across long distances. There /are/ asynchronous CAs, but asynchrony is mostly an unneeded complication in this context—synchronous CAs are hard enough to work with, thank you. CAs can be universal, so they can do anything any other discrete model can. CAs are low-level modelling tools, that are not easy to build things with—but it seems extremely likely that low-level modelling tools will ultimately be needed to explain physics.
Eliezer: “A little arrow”? Actual little arrows are pieces of wood shot with a bow. Ok, amplitudes are a property of a configuration you can map in a two-dimensional space (with no preferred basis), but what property? I’ll accept “Your poor little brain can’t grok it, you puny human.” and “Dunno—maybe I can tell you later, like we didn’t know what temperature was before Carnot.”, but a real answer would be better.
Suppose we lived in a universe before any quantum decoherence tests were done. And now suppose I (as a scientist with a favourite pet personal theory) put forward the theory that multiple parallel universes exist, and start fleshing it out. One of the predictions this theory would make would be about the way entangled photons’ probabilities change at a distance. Would not performing the test just described, and coming up with a set of probabilities that matched the theory’s predictions, be a valid scientific prediction?
If all that can be observed in a system are probabilities, then why does it make sense to say some probability theorem is a superset of science—science surely can make testable predictions about probabilities! (Unless I’m reading the post incorrectly, the whole argument appears to hinge on science not being able to do this.)
Also, for further thought, does that imply that the order of prediction and testing matters when considering whether something should be added to the body of evidence that science uses to aggregate ‘rightness’ across individual scientists? (Since in this case the theory was dreamed up afterwards to match the evidence, and doesn’t make much in the way of other predictions… shouldn’t that be evidence that the theory is fairly specifically talking about a very small subset of ‘possibility’, and that the subset it talks about has been tested, and is correct?)
Anyone?
Actually the many-worlds interpretation misses something very important, which is a kind of theory of mind.
Why does my consciousness follow only one world, and at what point do different worlds separate? Why am I not conscious of the many worlds but only of one world?
The one-world interpretation does not fail at this point. It seems to me that many-worlds adepts are so hypnotised by the beauty of mathematics that they forget what reality we have to account for...
It doesn’t. It splits when the worlds split.
Because there’s no communication between consciousnesses in different worlds, even if both of the consciousnesses are derived from the same T-x individual.
“T-x”?
‘T’ means the current, or relevant, specific time. I expected ‘T-x’ to be understood as ‘at some point in the past, compared to the specific moment in which the consciousnesses you’re contemplating exist’; x seconds in the past, to be specific.
That does make sense. I had interpreted the “-” as a minus sign, and the only thing Google gave me was http://en.wikipedia.org/wiki/T-X
My consciousness does not split; it follows only one world. That is my experience and what I have to account for.
Let me be more precise.
The many-worlds interpretation is based on a wonderful mathematical description of all possible pasts and futures, an infinite set of all possibilities.
I am fine with that as long as one considers this as a description of the many possibilities of the world, useful for predictions. But this interpretation is more than that: it’s claiming that this mathematical description is reality and that all these possibilities actually exist. This is not a scientific claim but a metaphysical claim.
In a sense, it claims that everything exists: the future, the past, other futures, other pasts, and so on. My question here is: at this point, what does “exist” exactly mean?
That’s why I talked about consciousness in my first comment—not to confuse everyone, but because in my opinion, the definition of existence is empirical and must involve consciousness at some point.
I feel that this model lacks a kind of “instantiation” to describe not every possibility but what actually exists… And thus it should not be considered as a reality, but rather as a potentiality of which existence is the instantiation.
One often sees talk here of something called the “Mind Projection Fallacy”. In essence, it is the error of taking something subjective and treating it as objective. A canonical example would be saying “X is mysterious”, rather than “I don’t understand X”. That is, what ought to be handled as a two-place predicate IsMysteriousTo(X,N) (topic X is mysterious to person N) is erroneously handled as a one-place predicate IsMysterious(X) (topic X is mysterious, full stop).
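(A throwaway toy sketch of that arity point, in Python, nothing deep:)

```python
# Toy illustration: the one-place predicate treats "mysterious" as a property
# of the topic; the two-place predicate makes the observer explicit.
def is_mysterious(topic):                        # fallacious one-place form
    return topic == "quantum mechanics"

def is_mysterious_to(topic, known_topics):       # two-place form
    return topic not in known_topics

print(is_mysterious("quantum mechanics"))                            # True, full stop
print(is_mysterious_to("quantum mechanics", {"quantum mechanics"}))  # False for an expert
print(is_mysterious_to("quantum mechanics", set()))                  # True for a novice
```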
One criticism that might be made of the position you are taking here is that you are falling into a form of the Mind Projection Fallacy. Try imagining Exists(X) as a fallacious one-place predicate and replace it with ExistsFor(X,N), i.e. X has existence from the viewpoint of observer N.
Similarly, you should say “X was instantiated at a point in time in my past, hence X currently exists from my viewpoint, and I expect X to cease to exist at some point in my future. Any other supposed X, existing on some other branch of reality, is not the X to which I refer.”
This, as I understand it, is the way Kripke says that issues of identity and existence must be handled in modal logics.
If you can accept this viewpoint, then it is just a small step to realizing that the people you are disagreeing with, those who consider past, future, and all branches of reality as equally “real” are actually talking about the model theory for your modal logic.
This is a preaching to the choir kind of argument. I am very very impressed with it, but you should not be surprised that quen_tin is unimpressed.
My point is that this model theory is incomplete, because it does not fully explain my experience. The model lacks a kind of instantiation.
As a “model theory of my modal logic”, it may have a heuristic interest, not an ontological one. In other words, it’s fine as long as you consider it only as a descriptive/predictive model. It’s not if you think it is reality.
But what is it that makes you think that your experience has privileged ontological significance? Is it that you think that instantiation from your viewpoint is isomorphic to instantiation from everyone else’s viewpoint? Why would you believe that with any confidence?
My experience is the only thing I can assume as real. Everything else is derived from my experience. It is thus the only thing that needs to be explained.
Indeed I find it reasonable to assume that everyone else can claim the same for him/herself.
Ah! “Reasonable to assume”. One of my favorite phrases. There are many things which it might be reasonable to assume. Unfortunately for you, the particular thing you have chosen to assume is not one of them. Because you will probably agree that I am a member of the set of people you mean by “everyone else”. But I assert that I do not and can not claim that my experience has a subjectively privileged ontological status.
I do not believe your assertion contradicts quen_tin’s specific claim either as intended or as worded.
I did not claim that my experience has a subjectively privileged ontological status. This is your interpretation. I meant it has a subjectively privileged epistemological status.
In the great grandparent, you wrote
Perhaps you use the word “real” differently than I do, but it sounded to me as though an ontological assumption was being made. And that you were then extending that private ontology-of-experience to everyone else by a further assumption.
I’m happy to let you be an empiricist who is epistemologically cautious about what you can know beyond personal experience. I’m less happy to allow you to limit what can exist to that which you can know. As Eliezer argues in the posting, Occam’s razor, properly understood, does not provide you a justification for this.
Starting with the assumption that this world-branch exists and has conscious people in it, there doesn’t seem to be reason to believe that the different split-off branches would have less-conscious people in them, assuming they exist. For the other branches not to exist, there would need to be something special that privileges this one. I’m not sure what this would be like. The fact that we have ended up in the world-branch we have ended up doesn’t seem very special, if the starting assumption is that all world-branches with people in them exist and have conscious people.
I’m pretty much basing my own intuition in algorithmic information theory. It’s more complicated to define physics with the weird quantum stuff in it and then specify a single “real” world as the result of every quantum interaction, rather than to just define the physics with the weirdness and not give any single world-branch a privileged status. Simple things are more likely than complex things.
One branch is privileged: mine. It is an empirical fact.
It’s all about existence.
I will repeat your argument with time; what seems wrong to me may appear more clearly:
“Starting with the assumption that the present instant exists and has conscious people in it, there doesn’t seem to be reason to believe that different instants would have less conscious people in them, assuming they exist. For the other instants not to exist, there would need to be something special that privileges this one. I am not sure what this would be like”.
That is a good argument indeed, but here is an empirical fact: only the present exists; other instants (past or future) do not exist.
That is exactly the same problem to me. The many-worlds interpretation just goes one step further by claiming that not only do past and future events “exist” the same way the present exists, but so does every alternative past/present/future. It does not account for existence.
Is this really an empirical fact? How do you define the word “exists”?
Maybe it’s not really an empirical fact, but then do you really think that the past still “exists” and the future already “exists”, as well as the present does?
How to define existence? That is the big question. What is reality? Why has the past gone, and why isn’t the future already there? And why am I myself?
I don’t have answers; I only blame the many-worlds interpretation adepts for going straight to conclusions without addressing those deep questions, as if you could step into metaphysics starting only from a predictive mathematical model of reality.
My position is very close to the position of Max Tegmark, Gary Drescher and other compatibilist B-theorists, so yes, I really really honestly believe that the past and future exist as well as the present does. At least in some sense of the word “exists”, but this is not a cop-out; the sense in which I used it must be very similar to the sense in which you used it. There is another reasonable sense of the word “exists” (corresponding to Tegmark’s frog’s view), where only some of the past and present exists, and not too much of the future.
The point is, you have several choices about how to consistently formalize your vague statement, but whichever you choose, your “empirical fact” will be factually incorrect.
I really doubt it. How could you factually prove that the past or the future exist?
Let’s say my position is a very narrow version of Tegmark’s view, and that I call “present” (with a certain thickness) the parts of the past and future that actually “exist”.
First of all, I should have written “logically inconsistent”, but “factually incorrect” sounded better. :) More importantly, it is possible that I misinterpreted what you wrote:
I interpreted this as a positive statement that an observer-independent present does actually exist, hundred percent. This is in contradiction with special relativity, as someone else already noted. Reading other comments from you in this thread, it seems like this is not what you meant. It is more like you choose the second option of my dichotomy: the frog’s view instead of the bird’s view. I have no problem with that.
Stepping into metaphysics from a predictive mathematical model of reality sounds like a step backwards to me!
I don’t know if it is a step “backwards”, as metaphysics encompasses science. I would say it’s a step outside. Anyway, that’s what the many-worlds interpretation does, in my opinion.
Given this line of reasoning, should I still believe that people other than me in this world-line are conscious beings instead of p-zombies, given that I have the privileged conscious viewpoint?
Also, I wonder if the “only the present exists” notion could be made to get into relativistic trouble in a similar way as the “only this world-line exists” seems to end up in quantum mechanical trouble.
I can communicate with other people, therefore I assume they are conscious. That is empirical evidence, which leads to an inter-subjective viewpoint.
From this inter-subjective viewpoint, we can all agree that only the present exists, and only this world.
The troubles we can get into with scientific models are always of the same kind: they are instantiation problems. Scientific models do not say anything about existence; they are only good at predictions.
Sorry for the flip reply, but shouldn’t that rather be “only that”?
I think I get your idea about the difficulty of assuming the reality of unreachable states, but you seem to keep making ungrounded jumps from intuitions to assertions of certainty.
You’re right, there is no certainty, but my jumps are not totally ungrounded. We all experience a flow of time in a single world, and the many-worlds interpretation does not really explain it.
It really does. At the level of everyday life branching explains our experiences exactly as well as a non-quantum explanation. When we happen to be using scientific apparatus our experience is better explained by MW.
It does not explain why we followed that path in the many worlds, and not another one. Our experience is “better explained”: that is, it is a good heuristic interpretation.
It doesn’t explain that because that isn’t what happened.
That is my experience. As far as I can know if something happened, that happened.
You’re confusing experience itself with your intuitions about experience. Your actual experience makes just as much sense if you perceive yourself to be part of a great tree of branches.
Then I followed a path on this tree.
Evidently.
end loop;
Rather it is consciousness that confuses people.
Precisely, the conscious experience is everything we have to account for at the end.
Dear Eliezer, could you pick a simpler example of a “traditional”-style theory that has been replaced by a “Bayesian” interpretation which is more efficient while having an equal or better fit with reality? It seems that conjoining two difficult objects in the same example lessens the chance of anyone really getting the point; i.e., quantum mechanics and proper epistemology considerations are not simple subjects. I must confess that I get the idea of the post but I can’t really get a grip on the reasoning. Thanks, Dispose.
How about E.T. Jaynes’s “Clearing Up Mysteries—The Original Goal”?
Copenhagen doesn’t fail? Wigner’s friend might disagree.
Goodness. Neither interpretation fails. Entangled states look like ordinary probabilities from inside, so you can never be both at once. Correlation with the past preserves memories in both interpretations, so you’ll never remember being in a different “branch” (MWI) or a different column of the matrix (The Matrix! I mean typical linear algebra interpretation).
In fact, now that I think about it, the measurement collapse is probably equivalent to starting with the continuous, deterministic big ol’ wavefunction of the universe, and then asking “what does it look like to an observer from inside?”
Copenhagen fails. W.r.t. Wigner’s friend, isn’t that entire thought experiment highly flawed when one considers timeless physics?
This post is beating a strawman. Beating a strawman is bad.
Just wanted to say that I enjoy your writing a good deal.
I laughed aloud at that.
I started as a many-worlds hater, but I think I can see where I’m heading. (I’m not quite there yet because I got to this article out of sequence, by accident).
Something wrong with this post, which I didn’t appreciate back in 2008, when it was made, is that it misunderstands how quantum mechanics is interpreted by most practicing physicists.
According to the post, physicists believe in wavefunction collapse, and in doing so they follow the rules of Science, but if they followed the rules of Bayes, they would believe that the wavefunction does not collapse, and thus in many worlds.
Now quite apart from the problems of many worlds, which I have pointed out here and at other posts, it is not even true that physicists, as a rule, believe in wavefunction collapse in the way it is represented here, i.e. as an actually occurring physical process.
The cognitive facts about what all the world’s physicists individually believe regarding quantum mechanics would be rather complicated—there is a diversity of opinion among physicists, and an internal inconsistency of opinion within many individual physicists—but the standard view is not wavefunction realism. The wavefunction (or quantum state vector) is like a probability function; it is a mathematical entity from which probabilities of outcomes can be calculated. There is no wavefunction in space, which evolves smoothly when not observed and which jumps discontinuously when it is observed. What’s actually there are particles (fields, strings, whatever), with quantitative properties (“observables”) which take values with probabilities derived according to the projection postulate (or according to some mathematically equivalent rule).
Theoretical physics lacks an agreed-upon picture which specifies which observables take what values and when, at all times. Quantum mechanics only says “if you care, this is what it might be doing right now”; it offers a dynamics for the wavefunction, i.e. for the probabilities, but it doesn’t offer an underlying objective dynamical framework from which wavefunction dynamics can be derived. There are various proposals (e.g. Bohmian mechanics), but they all have problems, and there are well-known difficulties (e.g. Kochen-Specker theorem, Hardy’s theorem) facing the construction of a fully objective theory which reproduces quantum mechanics.
The prevailing attitudes in physics towards quantum foundations may be confused or even deplorable, but nonetheless, my point is that the argument of this article is wrong. In fact it’s wrong twice over. First of all, most physicists do not believe in wavefunction collapse as a physical process—this is what I have just been saying—and so the starting point of the argument only describes the views of a minority. Second, the assertion that many worlds provides a quantitatively simpler theory than objective wavefunction collapse is a highly dubious one, because there is no good derivation within many worlds of the probabilities which contain all of the actual predictive content of quantum mechanics. It’s as if I were to say, “My theory of physics is blah blah blah, and though I can’t explain why in terms of blahs, my theory happens to give exactly the same predictions as orthodox quantum mechanics. Therefore, it is at least as good as orthodox quantum mechanics.” Which is the criticism I was making back in 2008.
My problem with the collapse version of QM—and this may stem from the fact that Eliezer’s explanation is the only one I’ve read that gave me a decent understanding of it (such that I am relatively confident I could pass along the basic concepts to someone else without becoming “Goofus” in some of EY’s earlier examples)—is that there is no apparent reason for the collapse.
Take a coin toss. We say the probability of heads or tails on a fair coin is .5 for each outcome. When heads eventually happens, the truth of the matter is that if we had information like the state of the coin pre-flip, the position of the hand flipping the coin, the force of the arm as it moves up, and the exact position and force of the thumb on the coin itself, we could raise our estimate of the probability of heads for that flip to .9 or better. Given more precise information, we could conceivably get the probability up to .99. Excluding quantum effects, the actual probability that the coin would come up heads in that particular instance was essentially 1.
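To put rough numbers on that (made up, obviously):

```python
# Made-up numbers: each extra piece of pre-flip information moves the
# probability of heads for THIS toss away from 0.5 and toward certainty,
# suggesting the 0.5 measured our ignorance rather than the coin.
p_heads_given = [
    ("no information", 0.5),
    ("starting face of the coin", 0.51),
    ("+ hand position and flipping force", 0.9),
    ("+ exact thumb position and force", 0.99),
]
for info, p in p_heads_given:
    print(f"{info:37s} P(heads) = {p}")
```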
This does not seem to be the case with quantum mechanics. There does not seem to be any new information that could give any insight as to why the electron went through the first slit instead of the second, or vice versa. It’s not just that the information is hidden; it doesn’t seem to exist at all. Instead, the probability itself appears to be “baked into” reality, with no reason to prefer one outcome over the other. The CI response seems to be “It just does, Born probabilities blah blah blah, accept it”, without even attempting to explain what seems to me to be a major problem with the way reality works under this interpretation. CI doesn’t actually explain the Born probabilities any better than MWI, as far as I’ve read; they just seem to have “claimed” them. For this reason, I don’t think CI satisfies your criterion of having a “derivation… of the probabilities which contain all of the actual predictive content of quantum mechanics” either. At least not any better than MWI.
If the wave functions aren’t a real property of the universe, then why the hell does reality seem to follow them? And if they are real, why did A happen when there is no reason B didn’t happen? This seems to imply that luck is a fundamental property of the universe!
It’s these two basic questions that I haven’t seen answered satisfactorily from the CI or more general collapse perspective (if such a thing exists separate from mainstream CI). The fact that most physicists believe some variation of MWI bolsters my confidence, even though the idea that decoherence effectively produces zillions of universes continuously simply blows my mind.
I’ve read the post about the invisible, untouchable, etc. objects beyond the light cone and I wasn’t convinced. I don’t understand the reasoning which says that it’s more probable that [OBJECTS WHICH I CANNOT SEE OR INTERACT WITH] exist than that they don’t. If it’s outside of my experience I necessarily have no evidence of it. I can’t even build a general rule for interacting with things outside of my experience because to be accurate that general rule would also have no evidence supporting it. Because of this, I’m skeptical of many worlds.
The thing that came closest to convincing me was a claim roughly along the lines of: “Do you think it makes sense that the particle is aware of your existence and then it decides to disappear immediately after you can no longer detect it?” But unless you’re just blatantly extrapolating your current experiences onto things that you by definition cannot directly observe then it is equally nonsense to presume that any other hypothesis has a greater probability. It sounds silly to say that the particle disappears, but silliness isn’t a relevant criterion, and many worlds also sounds silly.
I feel as though perhaps an anthropic argument would convince me, I’ve tried to outline one here. The probability that I find myself in an atypical region of the universe where particles happen to exist independent of observation, and that in the rest of the universe they don’t exist that way, vs. the probability that particles exist independent of observation everywhere, or something like that. But I’m new to anthropics and can’t construct one by myself that I believe is true.
As of now I still don’t think Bayes or Occam should lead to the conclusion that many-worlds is true.
I don’t think it says they exist. Or if it does, it is poorly worded.
Existence is not a predicate that applies when dealing with things you can’t interact with. It’s a type error to say they exist or don’t exist.
Look at it this way: beliefs have to pay rent. How are you going to act differently, what different observations do you expect, if you believe that they “exist” vs. believing they don’t?
That’s exactly what I’m saying here. Believing in many worlds doesn’t pay its rent. It also doesn’t match Bayes or Occam’s Razor. I agree that it doesn’t make sense to privilege either their existence or their nonexistence, that’s exactly what I was contending in the grandparent, that is why I said it doesn’t make sense to say that “it’s more probable that [they] exist than that they don’t”.
You’re confusing my position for the one that I am attacking. Now, of course, I might be mistaken about what EY is saying and actually many worlds has other evidence than an appeal to Bayes Theorem or to Occam’s Razor. But I don’t think so. I even believe that one of the quantum mechanics posts explicitly concedes that due to the nature of many worlds theory it is not falsifiable.
To me it seems that neither “many worlds” nor “magical collapse” theory pay their rent, when compared with each other. The only difference is that “magical collapse” theory came first, therefore it was compared with the ancient theories, and in this comparison it paid rent. Under the same circumstances, “many worlds” theory would pay the same rent; we just don’t judge it the same way, because it came later.
Occam’s razor clearly prefers “many worlds” theory, because both theories agree on the part “at a microscopic level, this and this happens, and the equation shows that on larger scales it becomes less visible...”, only the “many worlds” theory continues by saying ”...and that’s the whole story”, while “magical collapse” theory continues by saying ”...and when it becomes really difficult to see the other parts of the equation (and we refuse to say when exactly that happens, which protects us against possible falsifying), they magically disappear”. One theory is strictly a subset of another, the only difference is the additional unfalsifiable magical collapse postulate.
If you disagree with my analysis, feel free to make yours. Please divide these theories into three parts: what both theories say; what the first says and the second doesn’t; what the second says and the first one doesn’t. Intuitively it feels that the “many worlds” theory says there are many worlds, while the “magical collapse” theory does not say it—but this is not true! The “magical collapse” theory also, kind of mysteriously, says that there is something mathematically equivalent to many worlds on the microscopic level, it just somehow disappears later… and by a coincidence, it disappears exactly at the moment when it becomes difficult to measure (so if we improve our measurement, the theory will simply say that it disappears even later), so there is no experimental difference between disappearing and not disappearing.
If many worlds is only:
then I misunderstood what it says. But I don’t think many world says that things become less visible, I think it says that they become inaccessible. That’s a level at which concepts like “existence” don’t make sense. Calling it “many-worlds” seems like a misnomer, in that case, because those words seem to imply that there are many worlds, which implies existence.
Less visible, less accessible, the same thing. In physics, seeing something or touching something is an interaction of particles. (If two spaceships can see each other, they can shoot lasers at each other.)
Yes, “many worlds” is misleading, because it makes us think that there is one set of particles, and another set of particles… but it means that there is one configuration of a set of particles, and another configuration of the same set of particles. Many worlds is not like many planets, but more like many possible futures. It is not possible to travel to a parallel universe, because all your particles are already there, they are just entangled in a different configuration, along with the rest of the universe. It’s almost like saying that a set of equations has two possible solutions, but you can’t mix a part of one solution with a part of another solution.
This analogy is also imprecise (all analogies fail somewhere), because one “world” is not one specific configuration of particles, but more like a set of very similar configurations (having particles in similar places with similar speeds). This set can split—if a particle is flying near another particle, in one part of the set the particles hit, in other part they miss each other, and the futures of these configurations are no longer similar.
The quantum experiments show us that these “worlds” interact, and the interaction is greater if the worlds are more similar (if all their particles have the same position and speed, except for a very few particles). Then probabilities increase and decrease in ways we would not expect in classical physics, but now we have the equations which describe this. And these equations say that the greater the difference between two “worlds”, the smaller the interaction between them. So if you have a difference greater than 10^10 particles (which is less than the number of particles in one cell, e.g. a neuron), the interaction is almost zero; we are unable to measure it. Therefore in world A we say: “we can’t measure world B anymore, so it does not exist; speaking of its existence wouldn’t make sense”, while in world B we say: “we can’t measure world A anymore, so it does not exist; speaking of its existence wouldn’t make sense”. Practically speaking, we are right in both worlds; they are separated beyond reach.
In addition to this, the collapse hypothesis says that when the interaction is almost zero, in some unspecified moment the interaction becomes exactly zero (as opposed to staying ever decreasing but non-zero forever, as the equations say). This hypothesis is absolutely unnecessary, it does not predict any experimental outcome, it only serves as a justification for saying (in a world A) that the world B now really really really does not exist… that our seeing of the world A is more than mere saying “in a world A we are in a world A, just like in a world B we are in a world B, and the interaction between these worlds is zero for all practical purposes”… that even a hypothetical non-physical observer outside of our universe would have to agree with us that yes, the world A is real, and the world B is not, because at the moment the interaction dropped to zero, some metaphysical property of the world B was removed, but it wasn’t removed from the world A.
I favor the “shut up and calculate” school, which says any interpretations that don’t make actual predictions are both unnecessary and harmful.
Certainly, if you have to choose an “interpretation” and tell stories about other universes we can’t interact with, MWI is better than collapse, for the reasons Eliezer gives. But I don’t think we should have either of them.
It’s not a matter of privileging. Existence is not an applicable predicate. It’s not that we don’t know whether they exist. Just as other universes are neither sweet nor sour, neither happy nor sad, they neither exist nor not-exist.
I mean the same things that you do, I’m just using different words to try to express them.
I agree that “existence is not an applicable predicate”, I was just trying to roughly express what my thoughts were.
Basically, you assume things still exist when they pass outside your light cone for the same reason you think your friend still exists when he walks behind a wall. While he might teleport away as soon as he walks behind the wall and then teleport back just before he comes back, it’s much more consistent to believe he just keeps existing unless you have a reason to think otherwise.
Edit: just noticed username. Deleting posthaste.
It isn’t really that hard to wriggle this question. Why do I have to choose Science or Bayes, can’t I just choose not to have an opinion until I am more capable of making a decision? It would seem suicidal from the perspective of bias to choose a side, especially when the stakes are currently so low. Question wriggled. I don’t have to choose between Science and Bayes, I can use them both when they are useful, and simply not hold an opinion in the area where they are in some form of conflict.
I would argue that there are plenty of fields of science in which elegance is considered important.
Most prominently, mathematics. Mathematicians do run experiments, courtesy of computers, and it is the very field physics must so closely rely on. If mathematicians do not practice the scientific method, what the heck do they do?
Practice mathematics? It’s a pretty distinct thing unto itself.
That’s nice, but how does the Mathematical Method differ from the scientific one?
What differing insights do the ‘Math Goggles’ offer, as it were.
“Engineers think equations approximate reality, physicists think reality approximates equations, mathematicians don’t care about reality”?
Catchy.
Mathematicians do not test their models against Nature, the ultimate arbiter, only for self-consistency and validity within their own framework. Math will be the same in many different possible worlds. Of course, on their less-sane days they engage in philosophical debates about which axioms are right.
Nature sneaks in the back door from time to time. Math may be the same in different possible worlds, eventually, but the way it unfolds in history is going to change from world to world.
On your view, is there such a thing as a Nature against which models can be tested and which arbitrates among them?
Not mathematical models… These can be motivated by experiment, but they are not bound to make accurate predictions. If that’s what you were asking...
Mathematical theories and constructs aren’t bound by nature...but models? Models are there to model something, surely?
If I could show you an example of mathematicians running ongoing computer simulations in order to test theories (well, test conjectures for progressively higher values), would that demonstrate otherwise to you?
And it’s not as if proofs and logic are not employed in other fields when the option is available. Isn’t the link between physics and mathematics a long-standing one, and many of the predictions of quantum theory produced on paper before they were tested?
This happens, but the conclusion is different. No matter how many cases of an infinite-case conjecture I test, it’s not going to be accepted as proof or even particularly valid evidence that the conjecture is true. The point of doing this is more to check if there are any easy counter-examples, or to figure out what’s going on in greater detail, but then you go back and prove it.
That is evidence that a weaker conjecture (e.g. that the conjecture holds over some very huge range of numbers) is true.
And the proof verification can be seen as an empirical process. In fact it should be, given that proof verification is an experiment run on a physical machine which has limited reliability and a probability of error.
You go back and prove it if you can—and are mathematicians special in that regard, save that they deal with concepts more easily proven than most? When scientists in any field can prove something with just logic, they do. Evidence is the tiebreaker for exclusive, equivalently proven theories, and elegance the tiebreaker for exclusive, equivalently evident theories.
And that seems true for all fields labeled either a science or a form of mathematics.
Hmmmm...
I would say that when they do this, they are doing mathematics instead of science. (By the time that scientists can prove something with logic, they necessarily have a mathematical model of their field.)
I’d say it’s a little more complicated than this. In those fields where solid mathematical models have been developed, there’s usually some back-and-forth between experimentalists and theoreticians, and any particular idea is usually considered mildly suspect until both rigorous mathematical models and solid empirical backing exist. New ideas might emerge either from the math or from empirical findings.
These days in physics and chemistry the mathematical models usually seem to emerge first, but that’s not true for all fields; in astronomy, say, it’s common for observations to go unexplained or to clank along with sketchy models for quite a while before the math properly catches up.
If that’s the case, and if it is also the case that scientists prefer to use proofs and logic where available (I can admittedly only speak for myself, for whom the claim is true), then I would argue that all scientists are necessarily also mathematicians (that is to say, they practice mathematics).
And, if it is the case that mathematicians can be forced to seek inherently weaker evidence when proofs are not readily available, then I would argue that all mathematicians are necessarily also scientists (they practice science).
At that point, it seems like duplication of work to call what mathematicians and scientists do different things. Rather, they execute the same methodology on usually different subject matter (and, mind you, behave identically when given the same subject matter). You don’t have to call that methodology “the scientific method”, but what are you gonna call it otherwise?
Their inherently weaker evidence still isn’t empirical evidence. Computation isn’t intrinsically empirical, because a smart enough mathematician could do it in their head… they are just offloading the cognitive burden.
Fair enough, but I think that the example of mathematics was brought forth because of the thing mathematicians primarily do.
So we could rather say that “the scientific method” refers to the things scientists (and mathematicians) do when there are no proofs, which is to test ideas through experiment, and “the mathematical method” refers to proving things.
Of course, you don’t have to draw this distinction if you prefer not to; if you claim that both of these things should be called “the scientific method” then that’s also fair. But I’m pretty sure that the “Science vs. Bayes” dilemma refers only to the first thing by “Science”, since “Bayes” and “the mathematical method” don’t really compete in the same playing field.
I think that’s a workable description of the process, but using that you still have the mathematical tendency to appreciate elegance, which, on this process model, doesn’t seem like it’s in the same place as the “mathematical method” proper—since elegance becomes a concern only after things are proven.
You could argue that elegance is informal, and that this aspect of the “Science vs. Bayes” argument is all about trying to formalize theory elegance (in which case, it could do so across the entire process), and I think that’d be fair; but it’s not like elegance isn’t already a concept in science, just one that was never made an “official” part of “Science”.
So to try to frame this in the context of my original point, those quantum theorists who ignore an argument regarding elegance don’t strike me as being scientists limited by the bounds of their field, but scientists being human and ignoring the guidelines of their field when convenient for their biases—it’s not like a quantum physicist isn’t going to know enough math to understand how arguments regarding elegance work.
You have it backwards. Evidence is the only thing that counts. Logic is a tool to make new models, not to test them. Except in mathematics, where there is no way to test things experimentally.
Your claim leads me back to my earlier statement.
Because as Kindly notes, this happens. Mathematicians do sometimes reach for mere necessary-but-not-sufficient evidence for their claims, rather than proof. But obviously, they don’t do so when proof is more accessible—and usually, because of the subject matter mathematicians work with, it is.
There is a difference between checking the internal consistency of a simulation and gathering evidence. Scientists who use simulation calibrate the simulation with empirical measurements, and they generally are running the simulation to make predictions that have to be tested against yet more empirical measurement.
Mathematicians are just running a simulation in a vacuum. It’s a very different thing.
What is an example of something a scientist can prove with ‘just logic’?
I wasn’t comparing scientists running a simulation with mathematicians running a simulation. I was comparing scientists collecting evidence that might disprove their theories with mathematicians running a simulation—because such a simulation collects data that might disprove their conjectures.
We’ll need to agree on a subject who is a scientist and not a mathematician. The easiest example for me would be to use a computer scientist, but you may argue that whenever a computer scientist uses logic they’re actually functioning as a mathematician, in which case the dispute comes down to ‘what’s a mathematician’.
In the event you don’t dispute, I’d note that a lot of computer science has involved logic regarding, for instance, the nature of computation.
In the event you do dispute the status of a computer science as science, then we still have an example of scientists performing mathematics when possible, and really physicists do that too (the quantum formulas that don’t mean anything are a fine example, I think). So, to go back to my original point, it’s not like an accusation of non-elegance has to come from nowhere; those physicists are undeniably practicing math, and elegance is important there.
Has anyone come up with a decent model by mechanically applying a logical procedure?
It’s incredibly common practice for mathematicians to believe that the truth of even one special case of a conjecture is evidence that the conjecture is true. For instance, the largest currently-active research program started out this way.
To the extent that the research is worth pursuing, yes; but no mathematician treats such a conjecture in the same way as a proven result.
We must be using different meanings of the word “evidence”, because it seems under your definition consequentialist mathematicians would be completely apathetic.
I’m sorry, that made no sense to me. How about I try to restate what I was trying to say?
Mathematicians believe in mathematical results on a tiered scale: first come all the proven results, and next come all the unproven results they believe are true. Testing the first 10^6 cases of a conjecture meant to apply to all integers puts it pretty high up there in the second tier, but not as high as the first tier.
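For concreteness, here is a sketch of what that second-tier evidence looks like (Goldbach’s conjecture is just my pick of a familiar example, not one anyone above mentioned); passing the first N cases counts as evidence, not as proof.

```python
# Illustrative sketch: checking a conjecture case by case. No counterexample
# below N is second-tier evidence for the conjecture, not a proof of it.
def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_holds(n):
    """Is the even number n a sum of two primes?"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

N = 10_000
counterexamples = [n for n in range(4, N, 2) if not goldbach_holds(n)]
print("counterexamples below", N, ":", counterexamples)   # expected: []
```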
Arguably, for a result to become a theorem it has to become widely accepted by the mathematical community, at which point the proof is almost certainly correct. If I read a paper that proves a result nobody really cares about, I’m more careful about believing it.
Of course, you can assume a conjecture and use it to prove a theorem; then the theorem you proved is a valid result of the form “if this conjecture is true, then”. However, a stronger result that doesn’t assume the conjecture is better.
So when you said what I quoted in the great-grandparent, that is, “[10^6 cases aren’t] going to be accepted as … particularly valid evidence”, you meant that it would be accepted as evidence, that is, it belongs to your second tier of belief? That’s what’s bothering me.
Sorry, I guess at that point I was just thinking of the first tier of belief when making that comment.
But I think it’s also true that most notable conjectures have, in addition to verification of some cases, some sort of handwavy reason why we would believe them, such as a sketch of a proof but with many holes in it, and this is at least as important as the other kind of evidence.
Here is the difference: superstring theory is a reasonably good mathematical model which predicts a spacetime with 10 or 11 dimensions on purely mathematical grounds. It also predicts that particles should come in pairs (quarks + squarks). Despite its internal self-consistency, it’s not a good model of the world we live in. Whether mathematicians use the scientific method depends on your definition of the scientific method (a highly contested issue on the relevant Wikipedia page). Feel free to give your definition and we can go from there.
I feel this reply I made captures the link between proof, evidence, and elegance, in both scientific and mathematical fields.
That is to say, where proof is equivalent for two mutually exclusive theories (because sometimes things are proven logically outside mathematics, and not everything in mathematics is proven), evidence is used as a tiebreaker.
And where evidence is equivalent for two mutually exclusive theories (requiring of course that proof also be equivalent), elegance is used as a tiebreaker.
Not quite. More like abstractly physical grounds… combining various symmetry principles from preceding theories.
Not quite. It doesn’t predict a single world that is different. It predicts a landscape in which our world may be located with difficulty.
chuckles… I wrote a whole bunch about string theory, but I’ve decided to simply mention it. I have a TON of mathematical notation to learn before I can subject that glittery...whatever...to analysis.
As for many worlds… I like the way many of the “paradoxes” of quantum mechanics don’t even LOOK like paradoxes in many worlds, starting with: you don’t need to specify a special exemption to the no-FTL rule. “Information” for “collapse” happens because the little pieces of the waves almost-touch and slip past each other when you perform the comparison operation… at least, that’s how I visualize it.
I neither give my allegiance to Science nor to Bayes. I admit that I do not know the answer. The question itself produces biases.
This is an old article, and it’s possible that this question has already been asked, but I’ve been looking through the comments and I can’t find it anywhere. So, here it is:
Why does it matter? If many-worlds is indistinguishable from the Copenhagen Interpretation by any experiment we can think of to do, how does it matter which model we use? If we ever find ourselves in a scenario where it actually does matter which one we use—one where using the wrong model will result in us making some kind of mistake—then we now have an experiment we can do to determine which model is correct. If we never find ourselves in such a position, it doesn’t matter which model we decided on.
When phrased this way, Science doesn’t seem to have such a serious problem. Saying “Traditional Science can lead to incorrect conclusions, but only about things that have no actual effect on the world” doesn’t sound like such a searing criticism.
Leaving phrasing aside, it’s worth drawing a distinction between things that have no actual effect on the world and things it’s impractical for me to currently observe.
A process that leads more reliably to correct conclusions about the former is perhaps useless.
A process that leads more reliably to correct conclusions about the latter is not.
Unfortunately, this is not what OP argues. There is no hint of suggesting that MWI may be testable some day (which it might be—when done by physicists, not amateurs). The MWI ontology seems to be slowly propagating through the physics community, even Sean Carroll seems to believe it now. Slide 34 basically repeats Eliezer almost verbatim.
Good question—it does not matter. Opinions on untestable questions are about taste, and arguing about taste is a waste of everyone’s time. The “LW consensus” is just wrong to insist on Everett (and about lots of other things, so it should not be too surprising—for example they insist on Bayes, and like EDT).
I know there are lots of people here who argue for EDT or various augmentations of EDT, but I hope that doesn’t count as a LW consensus.
Obviously LW opinion isn’t monolithic; I merely meant that UDT et al. seem to be based on EDT, and lots of folks around here are poking around with UDT. I gave a talk recently at Oxford about why I think basing things on EDT is a bad idea.
I want to watch your talk but videos are slow and the sound quality didn’t seem very good. So I’ll just point out that the point of UDT is to improve upon both EDT and CDT, and it’s wildly mischaracterising LW consensus to say that the interest in UDT suggests that people think EDT is good. They don’t even have much in common, technically. (Besides, even I don’t think EDT is good, and as far as I know I’m the only person who’s really bothered arguing for it.)
No, there are other folks who argue for EDT (I think Paul did). To be fair, I have a standing invitation for any proponent of EDT to sit me down and explain a steelman of EDT to me. This is not meant to trap people but to make progress, and maybe teach me something. The worry is that EDT fans actually haven’t quite realized just how tricky a problem confounding is (and this is a fairly “basic” problem that occurs long before we have to worry about Omega and his kin—gotta walk before you fly).
I would be willing to try to explain such to you, but as you know, I was unsuccessful last time :)
I think you have some un-useful preconception about the capabilities of EDT based on the fact that it doesn’t have “causal” in the name, or causal analysis anywhere directly in the math. Are you familiar with the artificial intelligence model AIXI?
AIXI is capable of causal analysis in much the same way that EDT is: although neither of them explicitly includes the math of causal analysis, that math is computable, so there are some programs in AIXI’s hypothesis space that do causal analysis. Given enough data we can expect AIXI to start zooming in on those models and use them for prediction, effectively “learning about causality”.
If we wrote a hypothetical artificially intelligent EDT agent, it could certainly take a similar approach, given a large enough prior space—including, perhaps, all programs, some of which do causal analysis. Of course, in practice we don’t have an infinite amount of time to wait for our math to evaluate every possible program.
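As a crude sketch of the “zooming in” part only (a toy Bayesian mixture of my own invention, nothing like full AIXI, which is uncomputable): the weights concentrate on whichever predictor keeps matching the data, including one that happens to encode a treatment-causes-recovery assumption.

```python
# Toy Bayesian mixture: weights concentrate on the predictor that keeps
# assigning high probability to the observed data. AIXI does something
# analogous over *all* programs, some of which do causal reasoning.
hypotheses = {
    "noise":  lambda treated, recovered: 0.5,  # ignores the treatment entirely
    "causal": lambda treated, recovered: 0.9 if recovered == treated else 0.1,  # "treatment drives recovery"
}
weights = {name: 0.5 for name in hypotheses}

# Invented observations: (treated, recovered) pairs where treatment helps.
data = [(1, 1), (1, 1), (0, 0), (1, 1), (0, 0)]

for treated, recovered in data:
    for name, predict in hypotheses.items():
        weights[name] *= predict(treated, recovered)
    total = sum(weights.values())
    weights = {name: w / total for name, w in weights.items()}

print(weights)  # most of the weight ends up on the "causal" hypothesis
```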
It’s slightly more practical to simply furnish your EDT calculation (when trying to calculate such things as your HAART example by hand) with a prior that contains all the standard “causal-ish” conclusions, such as “if there is a data set showing an intervention of type X against a subject of type Y results in effect Z, a similar intervention against a similar subject probably results in a similar effect”. But even that is extremely impractical, since we are forced to work at the meta-level, with hypothesis spaces including all possible data sets, (current) interventions, subjects, and effects.
In real life we don’t really do the above things, we do something much more reasonable, but I hope the above breaks your preconception.
Too high level. What is your actual algorithm for solving decision problems? That is, if I give you a problem, can you give me an answer? An actual problem, not a hypothetical problem. I could even give you actual data if you want, and ask what specific action you will choose. I have a list of problems right here.
If it’s some uncomputable theoretical construct, that’s not really a serious competitor for doing decision theory. There is no algorithm! We want to build agents that actually act well, remember?
In real life I’m not actually in a position to decide whether to give HAART to a patient. That would be a hypothetical problem. In hypothetical real life, if I were to use EDT, what I would do is use Pearl’s causal analysis and some highly unproven assumptions (i.e. common sense) to derive a probability distribution over my hypothetical actual situation, and pick the action with the highest conditional expected utility. This is the “something much more reasonable” that I was alluding to.
The reason I explained all the impractical models above is because you need to understand that using common sense isn’t cheating, or anything illegal. It’s just an optimization, more or less equivalent to actually “furnishing your EDT calculation with a prior that contains all the standard ‘causal-ish’ conclusions”. This is something real decision theorists do every day, because it’s not practical, even for a causal decision theorist, to work at the meta-level including all possible datasets, interventions, effects, etc.
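For concreteness, here is a minimal sketch of that optimization (with invented numbers, not real HAART data): assume the causal analysis plus common sense has already handed you P(recovery | action) for the situation you are actually in, and EDT is then just an argmax over conditional expected utility.

```python
# Hypothetical EDT calculation. The conditional distribution is assumed
# to have been produced already by causal analysis plus common sense;
# all numbers below are made up for illustration.
p_recover_given = {"give_HAART": 0.8, "withhold": 0.5}
utility = {True: 100, False: 0}  # patient recovers / does not recover

def conditional_expected_utility(action):
    p = p_recover_given[action]
    return p * utility[True] + (1 - p) * utility[False]

best_action = max(p_recover_given, key=conditional_expected_utility)
print(best_action)  # "give_HAART" under these invented numbers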
Ok, thanks. I understand your position now.
No, you don’t. You’ve pattern matched it to the nearest wrong thing — you’re using causal analysis, you must be secretly using CDT!
If I were using CDT, I would use Pearl’s causal analysis and common sense to derive a causal graph over my hypothetical actual situation, and pick the action with the highest interventional expected utility.
This is in fact something decision theorists do every day, because the assumption that a dataset about applying HAART to certain patients has anything at all to say about applying a similar treatment to a similar patient is underpinned by lots of commonsense causal reasoning, such as the fact that HAART works by affecting the biology of the human body (and therefore should work the same way in two humans), and that it is unaffected by the positions of the stars, because they are not very well causally connected, and so on.
When I read the philosophy literature, the way decision theory problems are presented is via examples. For example, the smoking lesion is one such example, Newcomb’s problem is another. So when I ask you what your decision algorithm is, I am asking for something that (a) you can write down and I can follow step by step, (b) that takes these examples as input, and (c) produces an output action.
What is your preferred algorithm that satisfies (a), (b), and (c)? Can you write it down for me in a follow up post? If (a) is false, it’s not really an algorithm, if (b) is false, it’s not engaging with the problems people in the literature are struggling with, and if (c) is false, it’s not answering the question! So, for instance, anything based on AIXI is a non-starter because you can’t write it down. Anything that you have not formalized in your head enough to write down is a non-starter also.
I have been talking with you for a long time, and in all this time, never have you actually written down what it is you are using to solve decision problems. I am not sure why—do you actually have something specific in mind or not? I can write down my algorithm, no problem.
Here is the standard causal graph for Newcomb’s problem (note that this is a graph of the agent’s actual situation, not a graph of related historical data):
Given that graph, my CDT solution is to return the action A with highest
sum_payoff { U(payoff) P(payoff | do(A), observations) }
Given that graph (you don’t need a causal graph of course), my EDT solution is to return the action A with highest
sum_payoff { U(payoff) P(payoff | A, observations) }
That’s the easy part. Are you asking me for an algorithm to turn a description of Newcomb’s problem in words into that graph? You probably know better than me how to do that.
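To make that concrete, here is a rough sketch of my own (assuming the usual payoffs and a predictor accuracy of 0.99) of how those two sums come apart on Newcomb’s problem: the EDT sum conditions the prediction on the chosen action, while the CDT sum treats do(A) as leaving the prediction’s distribution untouched.

```python
# Newcomb's problem with the standard payoffs and an assumed predictor
# accuracy of 0.99. "full" means the opaque box contains $1,000,000.
ACCURACY = 0.99
PAYOFF = {("one_box", "full"): 1_000_000, ("one_box", "empty"): 0,
          ("two_box", "full"): 1_001_000, ("two_box", "empty"): 1_000}

def edt_value(action):
    # P(box state | action): the prediction is evidentially tied to the action.
    p_full = ACCURACY if action == "one_box" else 1 - ACCURACY
    return p_full * PAYOFF[(action, "full")] + (1 - p_full) * PAYOFF[(action, "empty")]

def cdt_value(action, p_full_prior=0.5):
    # P(box state | do(action)) = P(box state): intervening on the action
    # does not reach back to the prediction, so an unconditional prior is used.
    return p_full_prior * PAYOFF[(action, "full")] + (1 - p_full_prior) * PAYOFF[(action, "empty")]

print(max(["one_box", "two_box"], key=edt_value))  # one_box
print(max(["one_box", "two_box"], key=cdt_value))  # two_box
```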
If I’ve understood this sequence correctly, Eliezer would disagree with you: “Traditional Science can lead to incorrect conclusions, but only about things that have no actual effect on the world” is a serious criticism. He calls out the “let me know when you’ve got a testable prediction” attitude as explicitly wrong.