Does the book talk about schizophrenia? I’m a bit skeptical that coherence therapy and IFS can be used to heal it but I’m quite interested in hearing your thoughts about schizophrenia in relation to subagent models.
I don’t think that the QM example is like the others. Explaining this requires a bit of detail.
From section V.:
My understanding of the multiverse debate is that it works the same way. Scientists observe the behavior of particles, and find that a multiverse explains that behavior more simply and elegantly than not-a-multiverse.
That’s not an accurate description of the state of affairs.
In order to calculate correct predictions for experiments, you have to use the probabilistic Born rule (and the collapse postulate for sequential measurements). That these can be derived from the Many Worlds interpretation (MWI) is a conjecture which hasn’t been proved in a universally accepted way.
So we have an interpretation which works but is considered inelegant by many, and we have an interpretation which is simple and elegant but is only conjectured to work. Considering the nature of the problems with the proofs, it is questionable whether the MWI can retain its elegant simplicity if it is made to work (see below).
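For concreteness, these are the two rules doing the empirical work (textbook form, independent of interpretation): for an observable with eigenvalues $a_i$ and eigenprojectors $\Pi_i$, the Born rule and the projection postulate read

$$P(a_i) = \langle \psi | \Pi_i | \psi \rangle, \qquad |\psi\rangle \mapsto \frac{\Pi_i |\psi\rangle}{\sqrt{\langle \psi | \Pi_i | \psi \rangle}}.$$

The conjecture at issue is whether these follow from unitary evolution alone.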
One (doubtless exaggerated) way I’ve heard multiverse proponents explain their position is like this: in certain situations the math declares two contradictory answers – in the classic example, Schrodinger’s cat will be both alive and dead. But when we open the box, we see only a dead cat or an alive cat, not both. Multiverse opponents say “Some unknown force steps in at the last second and destroys one of the possibility branches”. Multiverse proponents say “No it doesn’t, both possibility branches happen exactly the way the math says, and we end up in one of them.”
What I find interesting is that Copenhagen-style interpretations looked ugly to me at first but got more sensible the more I learned about them. With most other interpretations it is the reverse: initially, they looked very compelling, but the intuitive pictures are often hard to make rigorous. For example, if you try to describe the branching process mathematically, it isn’t possible to say when exactly the branches are splitting or even that they are splitting in an unambiguous way at all. Without introducing something like the observer, who sets a natural scale for when it is okay to approximate certain values by zero, it is very difficult to speak of different worlds consistently. But then the simplicity of the MWI is greatly reduced and the difference from a Copenhagenish point of view is much more subtle.
Generally, regarding the interpretation of QM, there are two camps: realists who take the wave function as a real physical object (Schrödinger, Bohm, Everett) and people who take the wavefunction as an object of knowledge (Bohr, Einstein, Heisenberg, Fuchs).
If the multiverse opponent describes the situation involving “some unknown force” he is also in the realist camp and not a proponent of a Copenhagenish position. The most modern Copenhagenish position would be QBism which asserts “whenever I learn something new by means of a measurement, I update”. From this point of view, QM is a generalization of probability theory, the wavefunction (or probability amplitude) is the object of knowledge which replaces ordinary probabilities, and the collapse rule is a generalized form of Bayesian updating. That doesn’t seem less sensible to me than your description of the multiverse proponent. Of course, there’s also a bullet to bite here: the abandonment of a mathematical layer below the level of (generalized) probabilities.
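To illustrate the analogy (my own sketch, not a complete account of QBism): classical conditioning and the quantum state-update rule have the same shape,

$$P(h \mid d) = \frac{P(d \mid h)\, P(h)}{P(d)} \qquad \text{vs.} \qquad \rho \mapsto \frac{\Pi_i\, \rho\, \Pi_i}{\operatorname{tr}(\Pi_i\, \rho)},$$

with the density matrix $\rho$ in the role of the prior and the projector $\Pi_i$ in the role of the observed data. On this reading, “collapse” is no more mysterious than the fact that $P(h)$ changes to $P(h \mid d)$ when you learn $d$.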
The important point is that this is not about which position is simpler than the other but about a deep divide in the philosophical underpinnings of science.
Taking this exaggerated dumbed-down account as exactly right, this sounds about as hard as the dinosaurs-vs-Satan example, in terms of figuring out which is more Occam’s Razor compliant. I’m sure the reality is more nuanced, but I think it can be judged by the same process. Perhaps this is the kind of reasoning that only gets us to a 90% probability there is a multiverse, rather than a 99.999999% one. But I think determining that theories have 90% probability is a reasonable scientific thing to do.
As per what I have written above, I think that there’s a crucial difference between the examples of the fossils and the sphinx on the one hand and the interpretation of QM on the other hand. Which interpretation of QM one prefers is connected to one’s position on deep philosophical questions like “Is reductionism true?”, “Is Nature fundamentally mathematical?”, “What’s consciousness?”, etc. So the statement “[there’s a] 90% probability there is a multiverse” is connected to statements of the form “there’s a 90% probability that reductionism is true”. Whether such statements are meaningful seems much more questionable to me than in the case of your other examples.
Einstein was a realist who was upset that the only interpretation available to him was anti-realist. Saying that he took the wavefunction as an object of knowledge is technically true, i.e., false.
I agree that my phrasing was a bit misleading here. Reading it again, it sounds like Einstein wasn’t a realist, which of course is false. For him, QM was a purely statistical theory which needed to be supplemented by a more fundamental realistic theory (a view which was proven untenable only in 2012 by Pusey, Barrett and Rudolph).
Thanks for conceding that the Copenhagen interpretation has meant many things. Do you notice how many people deny that? It worries me.
I don’t know how many people really deny this. Sure, people often talk about “the” Copenhagen interpretation but most physicists use it only as a vague label because they don’t care much about interpretations. Who do you have in mind denying this and what exactly worries you?
I think so, too, but I don’t know it (Eliezer’s Sequence on QM is still on my reading list). Given the importance people around here put on Bayes’ theorem, I find it quite surprising that the idea of a quantum generalization (which is what QBism is about) isn’t discussed here apart from a handful of isolated comments. Two notable papers in this direction are
In Smolin’s view, the scientific establishment is good at making small iterations to existing theories and bad at creating radically new theories.
I agree with this.
It’s therefore not implausible that the solution to quantum gravity could come from a decade of solitary amateur work by someone totally outside the scientific establishment.
For me, this sounds very implausible. Although the scientific establishment isn’t geared towards creating radically new theories, I think it is even harder to create such ideas from the outside. I agree that most researchers in academia are narrowly specialized and not interested in challenging widely shared assumptions, but the people who do challenge them are also in academia. I think that you focus too much on the question-the-orthodoxy part. In order to come up with something useful you need to develop a deep understanding and to bounce ideas around in a fertile environment. I think that both have become increasingly difficult for people outside of academia because of the complexity of the concepts involved.
The evidence you cite doesn’t seem to support your assertion: although Rovelli holds some idiosyncratic ideas, his career path led him through typical prestigious institutions. So he certainly cannot be considered to stand “totally outside the scientific establishment”.
Smolin’s book has inspired me to begin working on a theory of quantum gravity. I’ll need to learn new things like quantum field theory.
If you don’t know Quantum Field Theory, I don’t see how you can possibly understand why General Relativity and Quantum Theory are difficult to reconcile. And if that’s so, how are you able to work on the solution to a problem you don’t understand?
No. The property you are describing is not “mixedness” (the corresponding technical term is “purity”). That the state vector in question can’t be written as a tensor product of state vectors makes it an *entangled* state.
Mixed states are states which cannot be represented by *any* state vector. You need a density matrix in order to write them down.
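The standard minimal example makes the distinction clear. The Bell state

$$|\Phi^+\rangle = \tfrac{1}{\sqrt{2}}\,(|00\rangle + |11\rangle)$$

is entangled (it cannot be factored as $|\phi\rangle \otimes |\chi\rangle$) yet perfectly pure: it is a single state vector. By contrast,

$$\rho = \tfrac{1}{2}\,|00\rangle\langle 00| + \tfrac{1}{2}\,|11\rangle\langle 11|$$

is mixed: it cannot be represented by any state vector and describes classical ignorance about which of the two product states was prepared.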
Do you see any technical or conceptual challenges which the MWI has yet to address or do you think it is a well-defined interpretation with no open questions?
What’s your model for why people are not satisfied with the MWI? The obvious ones are 1) dislike for a many worlds ontology and 2) ignorance of the arguments. Do you think there are other valid reasons?
I mean I could accept that the Schrödinger equation gives the evolution of the wave-function, but why care about its eigenfunctions so much?
I’m not sure if this will be satisfying to you but I like to think about it like this:
Experiments show that the order of quantum measurements matters. The mathematical representation of the physical quantities needs to take this into account. Matrices are one simple kind of non-commutative object.
If physical quantities are represented by matrices, the possible measurement outcomes need to be encoded in there somehow. They also need to be real. Both conditions are satisfied by the eigenvalues of self-adjoint matrices.
Experiments show that if we immediately repeat a measurement, we get the same outcome again. So if eigenvalues represent measurement outcomes, the state of the system after the measurement must be related to them somehow. Taking the post-measurement state to be the corresponding eigenvector of the matrix is a simple realization of this.
This isn’t a derivation but it makes the mathematical structure of QM somewhat plausible to me.
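A minimal concrete instance of all three points (standard textbook material, picked by me as an illustration): the Pauli matrices

$$\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_z \sigma_x \neq \sigma_x \sigma_z$$

are self-adjoint with real eigenvalues $\pm 1$ (the possible outcomes), and a $\sigma_z$ measurement yielding $+1$ leaves the system in the eigenvector $(1,0)^T$, so an immediate repetition returns $+1$ with certainty.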
This
Ok, now comes the trick: we assume that observation doesn’t change the system
and this
I think the basic point is that if you start by distinguishing your eigenfunctions, then you naturally get out distinguished eigenfunctions.
don’t sound correct to me.
The basis in which the diagonalization happens isn’t put in at the beginning. It is determined by the nature of the interaction between the system and its environment. See “environment-induced superselection” or short “einselection”.
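Schematically (the usual decoherence sketch, with notation chosen by me): the system-environment coupling singles out pointer states $|s_i\rangle$ via

$$\Big(\sum_i c_i\, |s_i\rangle\Big) \otimes |E_0\rangle \longrightarrow \sum_i c_i\, |s_i\rangle \otimes |E_i\rangle, \qquad \langle E_i | E_j \rangle \approx \delta_{ij},$$

so the reduced density matrix of the system becomes approximately diagonal in the $\{|s_i\rangle\}$ basis. Which basis that is depends on the form of the interaction, not on a choice made at the outset.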
Right, but (before reading your post) I had assumed that the eigenvectors somehow “popped out” of the Everett interpretation.
This is a bit of a tangent but decoherence isn’t exclusive to the Everett interpretation. Decoherence is itself a measurable physical process independent of the interpretation one favors. So explanations which rely on decoherence are part of all interpretations.
I mean in the setup you describe there isn’t any reason why we can’t call the “state space” the observer space and the observer “the system being studied” and then write down the same system from the other point of view...
In the derivations of decoherence you make certain approximations which loosely speaking depend on the environment being big relative to the quantum system. If you change the roles these approximations aren’t valid any more. I’m not sure if we are on the same page regarding decoherence, though (see my other reply to your post).
What goes wrong if we just take our “base states” as discrete objects and try to model QM as the evolution of probability distributions over ordered pairs of these states?
You might be interested in Lucien Hardy’s attempt to find a more intuitive set of axioms for QM compared to the abstractness of the usual presentation: https://arxiv.org/abs/quant-ph/0101012
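The short answer to “what goes wrong”, as I understand it (my summary, not Hardy’s): interference. Alternatives combine at the amplitude level, not the probability level,

$$P = |a_1 + a_2|^2 = |a_1|^2 + |a_2|^2 + 2\,\operatorname{Re}(a_1^* a_2) \neq P_1 + P_2,$$

and the cross term can be negative, which a straightforward stochastic evolution of probability distributions over fixed base states cannot produce without additional hidden structure.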
There isn’t a sharp line for when the cross-terms are negligible enough to properly use the word “branch”, but there are exponential effects such that it’s very clearly appropriate in the real-world cases of interest.
I agree that it isn’t a problem for practical purposes, but if we are talking about a fundamental theory of reality, shouldn’t questions like “How many worlds are there?” have unambiguous answers?
I agree that the question “how many worlds are there” doesn’t have a well-defined answer in the MWI. I disagree that it is a meaningless question.
From the bird’s-eye view, the ontology of the MWI seems pretty clear: the universal wavefunction is happily evolving (or is it?). From the frog’s-eye view, the ontology is less clear. The usual account of an experiment goes like this:
The system and the observer come together and interact
This leads to entanglement and decoherence in a certain basis
In the final state, we have a branch for each measurement outcome, i.e. there are now multiple versions of the observer
This seems to suggest a nice ontology: first there’s one observer, then the universe splits and afterwards we have a certain number of versions of the observer. I think questions like “When does the split happen?” and “How many versions?” are important because they would have well-defined answers if the nice ontology was tenable.
Unfortunately it isn’t, so the ontology is muddled. We have to use terms like “approximately zero” and “for all practical purposes”, which takes us most of the way back to giving the person who determines which approximations are appropriate and what is practical (aka the observer) an important part in the whole affair.
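To make this concrete (a toy sketch of mine): after decoherence the global state has the form

$$|\Psi\rangle = \sum_i c_i\, |s_i\rangle \otimes |O_i\rangle \otimes |E_i\rangle + \text{(small cross terms)},$$

and the cross terms are tiny but never exactly zero. Whether you read off “one world per $i$”, split further along some finer environmental decomposition, or decline to count at all depends on where you truncate, and the formalism itself supplies no preferred truncation.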
I think it makes more sense to think of MWI as “first many, then even more many,” at which point questions of “when does the split happen?” feel less interesting, because the original state is no longer as special. [...] If time isn’t quantized, then this has to be spread across continuous space, and so thinking of there being a countable number of worlds is right out.
What I called the “nice ontology” isn’t so much about the number of worlds or even countability but about whether the worlds are well-defined. The MWI gives up a unique reality for things. The desirable feature of the “nice ontology” is that the theory tells us what a “version” of a thing is. As we all seem to agree, the MWI doesn’t do this.
If it doesn’t do this, what’s the justification for speaking of different versions in the first place? I think pure MWI only makes sense as “first one, then one”. After all, there’s just the universal wavefunction evolving, and pure MWI doesn’t give us any reason to take a part of this wavefunction and say there are many versions of it.
There are remaining open questions concerning quantum mechanics, certainly, but I don’t really see any remaining open questions concerning the Everett interpretation.
“Valid” is a strong word, but other reasons I’ve seen include classical prejudice, historical prejudice, dogmatic falsificationism, etc.
Thanks for answering. I didn’t find a better word but I think you understood me right.
So you basically think that the case is settled. I don’t agree with this opinion.
I’m not convinced of the validity of the derivations of the Born rule (see IV.C.2 of this for some criticism in the literature). I also see valid philosophical reasons for preferring other interpretations (like quantum Bayesianism aka QBism).
I don’t have a strong opinion on what is the “correct” interpretation myself. I am much more interested in what they actually say, in their relationships, and in understanding why people hold them. After all, they are empirically indistinguishable.
Honestly, though, as I mention in the paper, my sense is that most big name physicists that you might have heard of (Hawking, Feynman, Gell-Mann, etc.) have expressed support for Everett, so it’s really only more of a problem among your average physicist that probably just doesn’t pay that much attention to interpretations of quantum mechanics.
There are other big name physicists who don’t agree (Penrose, Weinberg) and I don’t think you are right about Feynman (see “Feynman said that the concept of a “universal wave function” has serious conceptual difficulties.” from here). Also in the actual quantum foundations research community, there’s a great diversity of opinion regarding interpretations (see this poll).
You could also turn this question around. If you find it somewhat plausible that self-adjoint operators represent physical quantities, eigenvalues represent measurement outcomes, and eigenvectors represent states associated with these outcomes (per the arguments I have given in my other post), one could picture a situation where systems hop from eigenvector to eigenvector through time. From this point of view, continuous evolution between states is the strange thing.
The paper by Hardy I cited in another answer to you tries to make QM as similar to a classical probabilistic framework as possible and the sole difference between his two frameworks is that there are continuous transformations between states in the quantum case. (But notice that he works in a finite-dimensional setting which doesn’t easily permit important features of QM like the canonical commutation relations).
We have reason to believe that peptide vaccines will work particularly well here, because we’re targeting a respiratory infection, and the peptide vaccine delivery mechanism targets respiratory tissue instead of blood.
Just a minor point: by delivery mechanism, are you talking about inserting the peptides through the nose à la RadVac? If I understand correctly, Werner Stöcker injects his peptide-based vaccine.
At least for me, I think the question of whether I was buying too much for myself in a situation of limited supplies was more important for the decision than the fear of being perceived as weird. This depends of course on how limited the supplies actually were at the time of buying, but I think it is generally important to distinguish between the shame that one might profit at the expense of others and the “pure” weirdness of the action.
Thanks!