At least for me, the question of whether I was buying too much for myself in a situation of limited supplies mattered more for the decision than the fear of being perceived as weird. This of course depends on how limited the supplies actually were at the time of buying, but I think it is generally important to distinguish between the shame of possibly profiting at the expense of others and the “pure” weirdness of the action.
From the article:
At this point, I think I am somewhat below Nate Silver’s 60% odds that the virus escaped from the lab, and put myself at about 40%, but I haven’t looked carefully and this probability is weakly held.
Quite off-topic: what does it mean, from a Bayesian perspective, to hold a probability weakly vs. confidently? Likelihood ratios for updating are independent of the prior, so a weakly-held probability should update exactly like a confidently-held one. Is there a way to quantify the “strength” with which one holds a probability?
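To make the puzzle concrete, here is a minimal sketch in odds form (my own illustration, with made-up numbers). The point probability alone doesn’t seem to encode any difference between the two cases:

```python
def update(prior, likelihood_ratio):
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Two agents both start at 40%, one "weakly" and one "confidently".
# Given the same evidence (likelihood ratio 3), plain Bayes moves
# both to exactly the same posterior:
print(update(0.40, 3.0))  # ~0.667
```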
We have reason to believe that peptide vaccines will work particularly well here, because we’re targeting a respiratory infection, and the peptide vaccine delivery mechanism targets respiratory tissue instead of blood.
Just a minor point: by delivery mechanism, are you talking about inserting the peptides through the nose à la RadVac? If I understand correctly, Werner Stöcker injects his peptide-based vaccine.
Note you can still get massive updates if B’ is pretty independent of B. So if someone brings in camera footage of the crime, that has no connection with the previous witness’s trustworthiness, and can throw the odds strongly in one direction or another (in equation, independence means that P(B’|H,B)/P(B’|¬H,B) = P(B’|H)/P(B’|¬H)).
Thanks, I think this is the crucial point for me. I was implicitly operating under the assumption that the evidence is uncorrelated, which is of course not warranted in most cases.
So if we have already updated on a lot of evidence, it is often reasonable to expect that part of what future evidence can tell us is already included in these updates. In that case I would no longer say that the likelihood ratio is independent of the prior: in most cases, both depend on the evidence we have already seen.
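To illustrate with a toy model (my own numbers: a witness who is reliable with prior probability 0.8, and two testimonies B and B′ that are conditionally independent given the hypothesis and the witness’s reliability): once we condition on the first testimony, the likelihood ratio of the second shrinks, because part of it was already “priced in” through the shared source.

```python
P_R = 0.8  # prior probability that the witness is reliable

def p_support(h, reliable):
    """P(a single testimony supports H | truth of H, reliability)."""
    if not reliable:
        return 0.5            # unreliable witness: coin flip
    return 0.9 if h else 0.1  # reliable witness tracks the truth

def p_n_supports(h, n):
    """P(n testimonies all support H | H), summing out reliability."""
    return sum(p * p_support(h, r) ** n
               for r, p in ((True, P_R), (False, 1 - P_R)))

lr_first = p_n_supports(True, 1) / p_n_supports(False, 1)
# P(B'|H,B) / P(B'|not-H,B): condition the second testimony on the first.
lr_second = (p_n_supports(True, 2) / p_n_supports(True, 1)) / \
            (p_n_supports(False, 2) / p_n_supports(False, 1))
print(round(lr_first, 2))   # 4.56
print(round(lr_second, 2))  # 2.64 -- smaller: the shared source is priced in
```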
I’m reluctant to engage with extraordinarily contrived scenarios in which magical 2nd-law-of-thermodynamics-violating contraptions cause “branches” to interfere.
Agreed. Roland Omnès tries to calculate how big Wigner’s measurement apparatus would need to be in order to measure his friend, and arrives at about 10^(10^18) degrees of freedom (“The Interpretation of Quantum Mechanics”, section 7.8).
But if we are going to engage with those scenarios anyway, then we should never have referred to them as “branches” in the first place, …
Well, that’s one of the problems of the MWI: how do we know when we should speak of branches? Decoherence works very well for all practical purposes, but it is a continuous process, so there is no point in time at which a single branch actually splits into two. How can we claim ontology here?
You might also be interested in “General Bayesian Theories and the Emergence of the Exclusivity Principle” by Chiribella et al., which claims that quantum theory is the most general theory satisfying Bayesian consistency conditions.
By now, there are actually quite a few attempts, besides Hardy’s, to reconstruct quantum theory from more “reasonable” axioms. You can follow the references in the paper above to find more of them.
Thanks for your answer. Part of the problem might have been that I wasn’t very proficient with vim. When I reconfigured the IDE’s clashing key bindings, I sometimes unknowingly overwrote a vim command that turned out to be useful later on. So I had to reconfigure numerous times, which annoyed me so much that I eventually abandoned the approach.
Right, but (before reading your post) I had assumed that the eigenvectors somehow “popped out” of the Everett interpretation.
This is a bit of a tangent, but decoherence isn’t exclusive to the Everett interpretation. Decoherence is itself a measurable physical process, independent of the interpretation one favors. So explanations which rely on decoherence are available to all interpretations.
I mean in the setup you describe there isn’t any reason why we can’t call the “state space” the observer space and the observer “the system being studied” and then write down the same system from the other point of view...
In the derivations of decoherence, one makes certain approximations which, loosely speaking, depend on the environment being big relative to the quantum system. If you swap the roles, these approximations are no longer valid. I’m not sure we are on the same page regarding decoherence, though (see my other reply to your post).
What goes wrong if we just take our “base states” as discrete objects and try to model QM as the evolution of probability distributions over ordered pairs of these states?
You might be interested in Lucien Hardy’s attempt to find a more intuitive set of axioms for QM compared to the abstractness of the usual presentation: https://arxiv.org/abs/quant-ph/0101012
In Smolin’s view, the scientific establishment is good at making small iterations to existing theories and bad at creating radically new theories.
I agree with this.
It’s therefore not implausible that the solution to quantum gravity could come from a decade of solitary amateur work by someone totally outside the scientific establishment.
To me, this sounds very implausible. Although the scientific establishment isn’t geared toward creating radically new theories, I think it is even harder to create such ideas from the outside. I agree that most researchers in academia are narrowly specialized and not interested in challenging widely shared assumptions, but the people who do challenge them are also in academia. I think you focus too much on the question-the-orthodoxy part. In order to come up with something useful, you need to develop a deep understanding and to bounce ideas around in a fertile environment. Both have become increasingly difficult for people outside of academia because of the complexity of the concepts involved.
The evidence you cite doesn’t seem to support your assertion: although Rovelli holds some idiosyncratic ideas, his career path led him through typical prestigious institutions. So he certainly cannot be considered to stand “totally outside the scientific establishment”.
Thanks, I see we already had a similar argument in the past.
I think there’s a bit of motte and bailey going on with the MWI. The controversy and philosophical questions are about multiple branches / worlds / versions of persons being ontological units. When we try to make things rigorous, only the wave function of the universe remains as a coherent ontological concept. But if we don’t have a clear way from the latter to the former, we can’t really say clear things about the parts which are philosophically interesting.
Specifically, I would love to see a better argument for it being ahead of Helion (if it is actually ahead, which would be a surprise and a major update for me).
I agree with Jeffrey Heninger’s response to your comment. Here is a (somewhat polemical) video which illustrates the challenges of Helion’s unusual D-He3 approach compared to the standard D-T approach which CFS follows. It supports some of Jeffrey’s points and makes further claims, e.g. that Helion’s current operational proof-of-concept reactor, Trenta, is far from adequate for scaling to a production reactor once safety and regulatory demands are considered (though I haven’t looked into whether CFS might be affected by this just the same).
Many characteristics have been proposed as significant, for example:
It’s better if fingers have less traveling to do.
It’s better if consecutive taps are done with different fingers or, better yet, different hands.
It’s better if common keys are near the fingers’ natural resting places.
It’s better to avoid stretching and overusing the pinky finger, which is the weakest of the five.
Just an anecdotal experience: I, too, have wrist problems. I have tried touch typing with 10 fingers a couple of times, and my problems got worse each time. My experience agrees with the point about the pinky above, but many consecutive taps with non-pinky fingers on the same hand also make my wrist problems worse. If traveling less means more of those, I prefer traveling more. (But consecutive taps on different hands are good for me.)
Since many consecutive taps with different fingers on the same hand seem to be part of the idea behind all keyboard layouts, I expect the costs of switching from the standard layout to an idiosyncratic one to outweigh the benefits.
For now, I have given up on using all 10 fingers. My current typing system is a 3+1 finger system with a little bit of hawking. I’d like to be able to touch type perfectly but this seems to be quite hard without using 10 fingers. I don’t feel very limited by my typing speed, though.
As you learn more about most systems, the likelihood ratio should likely go down for each additional point of evidence.
I’d be interested to see the assumptions which go into this. As Stuart has pointed out, it has to do with how correlated the evidence is. And for fat-tailed distributions, we should probably expect to be surprised at a constant rate.
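One loose way to operationalize “surprised at a constant rate” (my own toy simulation, not something from the thread) is to watch how much of the running total the single largest observation contributes. For thin tails this fraction dies away as data accumulates; for a sufficiently fat tail, it doesn’t:

```python
import numpy as np

rng = np.random.default_rng(0)

# Thin tail (exponential) vs. fat tail (Pareto with tail index < 1,
# i.e. infinite mean). For the thin tail, max/sum -> 0: late observations
# stop being surprising. For the fat tail, the largest observation keeps
# contributing a sizeable share no matter how much we have seen.
for n in (10**2, 10**3, 10**4, 10**5):
    thin = rng.exponential(size=n)
    fat = rng.pareto(0.9, size=n) + 1.0  # shift to classical Pareto support
    print(n, round(thin.max() / thin.sum(), 4), round(fat.max() / fat.sum(), 4))
```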
There are remaining open questions concerning quantum mechanics, certainly, but I don’t really see any remaining open questions concerning the Everett interpretation.
“Valid” is a strong word, but other reasons I’ve seen include classical prejudice, historical prejudice, dogmatic falsificationism, etc.
Thanks for answering. I didn’t find a better word but I think you understood me right.
So you basically think that the case is settled. I don’t agree with this opinion.
I’m not convinced of the validity of the derivations of the Born rule (see IV.C.2 of this for some criticism in the literature). I also see valid philosophical reasons for preferring other interpretations (like quantum Bayesianism, aka QBism).
I don’t have a strong opinion on what is the “correct” interpretation myself. I am much more interested in what they actually say, in their relationships, and in understanding why people hold them. After all, they are empirically indistinguishable.
Honestly, though, as I mention in the paper, my sense is that most big name physicists that you might have heard of (Hawking, Feynman, Gell-Mann, etc.) have expressed support for Everett, so it’s really only more of a problem among your average physicist that probably just doesn’t pay that much attention to interpretations of quantum mechanics.
There are other big name physicists who don’t agree (Penrose, Weinberg) and I don’t think you are right about Feynman (see “Feynman said that the concept of a “universal wave function” has serious conceptual difficulties.” from here). Also in the actual quantum foundations research community, there’s a great diversity of opinion regarding interpretations (see this poll).
I think it makes more sense to think of MWI as “first many, then even more many,” at which point questions of “when does the split happen?” feel less interesting, because the original state is no longer as special. [...] If time isn’t quantized, then this has to be spread across continuous space, and so thinking of there being a countable number of worlds is right out.
What I called the “nice ontology” isn’t so much about the number of worlds or even countability but about whether the worlds are well-defined. The MWI gives up a unique reality for things. The desirable feature of the “nice ontology” is that the theory tells us what a “version” of a thing is. As we all seem to agree, the MWI doesn’t do this.
If it doesn’t do this, what’s the justification for speaking of different versions in the first place? I think pure MWI only makes sense as “first one, then one”. After all, there’s just the universal wave function evolving, and pure MWI doesn’t give us any reason to single out a part of this wave function and say there are many versions of it.
This
Ok, now comes the trick: we assume that observation doesn’t change the system
and this
I think the basic point is that if you start by distinguishing your eigenfunctions, then you naturally get out distinguished eigenfunctions.
don’t sound correct to me.
The basis in which the diagonalization happens isn’t put in at the beginning. It is determined by the nature of the interaction between the system and its environment. See “environment-induced superselection”, or “einselection” for short.
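For illustration, here is a toy pure-dephasing model in the spirit of Zurek’s einselection papers (a minimal sketch of my own, with random made-up coupling phases): a system qubit coupled to N environment qubits through a σ_z-type interaction. The z basis is singled out by the form of the coupling, not put in by hand, and tracing out the environment suppresses the off-diagonal elements of the reduced density matrix while leaving the populations untouched:

```python
import numpy as np

rng = np.random.default_rng(1)

def reduced_density_matrix(n_env):
    """System qubit a|0> + b|1>; each environment qubit starts in |+>.

    The sigma_z-type coupling leaves an environment qubit alone next to
    system |0> and imprints a random phase theta next to system |1>.
    Tracing out the environment multiplies the off-diagonal element by
    the product of overlaps between the two environment branches."""
    a, b = np.sqrt(0.3), np.sqrt(0.7)
    overlap = 1.0 + 0.0j
    for theta in rng.uniform(0.0, 2.0 * np.pi, n_env):
        e0 = np.array([1.0, 1.0]) / np.sqrt(2.0)                 # next to |0>
        e1 = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2.0)  # next to |1>
        overlap *= np.vdot(e0, e1)
    return np.array([[a * a, a * b * overlap],
                     [a * b * np.conj(overlap), b * b]])

for n in (0, 1, 5, 10, 20, 40):
    rho = reduced_density_matrix(n)
    # populations stay at 0.3 / 0.7; the coherence |rho_01| decays with n
    print(n, rho[0, 0].real, abs(rho[0, 1]))
```

Note that the coherence decays continuously and never hits exactly zero, which is the same for-all-practical-purposes issue with branch counting I mentioned elsewhere.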
Do you see any technical or conceptual challenges which the MWI has yet to address or do you think it is a well-defined interpretation with no open questions?
What’s your model for why people are not satisfied with the MWI? The obvious ones are 1) dislike for a many worlds ontology and 2) ignorance of the arguments. Do you think there are other valid reasons?
Smolin’s book has inspired me to begin working on a theory of quantum gravity. I’ll need to learn new things like quantum field theory.
If you don’t know Quantum Field Theory, I don’t see how you can possibly understand why General Relativity and Quantum Theory are difficult to reconcile. If that’s the case, how can you work on the solution to a problem you don’t understand?
Einstein was a realist who was upset that the only interpretation available to him was anti-realist. Saying that he took the wavefunction as object of knowledge is technically true, ie, false.
I agree that my phrasing was a bit misleading here. Reading it again, it sounds like Einstein wasn’t a realist, which of course is false. For him, QM was a purely statistical theory which needed to be supplemented by a more fundamental realistic theory (a view which was proven untenable only in 2012, by Pusey, Barrett, and Rudolph).
Thanks for conceding that the Copenhagen interpretation has meant many things. Do you notice how many people deny that? It worries me.
I don’t know how many people really deny this. Sure, people often talk about “the” Copenhagen interpretation but most physicists use it only as a vague label because they don’t care much about interpretations. Who do you have in mind denying this and what exactly worries you?
I’m not sure if this will be satisfying to you but I like to think about it like this:
Experiments show that the order of quantum measurements matters. The mathematical representation of physical quantities needs to take this into account. Matrices are one simple kind of non-commuting object.
If physical quantities are represented by matrices, the possible measurement outcomes need to be encoded in them somehow. They also need to be real. Both conditions are satisfied by the eigenvalues of self-adjoint matrices.
Experiments also show that if we immediately repeat a measurement, we get the same outcome again. So if eigenvalues represent measurement outcomes, the state of the system after the measurement must be related to the observed eigenvalue somehow. Taking the post-measurement state to be the corresponding eigenvector of the matrix is a simple realization of this.
This isn’t a derivation but it makes the mathematical structure of QM somewhat plausible to me.
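A small numpy sketch of these three observations (my own illustration, not a derivation either):

```python
import numpy as np

# Two physical quantities as self-adjoint matrices (Pauli Z and X).
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# 1) Order matters: the commutator is nonzero.
print(Z @ X - X @ Z)

# 2) Self-adjointness gives real eigenvalues -- the possible outcomes.
outcomes, eigvecs = np.linalg.eigh(Z)
print(outcomes)  # [-1.  1.]

# 3) Repeatability: after a measurement, the state is the eigenvector
# belonging to the observed eigenvalue, so measuring the same quantity
# again yields the same value with probability 1.
state = np.array([0.6, 0.8], dtype=complex)     # some normalized state
probs = np.abs(eigvecs.conj().T @ state) ** 2   # Born-rule probabilities
k = np.random.default_rng(0).choice(len(outcomes), p=probs)
post = eigvecs[:, k]                            # post-measurement state
print(outcomes[k], np.abs(eigvecs.conj().T @ post) ** 2)  # one-hot
```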