One of the last greatest open questions in quantum mechanics, and the only one that seems genuinely mysterious, is where the Born statistics come from—why our probability of seeing a particular result of a quantum experiment, ending up in a particular decoherent blob of the wavefunction, goes as the squared modulus of the complex amplitude.
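For concreteness, here is a minimal numerical sketch of what "goes as the squared modulus" means; the amplitudes below are invented purely for illustration:

```python
import numpy as np

# Invented complex amplitudes for three decoherent branches of a measurement.
amplitudes = np.array([0.6 + 0.0j, 0.0 + 0.6j, 0.52915026 + 0.0j])

# Born rule: the probability of finding yourself in branch i goes as |a_i|^2.
probabilities = np.abs(amplitudes) ** 2
probabilities /= probabilities.sum()  # normalize in case the state isn't unit-norm

print(probabilities)  # approximately [0.36, 0.36, 0.28]
```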
Is it the case that the Born probabilities are necessarily explained—can only be explained—by some hidden component of our brain which says that we care about the alternatives in proportion to their squared modulus?
Since (after all) if we only cared about results that went a particular way, then, from our perspective, we would always anticipate seeing the results go that way? And so what we anticipate seeing is entirely and only dependent on what we care about?
Or is there a sense in which we end up seeing results with a certain probability, a certain measure of ourselves going into those worlds, regardless of what we care about?
If you look at it closely, this is really about an instantaneous measure of the weight of experience, not about continuity between experiences. But why don’t the same arguments on continuity work on measure in general?
Is it the case that the Born probabilities are necessarily explained—can only be explained—by some hidden component of our brain which says that we care about the alternatives in proportion to their squared modulus?
I have been thinking about this quite a bit in the last few days, and I have to say, I find this close to impossible.
The solution must be much more fundamental: assumptions like the above ignore that the Born rule is also necessary for almost everything to work. For example, the workings of our most basic building blocks are tied to this rule. It is much more than just our psychological “caring”. Everything in our “hardware” and environment would immediately cease to exist if the rule were different.
Based on this, I think that attempts (like that of David Wallace, even if it were correct) to prove the Born rule from rationality and decision theory have no chance of being conclusive or convincing. A good theory to explain the rule should also explain why we see reality as we see it, even if we never really make conscious measurements on particles.
In our lives, we (may) see different types of apparent randomness:
incomplete information
inherent (quantum) randomness
To some extent these two types of randomness are connected and look isomorphic on the surface (in the macro-world).
The real question is: “Why are they connected?”
Or more specifically: “Why does the amplitude of the wave function result in (measured) probabilities that resemble those of random geometric perturbations of the wave function?”
If you flip a real coin, it does not look very different to you from flipping a quantum coin. However, the 50⁄50 chance of heads and tails can be explained purely by considering the geometric symmetry of the object: if you assume that the random perturbing events are distributed in a geometrically uniform way, you will immediately deduce the necessity of even chances. I think the key to the Born rule will be to use similar geometric considerations to relate perturbation-based probability to quantum probability.
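As a toy check of that symmetry argument (my own illustration; the perturbation model and the equal-amplitude quantum coin are both assumptions made up for the sketch), a classical coin decided by a uniformly distributed perturbation angle and a quantum coin with equal-magnitude amplitudes both come out at 50⁄50:

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical coin: outcome decided by a perturbation angle assumed to be
# uniformly distributed over the circle; call it heads if it lands in the upper half.
angles = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)
p_heads_classical = np.mean(angles < np.pi)

# Quantum coin: two branches with equal-magnitude amplitudes; Born-rule weights.
amplitudes = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)])
p_heads_quantum = np.abs(amplitudes[0]) ** 2

print(p_heads_classical, p_heads_quantum)  # both roughly 0.5
```

Of course, this only shows the two pictures agreeing in the maximally symmetric case; the open problem is showing that the agreement persists for unequal amplitudes.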
Quantum probability is only “inherent” because by default you are looking at it from the system that only includes one world. With a coin, the probability is merely “epistemic” because there is a definite answer (heads or tails) in the system that includes one world, but this same probability is as inherent for the system that only includes you, the person who is uncertain, and doesn’t include the coin. The difference between epistemic and inherent randomness is mainly in the choice of the system for which the statement is made, with epistemic probability meaning the same thing as inherent probability with respect to the system that doesn’t include the fact in question. (Of course, this doesn’t take into account the specifics of QM, but is right for the way “quantum randomness” is usually used in thought experiments.)
I don’t dispute this. Still, my posting implicitly assumed the MWI.
My argument is that the brain, as an information-processing unit, has a generic way of estimating probabilities based on a single worldline of the Multiverse. This worldline contains both randomness stemming from missing information and randomness from quantum branching, but our brain does not differentiate between these two kinds of randomness.
The question is how to calibrate our brain’s expectation of which quantum branch it will end up in. What I speculate is that quantum randomness to some extent approximates an “incomplete information” type of randomness on the large scale. I don’t know the math (if I knew it, I’d be writing a paper :)), but I have a very specific intuitive idea that could be turned into a concrete mathematical argument:
I expect the calibration to be performed based on the geometric symmetries of our 3-dimensional space: if we construct a sufficiently symmetric but unstable physical process (e.g. throwing a coin), then we can deduce that the probability of each outcome is 50⁄50, assuming a uniform geometric distribution of possible perturbations. Such a process must somehow be related to the magnitudes of the wave function and has to be shown to behave similarly on the macro level.
Admittedly, this is just speculation, but it is not really philosophical in nature; rather, it is an intuitive starting point that I think has a fair chance of ending up as a concrete mathematical explanation of the Born probabilities in a formal setting.
Does your notion of “incomplete information” take into account Bell’s Theorem? It seems pretty hard to make the Born probabilities represent some other form of uncertainty than indexical uncertainty.
I don’t suggest hidden variables. The idea is that quantum randomness should resemble incomplete-information randomness on the large scale, and that the reason we perceive the world according to the Born rule is that our brain can’t distinguish between the two kinds of randomness.
There are beings out there in other parts of Reality, who either anticipate seeing results with non-Born probabilities, or care about future alternatives in non-Born proportions. But (as I speculated earlier) those beings have much less measure under a complexity-based measure than us.
Or is there a sense in which we end up seeing results with a certain probability, a certain measure of ourselves going into those worlds, regardless of what we care about?
In other words, what you’re asking is: is there an objective measure over Reality, or is it just a matter of how much we care about each part of it? I’ve switched positions on this several times, and I’m still undecided now. But here are my current thoughts.
First, considerations from algorithmic complexity suggest that the measure we use can’t be completely arbitrary. For example, we certainly can’t use one that takes an infinite amount of information to describe, since that wouldn’t fit into our brain.
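As a toy illustration of what a complexity-based measure could look like (my own sketch; compressed length is only a crude, computable stand-in for algorithmic description length, which is uncomputable), shorter descriptions get exponentially more weight:

```python
import zlib

def toy_measure(description: bytes) -> float:
    """Weight a 'world' by 2**(-L), where L is the compressed length of its
    description -- a rough proxy for how much information specifying it takes."""
    return 2.0 ** (-len(zlib.compress(description)))

simple_world = b"all zeros " * 100    # highly regular, compresses well
messy_world = bytes(range(256)) * 4   # far less regular, compresses poorly

print(toy_measure(simple_world) > toy_measure(messy_world))  # True: simpler gets more measure
```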
Next, it doesn’t seem to make sense to assign zero measure to any part of Reality. Why should there be a part of it that we don’t care about at all?
So that seems to narrow down the possibilities quite a bit, even if there is no objective measure. Maybe we can find other considerations to further narrow down the list of possibilities?
If you look at it closely, this is really about an instantaneous measure of the weight of experience, not about continuity between experiences.
I’d say that “continuity between experiences” is a separate problem. Even if the measure problem is solved, I might still be afraid to step into a transporter based on destructive scanning and reconstruction, and need to figure out whether I should edit that fear away, tell the FAI to avoid transporting me that way, or do something else.
But why don’t the same arguments on continuity work on measure in general?
I don’t understand this one. What “arguments on continuity” are you referring to?
QM has to add up to normality.
We know it is a dumb idea to attempt (quantum) suicide. We’re pretty confident it is a dumb idea to run simple algorithms that increase one’s redundancy before pleasant realizations and reduce it afterward.
It sounds as if you are refusing to draw inferences from normal experience regarding (the correct interpretation of) QM. There is no “Central Dogma” that inferences can only go from micro-scale to macro-scale.
From the macro-scale values that we do hold (e.g. we care about macro-scale probable outcomes), we can derive the micro-scale values that we should hold (e.g. care about Born weights).
I don’t have an explanation for why the Born weights are nonlinear, but the science is almost completely irrelevant to the decision theory and the ethics. The mysterious, nonintuitive nature of QM doesn’t percolate up that much. That is why we have different fields called “physics”, “decision theory”, and “ethics”.
Since (after all) if we only cared about results that went a particular way, then, from our perspective, we would always anticipate seeing the results go that way? And so what we anticipate seeing is entirely and only dependent on what we care about?
I read that part several times, and I’m still not quite following. Mind elaborating or rephrasing that bit? Thanks.
Okay, let me try another tack.