Decoherence is Pointless

Previously in series: On Being Decoherent

Yesterday’s post argued that continuity of decoherence is no bar to accepting it as an explanation for our experienced universe, insofar as it is a physicist’s responsibility to explain it. This is a good thing, because the equations say decoherence is continuous, and the equations get the final word.

Now let us consider the continuity of decoherence in greater detail...

On Being Decoherent talked about the decoherence process,

(Human-BLANK) * (Sensor-BLANK) * (Atom-LEFT + Atom-RIGHT)
=>
(Human-BLANK) * ((Sensor-LEFT * Atom-LEFT) + (Sensor-RIGHT * Atom-RIGHT))
=>
(Human-LEFT * Sensor-LEFT * Atom-LEFT) + (Human-RIGHT * Sensor-RIGHT * Atom-RIGHT)
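
To make the three steps concrete, here is the same process as a few lines of Python, with Human, Sensor, and Atom each reduced to a toy three-state system (the vectors, and the whole finite-dimensional reduction, are illustrative assumptions of mine, not anything forced by the physics):

```python
import numpy as np

# Toy model: give Human, Sensor, and Atom three basis states each --
# BLANK, LEFT, RIGHT -- as unit vectors in C^3.
BLANK = np.array([1.0, 0.0, 0.0])
LEFT  = np.array([0.0, 1.0, 0.0])
RIGHT = np.array([0.0, 0.0, 1.0])

# (Human-BLANK) * (Sensor-BLANK) * (Atom-LEFT + Atom-RIGHT)
psi0 = np.kron(np.kron(BLANK, BLANK), (LEFT + RIGHT) / np.sqrt(2))

# (Human-BLANK) * ((Sensor-LEFT * Atom-LEFT) + (Sensor-RIGHT * Atom-RIGHT))
psi1 = np.kron(BLANK, (np.kron(LEFT, LEFT) + np.kron(RIGHT, RIGHT)) / np.sqrt(2))

# (Human-LEFT * Sensor-LEFT * Atom-LEFT) + (Human-RIGHT * Sensor-RIGHT * Atom-RIGHT)
psi2 = (np.kron(np.kron(LEFT, LEFT), LEFT) +
        np.kron(np.kron(RIGHT, RIGHT), RIGHT)) / np.sqrt(2)

# Total squared amplitude is conserved at every step:
print([float(np.linalg.norm(p)) for p in (psi0, psi1, psi2)])  # [1.0, 1.0, 1.0]

# And the final state is genuinely entangled: reshaped across the
# Human-versus-rest split, a product state would have rank 1, not 2.
print(np.linalg.matrix_rank(psi2.reshape(3, 9)))  # -> 2
```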

At the end of this process, it may be that your brain in LEFT and your brain in RIGHT are, in a technical sense, communicating—that they have intersecting, interfering amplitude flows.

But the amplitude involved in this process is the amplitude for a brain (plus all entangled particles) to leap into the other brain’s state. This influence may, in a quantitative sense, exist; but it’s exponentially tinier than the gravitational influence upon your brain of a mouse sneezing on Pluto.
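
To put a toy number on “exponentially tinier”: for product states, the overlap between the two branches is the product of the per-particle overlaps, so it falls off exponentially in the number of entangled particles. A sketch (the 0.9 per-particle overlap is an arbitrary number of mine):

```python
# Toy estimate of branch-to-branch interference: if each of N entangled
# particles overlaps its counterpart in the other branch by 0.9, the
# total overlap is 0.9 ** N, i.e. exponentially small in N.
per_particle = 0.9
for n in (10, 100, 1000, 10_000):
    print(f"N = {n:>6}: overlap ~ {per_particle ** n:.3e}")
# N =     10: overlap ~ 3.487e-01
# N =    100: overlap ~ 2.656e-05
# N =   1000: overlap ~ 1.748e-46
# N =  10000: overlap ~ 0.000e+00  (underflows double precision)
```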

By the same token, decoherence always entangles you with a blob of amplitude density, not a point mass of amplitude. A point mass of amplitude would be a discontinuous amplitude distribution, hence unphysical. The distribution can be very narrow, very sharp—even exponentially narrow—but it can’t actually be pointed (nondifferentiable), let alone a point mass.

Decoherence, you might say, is pointless.

If a measuring instrument is sensitive enough to distinguish 10 positions with 10 separate displays on a little LCD screen, it will decohere the amplitude into at least 10 parts, almost entirely noninteracting. In all probability, the instrument is physically quite a bit more sensitive (in terms of evolving into different configurations) than what it shows on screen. You would find experimentally that the particle was being decohered (with consequences for momentum, etc.) more than the instrument was designed to measure from a human standpoint.
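
A sketch of that splitting, assuming (purely for illustration) a Gaussian blob of position amplitude and a display that simply bins position into ten intervals:

```python
import numpy as np

# Toy model of the 10-position instrument: a Gaussian blob of position
# amplitude (made-up width), and a display that bins position into ten
# unit intervals, splitting the blob into ten nearly non-interacting parts.
x = np.linspace(0.0, 10.0, 10_000)
dx = x[1] - x[0]

psi = np.exp(-0.5 * ((x - 5.0) / 2.0) ** 2)        # initial blob of amplitude
psi /= np.sqrt((np.abs(psi) ** 2).sum() * dx)      # normalize

total = 0.0
for reading in range(10):
    part = psi * ((x >= reading) & (x < reading + 1))  # branch showing this digit
    weight = (np.abs(part) ** 2).sum() * dx            # squared amplitude in it
    total += weight
    print(f"display reads {reading}: branch weight {weight:.4f}")
print(f"sum of branch weights: {total:.4f}")           # ~1.0000
```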

But there is no such thing as infinite sensitivity in a continuous quantum physics: If you start with blobs of amplitude density, you don’t end up with point masses. Liouville’s Theorem, of which the second law of thermodynamics is a corollary, guarantees this: you can’t compress probability.
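
In quantum mechanics, the corresponding guarantee is the unitarity of the evolution (my gloss on the theorem above): norms and overlaps are preserved exactly, so no evolution can focus a spread-out blob of amplitude into a point, or merge two distinguishable blobs into one. A quick numerical check, with a random unitary standing in for the dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random unitary matrix stands in for "any quantum evolution you like".
dim = 64
u, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))

# Two arbitrary normalized amplitude blobs:
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
chi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
chi /= np.linalg.norm(chi)

# Unitarity: total probability and mutual overlap are preserved exactly,
# so evolution can neither sharpen a blob into a point mass nor merge
# two distinguishable blobs into one.
print(np.sum(np.abs(psi) ** 2), np.sum(np.abs(u @ psi) ** 2))  # 1.0  1.0
print(abs(np.vdot(chi, psi)), abs(np.vdot(u @ chi, u @ psi)))  # equal overlaps
```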

What if you measure the position of an Atom using an analog Sensor whose dial shows a continuous reading?

Think of probability theory over classical physics:

When the Sensor’s dial appears in a particular position, that gives us evidence corresponding to the likelihood function for the Sensor’s dial to be in that place, given that the Atom was originally in a particular position. If the instrument is not infinitely sensitive (which it can’t be, for numerous reasons), then the likelihood function will be a density distribution, not a point mass. A very sensitive Sensor might have a sharp spike of a likelihood distribution, with density falling off rapidly. If the Atom is really at position 5.0121, the likelihood of the Sensor’s dial ending up in position 5.0123 might be very small. And so, unless we had overwhelming prior knowledge, we’d assign only a tiny posterior probability to the Atom being so much as 0.0002 millimeters from the Sensor’s indicated position. That’s probability theory over classical physics.
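
Here is that classical update in code. The Gaussian likelihood and its width (0.00005 millimeters) are assumptions of mine, chosen so that 0.0002 sits four noise-widths from the reading:

```python
import numpy as np

# Classical Bayesian version of the Sensor reading.  The Gaussian noise
# model and its width (0.00005 mm) are illustrative assumptions.
x = np.linspace(5.000, 5.025, 20_001)      # candidate Atom positions (mm)
dx = x[1] - x[0]

prior = np.ones_like(x)                    # broad, uninformative prior
sigma = 0.00005                            # assumed Sensor noise width
reading = 5.0123                           # where the dial ended up

# Likelihood: density for the dial to read 5.0123 given Atom position x.
likelihood = np.exp(-0.5 * ((reading - x) / sigma) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum() * dx          # normalize to a density

# Posterior probability that the Atom is >= 0.0002 mm from the reading:
far = np.abs(x - reading) >= 0.0002
print(posterior[far].sum() * dx)           # ~6e-05: a tiny posterior
```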

Similarly in quantum physics:

The blob of amplitude in which you find yourself, where you see the Sensor’s dial in some particular position, will have a sub-distribution over actual Atom positions that falls off according to (1) the initial amplitude distribution for the Atom, analogous to the prior; and (2) the amplitude for the Sensor’s dial (and the rest of the Sensor!) to end up in our part of configuration space, if the Atom started out in that position. (That’s the part analogous to the likelihood function.) If the Sensor is at all sensitive, the amplitude for the Atom to be in a state noticeably different from what the Sensor shows will taper off very sharply.
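
And the quantum version of the same computation, in the same toy setup: within the blob where the dial reads 5.0123, multiply the Atom’s initial amplitude distribution by the Sensor-response amplitude and normalize. With the same made-up widths as before, the Born probability of the Atom being 0.0002 or more from the reading is again negligible:

```python
import numpy as np

# Quantum analogue, same toy numbers.  psi_atom plays the role of the
# prior; the Sensor-response kernel plays the role of the likelihood.
# Both Gaussian shapes and all widths are assumptions for illustration.
x = np.linspace(5.000, 5.025, 20_001)      # Atom positions (mm)
dx = x[1] - x[0]

psi_atom = np.exp(-0.5 * ((x - 5.012) / 0.002) ** 2)    # initial Atom amplitude
kernel = np.exp(-0.5 * ((5.0123 - x) / 0.00005) ** 2)   # amplitude to show 5.0123

branch = psi_atom * kernel                 # amplitude within our blob
branch /= np.sqrt((np.abs(branch) ** 2).sum() * dx)     # normalize the branch

density = np.abs(branch) ** 2              # Born probabilities in this branch
far = np.abs(x - 5.0123) >= 0.0002
print(density[far].sum() * dx)             # ~1.5e-08 with these assumed widths
```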

(All these amplitudes I’m talking about are actually densities, N-dimensional integrals over dx dy dz..., rather than discrete flows between discrete states; but you get the idea.)

If there’s not a lot of amplitude flowing from initial particle position 5.0150 +/- 0.0001 to configurations where the Sensor’s dial reads ‘5.0123’, then the joint configuration of (Sensor=5.0123 * Atom=5.0150) ends up with very tiny amplitude.
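
With the Gaussian response width assumed in the sketches above, “very tiny” can be made concrete: position 5.0150 sits 54 response-widths from the reading 5.0123, and the exponent speaks for itself.

```python
import math

# Using the Gaussian Sensor-response width assumed above (0.00005 mm):
# the amplitude to flow from Atom position 5.0150 to a dial reading of
# 5.0123, relative to the peak of the response.
z = (5.0150 - 5.0123) / 0.00005            # 54 response-widths away
log10_amp = -0.5 * z ** 2 / math.log(10)   # log10 of exp(-z^2 / 2)
print(f"relative amplitude ~ 10^{log10_amp:.0f}")   # ~ 10^-633
```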

Part of The Quantum Physics Sequence

Next post: “Decoherent Essences”

Previous post: “The Conscious Sorites Paradox”