When you say there’s “no such thing as a state,” or “we live in a density matrix,” these are statements about ontology: what exists, what’s real, etc.
Density matrices use the extra representational power they have over states to encode a probability distribution over states. If we regard the probabilistic nature of measurements as something to be explained, putting the probability distribution directly into the thing we live in is what I mean by “explain with ontology.”
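(To make “extra representational power” concrete, this is just the standard textbook packaging: a mixture of states |ψ_i> prepared with probabilities p_i becomes the single object

```latex
\rho \;=\; \sum_i p_i \, |\psi_i\rangle\langle\psi_i| ,
\qquad p_i \ge 0, \quad \sum_i p_i = 1 .
```

A pure state is the special case where one p_i equals 1; everything beyond that is the probability distribution being written into the object itself.)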
Epistemology is about how we know stuff. If we start with a world that does not inherently have a probability distribution attached to it, but obtain a probability distribution from arguments about how we know stuff, that’s “explain with epistemology.”
In quantum mechanics, this would look like talking about anthropics, or what properties we want a measure to satisfy, or Solomonoff induction and coding theory.
What good is it to say things are real or not? One useful application is predicting the character of physical law. If something is real, then we might expect it to interact with other things. I do not expect the probability distribution of a mixed state to interact with other things.
One person’s “Occam’s razor” may be description length, another’s may be elegance, and a third person’s may be “avoiding having too much info inside your system” (as some anti-MW people argue). I think discussions like “what’s real” need to be done thoughtfully, otherwise people tend to argue past each other and come off overconfident/underinformed.
To be fair, I did use language like this so I shouldn’t be talking—but I used it tongue-in-cheek, and the real motivation given in the above is not “the DM is a more fundamental notion” but “DM lets you make concrete the very suggestive analogy between quantum phase and probability”, which you would probably agree with.
For what it’s worth, there are “different layers of theory” (often scale-dependent), like classical vs. quantum vs. relativity, etc., where I think it’s silly to talk about “ontological truth”. But these theories are local conceptual optima sitting atop a graveyard of “outdated” theories that are strictly conceptually inferior to their successors: geocentrism (with Ptolemy’s epicycles), the ether, etc.
Interestingly, I would agree with you (with somewhat low confidence) that on this question there is a consensus among physicists that one picture is simply “more correct”, in the sense of giving theoretically and conceptually more elegant/precise explanations. Except your sign is wrong: this is the density matrix picture (the wavefunction picture is genuinely understood as “not the right theory”, but is still taught and still used in many contexts where it doesn’t cause issues).
I also think that there are two separate things that you can discuss.
1. Should you think of thermodynamics, probability, and things like thermal baths as fundamental to your theory, or as incidental epistemological crutches to model the world at limited information?
2. Assuming you are studying a “non-thermodynamic system with complete information”, where all dynamics is invertible over long timescales, should you use wave functions or density matrices?
Note that for #1, you should not think of a density matrix as a probability distribution on quantum states (see the discussion with Optimization Process in the comments); that is a bad intuition pump. Instead, the thing that replaces probability distributions in quantum mechanics is a density matrix.
I think a charitable interpretation of your criticism would be a criticism of #1: putting limited-info dynamics (i.e., quantum thermodynamics) as primary to “invertible dynamics”. Here there is a debate to be had.
I think there is not really a debate in #2: even in invertible QM (no probability), you need to use density matrices if you want to study different subsystems (e.g., when modeling systems in an infinite but non-thermodynamic universe you need this language, since restricting a wavefunction to a subsystem makes it mixed). There’s also a transposed discussion, which I don’t really understand, of all of this in field theory: when do you have fields vs. operators vs. other more complicated stuff, and there is some interesting relationship to how you conceptualize “boundaries”, but this is not what we’re discussing. So you really can’t get away from using density matrices even in a nice invertible universe, as soon as you want to relate systems to subsystems.
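(A one-line version of the “restriction makes it mixed” claim, using the standard Bell state as the simplest example:

```latex
|\Phi\rangle = \tfrac{1}{\sqrt{2}}\big(|0\rangle_1 |0\rangle_2 + |1\rangle_1 |1\rangle_2\big)
\quad\Longrightarrow\quad
\rho_1 = \operatorname{Tr}_2\, |\Phi\rangle\langle\Phi| = \tfrac{1}{2}\,\mathbb{1} .
```

The whole universe is a wavefunction, but subsystem 1 on its own is the maximally mixed density matrix, which no wavefunction can represent.)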
For question #1, it is reasonable (though I don’t know how productive) to discuss what is “primary”. I think (but here I am really out of my depth) that people who study very “fundamental” quantum phenomena increasingly use a picture with a thermal bath (e.g., I vaguely remember this happening in some lectures here). At the same time, it’s reasonable to say that “invertible” QM phenomena are primary and statistical phenomena are ontological epiphenomena on top of them. While this may be a philosophical debate, I don’t think it’s a physical one, since the two pictures are theoretically interchangeable (as I mentioned, there is a canonical way to get thermodynamics from unitary QM as a certain “optimal lower bound on information dynamics”, appropriately understood).
Still, as soon as you introduce the notion of measurement, you cannot get away from thermodynamics. Measurement is an inherently information-destroying operation, and iiuc can only be put “into theory” (rather than being an arbitrary add-on that professors tell you about) using the thermodynamic picture with nonunitary operators on density matrices.
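(The simplest concrete instance of such a nonunitary operation, just as a standard illustration: a non-selective measurement in a basis {|k>} acts on the density matrix as

```latex
\rho \;\longmapsto\; \sum_k |k\rangle\langle k|\,\rho\,|k\rangle\langle k| ,
```

which wipes out the off-diagonal terms of ρ in that basis and cannot be inverted, i.e. it destroys information.)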
people who study very “fundamental” quantum phenomena increasingly use a picture with a thermal bath
Maybe talking about the construction of pointer states? That linked paper does it just as you might prefer, putting the Boltzmann distribution into a density matrix. But of course you could rephrase it as a probability distribution over states and the math goes through the same, you’ve just shifted the vibe from “the Boltzmann distribution is in the territory” to “the Boltzmann distribution is in the map.”
Still, as soon as you introduce the notion of measurement, you cannot get away from thermodynamics. Measurement is an inherently information-destroying operation, and iiuc can only be put “into theory” (rather than being an arbitrary add-on that professors tell you about) using the thermodynamic picture with nonunitary operators on density matrices.
Sure, at some level of description it’s useful to say that measurement is irreversible, just like at some level of description it’s useful to say entropy always increases. Just like with entropy, it can be derived from boundary conditions + reversible dynamics + coarse-graining. Treating measurements as reversible probably has more applications than treating entropy as reversible, somewhere in quantum optics / quantum computing.
Thanks for the reference—I’ll check out the paper (though there are no pointer variables in this picture inherently).
I think there is a miscommunication in my messaging. Possibly through overcommitting to the “matrix” analogy, I may have given the impression that I’m doing something I’m not. In particular, the view here isn’t a controversial one—it has nothing to do with Everett or einselection or decoherence. Crucially, I am saying nothing at all about quantum branches.
I’m now realizing that when you say map or territory, you’re probably talking about a different picture where quantum interpretation (decoherence and branches) is foregrounded. I’m doing nothing of the sort, and as far as I can tell never making any “interpretive” claims.
All the statements in the post are essentially mathematically rigorous claims which say what happens when you start with the usual QM picture and posit that:
1. your universe divides into at least two subsystems, one of which you’re studying;
2. one of the subsystems your system is coupled to is a minimally informative infinite-dimensional environment (i.e., a bath).
Both of these are mathematically formalizable and aren’t saying anything about how to interpret quantum branches etc. And the Lindbladian is simply a useful formalism for tracking the evolution of a system that has these properties (subdivisions and baths). Note that (maybe this is the confusion?) subsystem does not mean quantum branch or decoherence result. “Subsystem” means that we’re looking at these particles over here, but there are also those particles over there (i.e., in terms of math, your Hilbert space is a tensor product System1 ⊗ System2).
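(For reference, and as a standard formula rather than anything specific to this post: the Lindblad/GKSL equation for the system’s density matrix is

```latex
\frac{d\rho}{dt} \;=\; -\frac{i}{\hbar}\,[H,\rho]
\;+\; \sum_k \left( L_k \rho L_k^\dagger \;-\; \tfrac{1}{2}\left\{ L_k^\dagger L_k ,\, \rho \right\} \right),
```

where the first term is the usual unitary part and the jump operators L_k encode the coupling to the bath. The equation is stated directly at the level of density matrices, which is the point.)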
Also, I want to be clear that we can and should run this whole story without ever using the term “probability distribution” in any of the quantum-thermodynamics concepts. The language to describe a quantum system as above (system coupled with a bath) is from the start a language that only involves density matrices, and never uses the term “X is a probability distribution of Y”. Instead you can get classical probability distributions to map into this picture as a certain limit of these dynamics.
As to measurement, I think you’re once again talking about interpretation. I agree that in general this may be tricky. But what is once again true mathematically is that if you model your system as coupled to a bath, then you can set up dynamics that behave exactly as you would expect from an experiment, from the point of view of studying the system (without asking questions about decoherence).
There are some non-obvious issues with saying “the wavefunction really exists, but the density matrix is only a representation of our own ignorance”. It’s a perfectly defensible viewpoint, but I think it is interesting to look at some of its potential problems:
A process or machine prepares either |0> or |1> at random, each with 50% probability. Another machine prepares either |+> or |-> based on a coin flip, where |+> = (|0> + |1>)/root2 and |-> = (|0> - |1>)/root2. In your ontology these are actually different machines that produce different states. In contrast, in the density matrix formulation these are alternative descriptions of the same machine. In any possible experiment, the two machines are identical. Exactly how much of a problem this is for believing in wavefunctions but not density matrices is debatable: “two things can look the same, big deal” vs. “but experiments are the ultimate arbiters of truth; if experiment says they are the same thing then they must be, and the theory needs fixing.”
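(Spelling out that claim, since it is a one-line check:

```latex
\tfrac{1}{2} |0\rangle\langle 0| + \tfrac{1}{2} |1\rangle\langle 1|
\;=\;
\begin{pmatrix} \tfrac{1}{2} & 0 \\ 0 & \tfrac{1}{2} \end{pmatrix}
\;=\;
\tfrac{1}{2} |+\rangle\langle +| + \tfrac{1}{2} |-\rangle\langle -| ,
```

so the two machines are assigned exactly the same density matrix, and no measurement statistics can tell them apart.)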
There are many different mathematical representations of quantum theory. For example, instead of states in Hilbert space we can use quasi-probability distributions in phase space, or path integrals. The relevance to this discussion is that the quasi-probability distributions in phase space are equivalent to density matrices, not wavefunctions. To exaggerate the case, imagine that we have a large number of different ways of putting quantum physics into mathematical language, [A, B, C, D....] and so on. All of them are physically the same theory, just couched in a different mathematical language, a bit like [“Hello”, “Hola”, “Bonjour”, “Ciao”...] all meaning the same thing in different languages. But wavefunctions only exist as an entity separable from density matrices in some of those descriptions. If you had never seen another language, the fact that the word “Hello” contains the word “Hell” as a substring might seem to correspond to something fundamental about what a greeting is (after all, “Hell is other people”). But it’s just a feature of English, and languages with an equal ability to greet don’t have it. Within the Hilbert space language it looks like wavefunctions might have a level of existence that is higher than that of density matrices, but why are you privileging that specific language over others?
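(One concrete instance of that equivalence: the Wigner quasi-probability distribution is defined directly from the density matrix,

```latex
W(x,p) \;=\; \frac{1}{\pi\hbar}\int_{-\infty}^{\infty} \langle x+y \,|\, \rho \,|\, x-y \rangle \, e^{-2ipy/\hbar}\, dy ,
```

with no wavefunction appearing as a separate ingredient anywhere in the definition.)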
In a wavefunction-only ontology we have two types of randomness: normal ignorance and the weird fundamental quantum uncertainty. In the density matrix ontology we have the total probability, plus some weird quantum thing called “coherence” that means some portion of that probability can cancel out when we might otherwise expect it to add together. Taking another analogy (I love those), the split you like is [100ml water + 100ml oil] (where the water is just my ignorance and doesn’t really exist), and you don’t like the density matrix representation of [200ml fluid total, oil content 50%]. There is no “problem” here per se, but I think it helps underline how the two descriptions seem equally valid. When someone else measures your state they either kill its coherence (drop the oil % to zero), or they transform its oil into water. Equivalent descriptions.
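(In matrix terms, assuming I am reading the analogy as intended: for a single qubit, the diagonal of ρ is the total fluid (the outcome probabilities) and the off-diagonal entries are the coherence, e.g.

```latex
\rho_{|+\rangle} = \begin{pmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix}
\qquad \text{vs.} \qquad
\rho_{\text{mixed}} = \begin{pmatrix} \tfrac{1}{2} & 0 \\ 0 & \tfrac{1}{2} \end{pmatrix} :
```

same diagonal, different “oil content”.)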
All of that said, your position is fully reasonable, I am just trying to point out that the way density matrices are usually introduced in teaching or textbooks does make the issue seem a lot more clear cut than I think it really is.
A process or machine prepares either |0> or |1> at random, each with 50% probability. Another machine prepares either |+> or |-> based on a coin flip, where |+> = (|0> + |1>)/root2 and |-> = (|0> - |1>)/root2. In your ontology these are actually different machines that produce different states.
I wonder if this can be resolved by treating the randomness of the machines quantum mechanically, rather than having this semi-classical picture where you start with some randomness handed down from God. Suppose these machines use quantum mechanics to do the randomization in the simplest possible way—they have a hidden particle in state |left>+|right> (pretend I normalize), they mechanically measure it (which from the outside will look like getting entangled with it) and if it’s on the left they emit their first option (|0> or |+> depending on the machine) and vice versa.
So one system, seen from the outside, goes into the state |L,0>+|R,1>, the other one into the state |L,0>+|R,0>+|L,1>-|R,1>. These have different density matrices. The way you get down to identical density matrices is to say you can’t get the hidden information (it’s been shot into outer space or something). And then when you assume that and trace out the hidden particle, you get the same representation no matter your philosophical opinion on whether to think of the un-traced state as a bare state or as a density matrix. If on the other hand you had some chance of eventually finding the hidden particle, you’d apply common sense and keep the states or density matrices different.
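(If you want to check this numerically, here is a minimal numpy sketch of my own; reduced_state is just a hypothetical helper name for the partial trace over the hidden particle:

```python
import numpy as np

# Build the two "hidden particle + emitted qubit" states, trace out the hidden
# |left>/|right> particle, and check that the reduced density matrices agree.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus  = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)
L, R = ket0, ket1                      # stand-ins for the hidden |left>, |right>

machine1 = (np.kron(L, ket0) + np.kron(R, ket1)) / np.sqrt(2)   # |L,0> + |R,1>
machine2 = (np.kron(L, plus) + np.kron(R, minus)) / np.sqrt(2)  # |L,+> + |R,->

def reduced_state(psi):
    """Density matrix of the emitted qubit after tracing out the hidden particle."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices: hidden, qubit, hidden', qubit'
    return np.einsum('kikj->ij', rho)                    # sum over the hidden index

print(np.allclose(reduced_state(machine1), reduced_state(machine2)))  # True
print(np.round(reduced_state(machine1), 3))                           # 0.5 * identity
```

Once the hidden particle is out of reach, both philosophical readings hand you the same 0.5 * identity matrix.)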
Anyhow, yeah, broadly agree. Like I said, there’s a practical use for saying what’s “real” when you want to predict future physics. But you don’t always have to be doing that.
You are completely correct on the “how does the machine work inside?” question. As you point out, that density matrix has exactly the form of something that is entangled with something else.
I think it’s very important to be discussing what is real, although since we always have a nonzero inferential distance between ourselves and the real, the discussion has to be a little bit caveated and pragmatic.
A process or machine prepares either |0> or |1> at random, each with 50% probability. Another machine prepares either |+> or |-> based on a coin flip, where |+> = (|0> + |1>)/root2 and |-> = (|0> - |1>)/root2. In your ontology these are actually different machines that produce different states. In contrast, in the density matrix formulation these are alternative descriptions of the same machine. In any possible experiment, the two machines are identical. Exactly how much of a problem this is for believing in wavefunctions but not density matrices is debatable: “two things can look the same, big deal” vs. “but experiments are the ultimate arbiters of truth; if experiment says they are the same thing then they must be, and the theory needs fixing.”
I like “different machines that produce different states”. I would bring up an example where we replace the coin by a pseudorandom number generator with seed 93762. If the recipient of the photons happens to know that the seed is 93762, then she can put every photon into state |0> with no losses. If the recipient of the photons does not know that the random seed is 93762, then she has to treat the photons as unpolarized light, which cannot be polarized without 50% loss.
So for this machine, there’s no getting away from saying things like: “There’s a fact of the matter about what the state of each output photon is. And for any particular experiment, that fact-of-the-matter might or might not be known and acted upon. And if it isn’t known and acted upon, then we should start talking about probabilistic ensembles, and we may well want to use density matrices to make those calculations easier.”
I think it’s weird and unhelpful to say that the nature of the machine itself is dependent on who is measuring its output photons much later on, and how, right?
Yes, in your example a recipient who doesn’t know the seed models the light as unpolarised, and one who does models it as, say, H-polarised in a given run. But for everyone who doesn’t see the random seed, it’s the same density matrix.
Let’s replace that first machine with a similar one that produces a polarisation-entangled photon pair, |HH> + |VV> (ignoring normalisation). If you have one of those photons it looks unpolarised (essentially, your “ignorance of the random seed” can be thought of as your ignorance of the polarisation of the other photon).
If someone else (possibly outside your light cone) measures the other photon in the HV basis, then they will project your photon into |H> or |V>, each with 50% probability. This 50/50 appears in the density matrix, not the wavefunction, so it is “ignorance probability”.
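(Concretely, and this is the standard no-signalling check rather than anything new: before the remote measurement, your photon’s reduced state is

```latex
\rho_{\text{before}} \;=\; \operatorname{Tr}_2\, |\Psi\rangle\langle\Psi| \;=\; \tfrac{1}{2}\big(|H\rangle\langle H| + |V\rangle\langle V|\big),
```

and after the remote HV measurement, averaged over the outcomes you do not know, it is ½|H><H| + ½|V><V|, the very same matrix. Nothing observable changes on your side until you learn the outcome.)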
In this case, by what I understand to be your position, the fact of the matter is either (1) that the photon is still entangled with a distant photon, or (2) that it has been projected into a specific polarisation by a measurement on that distant photon. It’s not clear when the transformation from (1) to (2) takes place (if it’s instant, then in which reference frame?).
So, in the bigger context of this conversation,
OP: “You live in the density matrices (Neo).”
Charlie: “No, a density matrix incorporates my own ignorance, so it is not a sensible picture of the fundamental reality. I can use them mathematically, but the underlying reality is built of quantum states, and the randomness when I subject them to measurements is fundamentally part of the territory, not the map. Let’s not mix the two things up.”
Me: “Whether a given unit of randomness is in the map (i.e. ignorance) or the territory is subtle. Things that randomly combine quantum states (my first machine) have a symmetry over which underlying quantum states are being mixed that looks meaningful. Plus (this post), the randomness can move abruptly from the territory to the map due to events outside your own light cone (although the amount of randomness is conserved), so maybe worrying too much about the distinction isn’t that helpful.”