This is the semantic problem that you dismissed. When I talk about the refrigerator, I mean to draw an imaginary boundary around the refrigerator alone and pretend for a moment that it is all there is. Within that boundary, entropy is decreasing. But if I talk about the process by which I acquired that knowledge, I have to expand the boundary to include, for instance, the source of the photons that bounced off the refrigerator and the waste heat my brain produced while acquiring the knowledge. That process, the acquiring of the knowledge, was entropy-increasing even though what it revealed to me was a lower-entropy distribution over states of the refrigerator.
The refrigerator is the two-gas system with a pump attached. Learning anything about either system is an entropy-increasing proposition (if the boundary is drawn around me plus the system). As it happens, if you want to draw the boundary to exclude me, then the two-gas system without the pump also happens to be entropy-increasing, while drawing the boundary around the refrigerator is entropy-decreasing.
This seriously is just Maxwell’s demon.
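To make the boundary-drawing concrete, here is a toy bookkeeping sketch (all numbers made up, and the pump's inefficiency is an arbitrary assumption): the cold interior alone loses entropy, but once the boundary also contains the work done by the pump and the heat dumped into the room, the total change is non-negative.

```python
# Toy entropy bookkeeping for a refrigerator (hypothetical numbers).
# Heat Q_c is pumped out of the cold interior at T_c and dumped,
# together with the pump's work W, into the room at T_h.
T_c = 275.0   # K, inside the fridge
T_h = 300.0   # K, the room
Q_c = 1000.0  # J extracted from the cold side

# A real pump needs at least the Carnot work; take 50% more to be concrete.
W_min = Q_c * (T_h - T_c) / T_c
W = 1.5 * W_min
Q_h = Q_c + W  # heat dumped into the room

dS_inside = -Q_c / T_c          # boundary around the fridge only: negative
dS_room = Q_h / T_h             # the surroundings: positive
dS_total = dS_inside + dS_room  # the larger boundary: non-negative

print(dS_inside, dS_room, dS_total)
```

The sign of the answer depends entirely on where the boundary is drawn, which is the point of the paragraph above.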
As it happens, if you want to draw the boundary to exclude me, then the two-gas system without the pump also happens to be entropy-increasing...
This is precisely what I dispute you can get if you treat entropy as subjective uncertainty while also assuming that the only way to update subjective uncertainty is Bayesian conditionalization. Perhaps you can explain how the two-gas system turns out to be entropy-increasing on that view when you draw the boundary to exclude the observer. How does the entropy of the probability distribution describing the system increase?
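The difficulty can be checked numerically: under pure Bayesian conditionalization, the *expected* entropy of a distribution can only stay the same or go down, never up. A toy example with entirely made-up numbers:

```python
import math

def H(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical prior over four coarse states of the gas.
prior = [0.25, 0.25, 0.25, 0.25]
# Hypothetical likelihoods P(obs = yes | state) for a binary observation.
like_yes = [0.9, 0.6, 0.3, 0.1]

p_yes = sum(p * l for p, l in zip(prior, like_yes))
post_yes = [p * l / p_yes for p, l in zip(prior, like_yes)]
post_no = [p * (1 - l) / (1 - p_yes) for p, l in zip(prior, like_yes)]

# Averaged over possible observations, conditionalization cannot raise entropy.
expected_post_H = p_yes * H(post_yes) + (1 - p_yes) * H(post_no)
print(H(prior), expected_post_H)  # the second number can never exceed the first
```

This is just the standard fact that conditioning reduces entropy on average, which is why conditionalization alone seems unable to produce the increase being asked about.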
“The entropy of the probability distribution describing the system” only has meaning if there is an observer who actually holds that probability distribution. Since probability is in the mind, there is no fixed external thing that just “is” the probability distribution of the system.
There are two distinct things: one is “the system” and the other is “the probability distribution over states of the system.” If you make an idealization and do the math on “the system” alone, then the distributions in those idealizations are entropy-increasing (provided you exclude the observer and anything else external to the system). That does not correspond exactly to reality (because the system is not truly closed), but it is often a useful approximation for describing “the arrow of time.”
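A minimal sketch of that kind of idealization, with hypothetical parameters: two gases mix after a partition is removed, modeled as independent particles each hopping between the two halves of a box, and the entropy of the distribution over positions grows on its own, with no observer appearing anywhere in the description.

```python
import math

def h(q):
    """Binary entropy (bits) of one particle being in the left half."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

N = 100      # particles (hypothetical)
r = 0.1      # per-step probability a particle crosses the old partition line
q = 1.0      # start: certainly all on the left (partition just removed)

entropies = []
for t in range(30):
    entropies.append(N * h(q))      # entropy of the distribution over positions
    q = q * (1 - r) + (1 - q) * r   # relax toward q = 1/2 (fully mixed)

print(entropies[0], entropies[-1])  # climbs from 0 toward N bits
```

The stochastic hopping here stands in for coarse-graining over the deterministic dynamics; it is the standard move in such idealizations, not a claim about the true microdynamics.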
If you want to talk about “the probability distribution over states of the system,” then you must also include some observer with a mind of some sort; otherwise the notion of there being a probability distribution (as opposed to just whatever deterministically does in fact occur) doesn’t make semantic sense.
So, to speak of the “probability distribution of the system,” there has to be a Maxwell’s demon sitting there holding that distribution in its mind (i.e., some observer), and whatever entity is dissipating waste heat while carrying out the physical processes that update its beliefs must be increasing entropy.
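One standard way to put a number on the demon's waste heat is Landauer's bound: erasing (resetting) a bit of the demon's memory dissipates at least k_B·T·ln 2 of heat. A quick figure, with a purely hypothetical memory size:

```python
import math

K_B = 1.380649e-23  # J/K, Boltzmann constant

def landauer_heat(bits, temperature):
    """Minimum heat (J) dissipated to erase `bits` bits at the given
    temperature, per Landauer's bound."""
    return bits * K_B * temperature * math.log(2)

# Hypothetical demon resetting a 1-megabit memory at room temperature:
print(landauer_heat(1e6, 300.0))  # ~2.9e-15 J: tiny, but strictly positive
```

However small, the dissipation is nonzero, which is the usual resolution of the demon: the belief-updating machinery pays for the entropy it appears to destroy.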
Now I’m thoroughly confused about your position. Here are some claims to which you appear to have committed yourself:
(1) You can only talk about a probability distribution over the microstates of a system if you treat that system as a subsystem of some larger system that includes an observer.
(2) Entropy is just a measure of subjective uncertainty, which means it is (presumably) a property of a probability distribution.
(3) You can talk about the entropy of a system without including the observer, but this is just an idealization, and it does not involve a probability distribution over the microstates of the system.
To me, this third claim flatly contradicts the first and second. How can you talk about the entropy of something from a stat. mech. point of view without it being a property of a probability distribution? Is there really some completely different concept of entropy that comes into play when you exclude the observer from your analysis?
I will also note that the approach I talked about in my original comment does not deny that probability is in the mind. Probability can be “in the mind” without just being subjective uncertainty. Furthermore, accepting that probability is in the mind does not mean that one cannot attribute probability distributions to systems without explicitly representing the system as a subsystem of a supersystem containing an observer.
I appreciate your patience and your help in getting me to confront my confusions about this topic. Your answer is still unsatisfying to me, and that could well be my own ignorance at work. However, I cannot see how your answer is sustainable given the comments at both the Stack Exchange post and the John Baez link.
I think you’ve misunderstood me when you articulated the three positions listed above, but you’ve definitely hit upon my confusion, so I need to think about it more carefully and do a better job of saying what I want to say. I’ll think on it and write again when I get a chance this weekend.
Again, I do appreciate the patience in helping me understand it.