Demystifying the Second Law of Thermodynamics

(Crossposted from my personal website)

Thermodynamics is really weird. Most people have encountered a bad explanation of the basics at some point in school, but probably don’t remember more than

  • Energy is conserved

  • Entropy increases

  • There’s something called the ideal gas law/ideal gas equation.

Energy conservation is not very mysterious. Apart from some weirdness around defining energy in general, it’s just a thing you can prove from whatever laws of motion you’re using.

But entropy is very weird. You’ve heard that it measures “disorder” in some vague sense. Maybe you’ve heard that it’s connected to the Shannon entropy of a probability distribution. Probably the weirdest thing about it is the law it obeys: it’s not conserved, but rather it increases with time. This is more or less the only law like that in physics.

It gets even weirder when you consider that at least classical Newtonian physics is time-symmetric. Roughly speaking, this means if you have a movie of things interacting under the laws of Newton, and you play it backwards, they’re still obeying the laws of Newton. An orbiting moon just looks like it’s orbiting in the other direction, which is perfectly consistent. A stone which is falling towards earth and accelerating looks like it’s flying away from earth and decelerating—exactly as gravity is supposed to do.

But if there’s some “entropy” quality out there that only increases, then that’s obviously impossible! When you played the movie backwards, you’d be able to tell that entropy was decreasing, and if entropy always increases, some law is being violated. So what, is entropy some artefact of quantum mechanics? No, as it turns out. Entropy is an artefact of the fact that you can’t measure all the particles in the universe at once. And the fact that it seems to always increase is a consequence of the fact that matter is stable at large scales.

The points in this post are largely from E.T. Jaynes’ Macroscopic Prediction.

A proof that entropy doesn’t always increase

Let $X$ be the set of states of some physical system. Here I will assume that there is a finite number of states and time advances in discrete steps—there is some function $\tau : X \to X$ which steps time forward one step. We assume that these dynamics are time-reversible in the weak sense that $\tau$ is a bijection—every state is the future of exactly one “past” state. Let $S : X \to \mathbb{R}$ be some function. Assume $S(\tau(x)) \geq S(x)$ for all $x$ - in other words, $S$ can never decrease. Then $S$ is constant in time, i.e. $S(\tau(x)) = S(x)$ for all $x$.

Proof: Assume for contradiction that $S(\tau(x)) > S(x)$ for some $x$. Since $X$ is finite, let $\Sigma = \sum_{x \in X} S(x)$ be the sum of $S$ over all states. Then clearly $\sum_{x \in X} S(\tau(x)) = \Sigma$ as well, since $\tau(x)$ just ranges over all the $x$s. But on the other hand, we have $S(\tau(x)) \geq S(x)$ for all $x$, and $S(\tau(x)) > S(x)$ in at least one case. So we must have $\Sigma > \Sigma$ - contradiction.
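If you want to see this concretely, here is a quick brute-force check in Python (just a toy sketch; the four-element state space and the range of $S$ are arbitrary choices of mine): enumerate every bijection $\tau$ and every function $S$ on the state space, and search for one that never decreases but strictly increases somewhere.

```python
# Toy sketch: brute-force search for a counterexample to the theorem above.
# States are {0, 1, 2, 3}; tau ranges over all bijections (permutations);
# S ranges over all functions from states into {0, 1, 2}.
from itertools import permutations, product

states = range(4)

counterexamples = [
    (tau, S)
    for tau in permutations(states)                  # every bijection tau
    for S in product(range(3), repeat=len(states))   # every S: states -> {0, 1, 2}
    if all(S[tau[x]] >= S[x] for x in states)        # S never decreases under tau...
    and any(S[tau[x]] > S[x] for x in states)        # ...but strictly increases somewhere
]

print(counterexamples)  # [] -- no such S exists, just as the counting argument says
```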

This proof can be generalized to the continuous time and space case without too much trouble, for the types of dynamics that actually show up in physics (using Liouville’s Theorem). The proof above still requires a bounded phase volume (corresponding to the finiteness of $X$). To generalize to other situations we need some more assumptions—the easiest thing is to assume that the dynamics are time-reversible in a stronger sense, and that this is compatible with the entropy in some way.

(You can find easy counterexamples in general, e.g. if $X = \mathbb{Z}$ and the dynamics are $\tau(x) = x + 1$, then obviously we really do have that $S(x) = x$ is increasing. Nothing to do about that.)

Anyways the bounded/finite versions of the theorems do hold for a toy thermodynamic system like particles in a (finite) box—here the phase volume really is bounded.

The true meaning of entropy

Okay, so what the hell is going on? Did your high school physics textbook lie to you about this? Well, yes. But you’re probably never going to observe entropy going down in your life, so you can maybe rest easy.

Let $X$ be the set of states of the physical system under consideration again. But suppose now that we can’t observe the state $x \in X$, but only some “high-level description” $M(x)$. Maybe $x$ is the total microscopic state of every particle in a cloud of gas—their position and momentum—while $M(x)$ is just the average energy of the particles (roughly corresponding to the temperature). $x$ is called a microstate and $M(x)$ is called a macrostate. Then the entropy of a macrostate $M$ is $S(M) = \log \left|\{x : M(x) = M\}\right|$ - the logarithm of the number of microstates $x$ where $M(x) = M$. We say these are the microstates that realize the macrostate $M$.

The connection with Shannon entropy is now that this is exactly the Shannon entropy of the uniform distribution over $\{x : M(x) = M\}$. This is the distribution you should have over microstates if you know nothing except the macrostate. In other words, the entropy measures your uncertainty about the microstate given that you know nothing except the macrostate.
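As a toy illustration (the setup here, ten two-state particles, is just an arbitrary example of mine): let the macrostate be the number of particles in state 1. Then the entropy of each macrostate is the log of the number of microstates with that many ones.

```python
# Toy example: N two-state "particles"; the macrostate is the number of 1s.
# The entropy of a macrostate is the log of the number of microstates realizing it.
from itertools import product
from math import log
from collections import Counter

N = 10
microstates = list(product([0, 1], repeat=N))         # all 2^N microstates
macrostate = lambda x: sum(x)                         # observe only the count of 1s

counts = Counter(macrostate(x) for x in microstates)  # microstates per macrostate
entropy = {M: log(c) for M, c in counts.items()}      # S(M) = log |{x : macrostate(x) = M}|

print(entropy[0])       # 0.0 -- exactly one microstate realizes "all zeros"
print(entropy[N // 2])  # the half-and-half macrostate has the most microstates
```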

There are more sophisticated versions of this definition in general, to account for the fact that

  • In general, your microstates are probably tuples of real numbers, and there are probably infinitely many of them compatible with the macrostate, so we need a notion of “continuous entropy” (usually called differential entropy, I think)

  • Your measurement of the macrostate is probably not that certain (but this turns out to matter surprisingly little for thermodynamic systems),

but this is the basic gist.

Why entropy usually goes up

Okay, so why does entropy go up? Because there are more high-entropy states than low-entropy states. That’s what entropy means. If you don’t know anything about what’s gonna happen to $x$ (in reality, you usually understand the dynamics themselves, but have absolutely no information about $x$ except the macrostate), it’s more likely that it will transition to a macrostate with a higher number of representatives than to one with a lower number of representatives.

This also lets us defuse our paradox from above. In reality, entropy doesn’t go up for literally every microstate $x$. It’s not true that $S(M(\tau(x))) \geq S(M(x))$ for all $x$ - I proved above that this is impossible (unless the entropy is constant). What can be true is this: given a certain macrostate, it’s more probable that entropy increases than that it decreases.

We can consider an extreme example where we have just two macrostates, $L$ and $H$, corresponding to low and high entropy. Clearly the number of low-entropy states that go to a high-entropy state is exactly the same as the number of high-entropy states that go to a low-entropy state: since $\tau$ is a bijection, $L$ loses exactly as many states as it gains. That’s combinatorics. But the fraction of low-entropy states that go to high-entropy states is then necessarily larger than the fraction of high-entropy states that go to low-entropy states, simply because there are far fewer low-entropy states.

In other words,

$$P\big(M(\tau(x)) = H \,\big|\, M(x) = L\big) \;>\; P\big(M(\tau(x)) = L \,\big|\, M(x) = H\big),$$

where $x$ is drawn uniformly from the stated macrostate.
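A quick numerical sanity check of this (the numbers below are made up): take a uniformly random bijection on a state space with 10 “low-entropy” states and 10,000 “high-entropy” states, and compare the transition counts and fractions.

```python
# Toy sketch: a random bijection on a state space split into a few low-entropy
# states and many high-entropy states.
import random

n_low, n_high = 10, 10_000
states = range(n_low + n_high)
is_low = lambda x: x < n_low                 # the first n_low states are "low entropy"

tau = list(states)
random.shuffle(tau)                          # a uniformly random bijection

low_to_high = sum(1 for x in states if is_low(x) and not is_low(tau[x]))
high_to_low = sum(1 for x in states if not is_low(x) and is_low(tau[x]))

print(low_to_high == high_to_low)                  # True: the counts are always equal
print(low_to_high / n_low, high_to_low / n_high)   # but the *fractions* differ hugely
```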

Why entropy (almost) always goes up

Okay, but that’s a lot weaker than “entropy always increases”! How do we get from here to there? I could say some handwavy stuff here about how the properties of thermodynamic systems mean that the differences in the number of representatives between high-entropy and low-entropy states are massive—and that means the right-hand probability above can’t possibly be non-negligible. And that in general this works out so that entropy is almost guaranteed to increase.

But that’s very unsatisfying. It just happened to work out that way? I have a much more satisfying answer: entropy almost always increases because matter is stable at large scales.

Wait, what? What does that mean?

By “matter is stable at large scales”, I mean that the macroscopic behaviour of matter is predictable from macroscopic observations alone. When a bricklayer builds a house, they don’t first go over the bricks with a microscope to make sure their microstates aren’t going to surprise us later. And as long as we know the temperature and pressure of a gas, we can pretty much predict what will happen if we compress it with a piston.

What this means is that, if $M(x_1) = M(x_2)$, then with extremely high probability, $M(\tau(x_1)) = M(\tau(x_2))$. It might not be literally certain, but it’s sure enough.

Now, let’s say we’re in the macrostate $M_0$. Then there is some macrostate $M_1$ which is extremely likely to be the next one. For very nearly all $x$ with $M(x) = M_0$, we have $M(\tau(x)) = M_1$. But this means that $M_1$ must have at least that many microstates representing it, since $\tau$ is a bijection. So the entropy of $M_1$ can at most be a tiny bit smaller than the entropy of $M_0$ - the difference is at most about as tiny as the fraction of $x$ with $M(\tau(x)) \neq M_1$, so we can ignore it.
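To spell out that last step: if all but a fraction $\varepsilon$ of the microstates realizing $M_0$ land in $M_1$, then since $\tau$ is a bijection,

$$\left|\{x : M(x) = M_1\}\right| \;\geq\; (1 - \varepsilon)\,\left|\{x : M(x) = M_0\}\right|,$$

and taking logarithms gives $S(M_1) \geq S(M_0) + \log(1 - \varepsilon) \approx S(M_0) - \varepsilon$ for small $\varepsilon$.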

So unless something super unlikely happens and $M(\tau(x)) \neq M_1$, entropy doesn’t go down - and by the counting argument from the previous section, it typically goes up.

By the way, this also explains what goes wrong with time-reversibility, and why in reality, you can easily tell that a video is going backwards. The “highly probable dynamics” $\tau_M$, which take each macrostate to its most probable next macrostate, don’t have to be time-reversible. For instance, let’s return to the two-macrostate system above. Suppose that with 100% certainty, low-entropy states become high-entropy. Let there be $n_L$ low-entropy states and $n_H$ high-entropy states. Then, just because $\tau$ is a bijection, there must be $n_L$ high-entropy states that become low-entropy. Now if $n_H \gg n_L$, then practically all high-entropy states go to other high-entropy states. So $\tau_M(L) = H$, but $\tau_M(H) = H$ rather than $L$.
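Here is the same point numerically (again with made-up numbers): force every low-entropy state to map to a high-entropy state, fill in the rest of the bijection at random, and look at where the high-entropy states go.

```python
# Toy sketch: a bijection where low-entropy states always become high-entropy.
import random

n_low, n_high = 10, 10_000
states = list(range(n_low + n_high))
is_low = lambda x: x < n_low

high_states = [x for x in states if not is_low(x)]
random.shuffle(high_states)

tau = {}
for x, target in zip(range(n_low), high_states[:n_low]):
    tau[x] = target                                   # every low state -> some high state

leftover = high_states[n_low:] + list(range(n_low))   # remaining targets, for the high states
random.shuffle(leftover)
for x, target in zip(range(n_low, n_low + n_high), leftover):
    tau[x] = target                                   # high states fill in the rest bijectively

high_to_high = sum(1 for x in states if not is_low(x) and not is_low(tau[x]))
print(high_to_high / n_high)   # ~0.999: the most likely macro-step from H is back to H,
                               # so the macro-dynamics send both L and H to H -- not reversible
```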

Of course in reality, if you start with a low-entropy state and watch this unfold for a really long time, you’ll eventually see it become a low-entropy state again. It’s just extremely unlikely to happen in a short amount of time.

Entropy is not exactly your uncertainty about the microstate

The entropy of a given macrostate is the uncertainty about the microstate of an observer who knows only the macrostate. In general, you have more information than this. For example, if the system starts in a low-entropy state, and you let it evolve into a high-entropy state, you know that the system is in one of the very small number of high-entropy states which come from low-entropy states! But since you can only interact with the system on macroscales, this information won’t be useful.