# Entropy and Temperature

Eliezer Yudkowsky previously wrote (6 years ago!) about the second law of thermodynamics. Many commenters were skeptical about the statement, “if you know the positions and momenta of every particle in a glass of water, it is at absolute zero temperature,” because they don’t know what temperature is. This is a common confusion.

**Entropy**

To specify the precise state of a classical system, you need to know its location in phase space. For a bunch of helium atoms whizzing around in a box, phase space is the position and momentum of each helium atom. For *N* atoms in the box, that means 6*N* numbers to completely specify the system.

Let’s say you know the total energy of the gas, but nothing else. It will be the case that a fantastically huge number of points in phase space will be consistent with that energy.* In the absence of any more information it is correct to assign a uniform distribution to this region of phase space. The entropy of a uniform distribution is the logarithm of the number of points, so that’s that. If you also know the volume, then the number of points in phase space consistent with both the energy and volume is necessarily smaller, so the entropy is smaller.
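This counting picture can be made concrete with a toy example. The sketch below uses a hypothetical system of four two-state particles (all numbers are arbitrary choices for illustration): entropy is the log of the number of microstates consistent with what you know, and learning more shrinks that number.

```python
from itertools import product
from math import log

# Toy system: 4 two-state particles, each with energy 0 or 1 (arbitrary units).
states = list(product([0, 1], repeat=4))

# Knowing only the total energy E = 2: count the consistent microstates.
consistent = [s for s in states if sum(s) == 2]
S_energy_only = log(len(consistent))  # entropy = log(number of states)

# Learning more (say, that the first particle is excited) shrinks the set,
# so the entropy drops.
more_info = [s for s in consistent if s[0] == 1]
S_more_info = log(len(more_info))

print(len(consistent), len(more_info))  # 6 and 3
print(S_energy_only > S_more_info)      # True: more knowledge, less entropy
```
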

This might be confusing to chemists, since they memorized a formula for the entropy of an ideal gas, and it’s ostensibly objective. Someone with perfect knowledge of the system will calculate the same number on the right side of that equation, but to them, that number isn’t the entropy. It’s the entropy of the gas if you know nothing more than energy, volume, and number of particles.

**Temperature**

The existence of temperature follows from the zeroth and second laws of thermodynamics: thermal equilibrium is transitive, and entropy is maximum in equilibrium. Temperature is then defined as the thermodynamic quantity that is shared by systems in equilibrium.

If two systems are in equilibrium then they cannot increase entropy by flowing energy from one to the other. That means that if we flow a tiny bit of energy from one to the other (*δU*_{1} = -*δU*_{2}), the entropy change in the first must be the opposite of the entropy change of the second (*δS*_{1} = -*δS*_{2}), so that the total entropy (*S*_{1} + *S*_{2}) doesn’t change. For systems in equilibrium, this leads to (*∂S*_{1}/*∂U*_{1}) = (*∂S*_{2}/*∂U*_{2}). Define 1/*T* = (*∂S*/*∂U*), and we are done.
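The equilibrium condition can be checked numerically. The sketch below uses a hypothetical toy entropy function *S*(*U*) = *N* ln *U* (chosen only because its derivative is easy to verify by hand, not because it describes any particular material): maximizing the total entropy over the energy split reproduces (*∂S*_{1}/*∂U*_{1}) = (*∂S*_{2}/*∂U*_{2}), i.e. equal temperatures.

```python
import numpy as np

# Toy entropy S(U) = N * log(U): a hypothetical form, chosen only so the
# maximization is easy to check analytically (dS/dU = N/U, so T = U/N).
N1, N2, U_total = 3.0, 5.0, 16.0

def S_total(U1):
    U2 = U_total - U1
    return N1 * np.log(U1) + N2 * np.log(U2)

# Maximize total entropy over how the energy is split between the systems.
U1_grid = np.linspace(0.01, U_total - 0.01, 100000)
U1_eq = U1_grid[np.argmax(S_total(U1_grid))]

# At the maximum, dS1/dU1 = dS2/dU2, so the two temperatures agree.
T1 = U1_eq / N1
T2 = (U_total - U1_eq) / N2
print(U1_eq, T1, T2)  # U1_eq close to 6.0, and T1 and T2 both close to 2.0
```
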

Temperature is sometimes taught as, “a measure of the average kinetic energy of the particles,” because for an ideal gas *U*/*N* = (3/2)*k*_{B}*T*. This is wrong as a definition, for the same reason that the ideal gas entropy isn’t the definition of entropy.

Probability is in the mind. Entropy is a function of probabilities, so entropy is in the mind. Temperature is a derivative of entropy, so temperature is in the mind.

**Second Law Trickery**

With perfect knowledge of a system, it is possible to extract all of its energy as work. EY states it clearly:

> So (again ignoring quantum effects for the moment), if you know the states of all the molecules in a glass of hot water, it is cold in a genuinely thermodynamic sense: you can take electricity out of it and leave behind an ice cube.

Someone who doesn’t know the state of the water will observe a violation of the second law. This is allowed. Let that sink in for a minute. Jaynes calls it second law trickery, and I can’t explain it better than he does, so I won’t try:

> A physical system always has more macroscopic degrees of freedom beyond what we control or observe, and by manipulating them a trickster can always make us see an apparent violation of the second law.
>
> Therefore the correct statement of the second law is not that an entropy decrease is impossible in principle, or even improbable; rather that it cannot be achieved reproducibly by manipulating the macrovariables {*X*_{1}, …, *X*_{n}} that we have chosen to define our macrostate. Any attempt to write a stronger law than this will put one at the mercy of a trickster, who can produce a violation of it.
>
> But recognizing this should increase rather than decrease our confidence in the future of the second law, because it means that if an experimenter ever sees an apparent violation, then instead of issuing a sensational announcement, it will be more prudent to search for that unobserved degree of freedom. That is, the connection of entropy with information works both ways; seeing an apparent decrease of entropy signifies ignorance of what were the relevant macrovariables.

**Homework**

I’ve actually given you enough information on statistical mechanics to calculate an interesting system. Say you have *N* particles, each fixed in place to a lattice. Each particle can be in one of two states, with energies 0 and ε. Calculate and plot the entropy if you know the total energy: *S*(*E*), and then the energy as a function of temperature: *E*(*T*). This is essentially a combinatorics problem, and you may assume that *N* is large, so use Stirling’s approximation. What you will discover should make sense using the correct definitions of entropy and temperature.

*: How many combinations of 10^{23} numbers between 0 and 10 add up to 5×10^{23}?


This is a good article making a valuable point. But this —

— is a confusing way to speak. There is such a thing as “the average kinetic energy of the particles”, and one measure of this thing is called “temperature” in some contexts. There is nothing wrong with this as long as you are clear about what context you are in.

If you fall into the sun, your atoms will be strewn far and wide, and it won’t be because of something “in the mind”. There is a long and perfectly valid convention of calling the relevant feature of the sun its “temperature”.

An alternate phrasing (which I think makes it clearer) would be: “the *distinction* between mechanical and thermal energy is in the mind, and because we associate temperature with thermal but not mechanical energy, it follows that two observers of the same system can interpret it as having two different temperatures without inconsistency.”

In other words, if you fall into the sun, your atoms will be strewn far and wide, yes, but your atoms will be equally strewn far and wide if you fall into an ice-cold mechanical woodchipper. The distinction between the types of energy used for the scattering process is what is subjective.

The high-school definition of temperature as “a measure of the average kinetic energy of the particles” (see the grandparent comment) actually erases that distinction as it defines temperature through kinetic (mechanical) energy.

I didn’t read your comment carefully enough. Yes, we agree.

Right, but we *don’t* think of a tennis ball falling in a vacuum as gaining thermal energy or rising in temperature. It is “only” gaining mechanical kinetic energy; a high school student would say that “this is not a thermal energy problem,” even though the ball does have an average kinetic energy (kinetic energy, divided by 1 ball). But if the temperature of something that we *do* think of as hot is just average kinetic energy, then there is a sense in which the entire universe is “not a thermal energy problem.”

That’s because temperature is a characteristic of a multi-particle system. One single particle has energy; a large set of many particles has temperature.

And still speaking of high-school physics, conversion between thermal and kinetic energy is trivially easy and happens all the time around us.

A tennis ball is a multi-particle system; however, all of the particles are accelerating more or less in unison while the ball free-falls. Nonetheless, it isn’t usually considered to be increasing in temperature, because the entropy isn’t increasing much as it falls.

I think more precisely, there is such a thing as “the average kinetic energy of the particles”, and this agrees with the more general definition of temperature “1 / (derivative of entropy with respect to energy)” in very specific contexts.

That there is a more general definition of temperature which is *always* true is worth emphasizing.

Rather than ‘in very specific contexts’ I would say ‘in any normal context’. Just because it’s not universal doesn’t mean it’s not the overwhelmingly common case.

I am not sure this is true as stated. An omniscient Maxwell demon that would only allow hot molecules out runs into a number of problems, and an experimentally constructed Maxwell’s demon works by converting coherent light (low entropy) into incoherent (high entropy).

Maxwell’s demon, as criticized in your first link, isn’t omniscient. It has to observe incoming particles, and the claim is that this process generates the entropy.

[Spoiler alert: I can’t find any ‘spoiler’ mode for comments, so I’m just going to give the answers here, after a break, so collapse the comment if you don’t want to see that]

.

.

.

.

.

.

.

.

.

.

For the entropy (in natural units), I get

S(E) = N ln N - (E/ε) ln(E/ε) - (N - E/ε) ln(N - E/ε)

and for the energy, I get

E(T) = εN / (e^{ε/T} - 1)

Is this right? (upon reflection and upon consulting graphs, it seems right to me, but I don’t trust my intuition for statistical mechanics)

Not quite, but close. It should be a + instead of a - in the denominator. Nice work, though.

You have the right formula for the entropy. Notice that it is nearly identical to the Bernoulli distribution entropy. That should make sense: there is only one state with energy 0 or Nε, so the entropy should go to 0 at those limits. Its maximum is at Nε/2. Past that point, adding energy to the system actually decreases entropy. This leads to a negative temperature!

But we can’t actually reach that by raising its temperature. As we raise temperature to infinity, energy caps at Nε/2 (specific heat goes to 0). To put more energy in, we have to actually find some particles that are switched off and switch them on. We can’t just put it in equilibrium with a hotter thing.

I made a plot of the entropy and the (correct) energy. Every feature of these plots should make sense.

Note that the exponential turn-on in E(T) is a common feature to any gapped material. Semiconductors do this too :)
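These features can be sanity-checked directly. The sketch below evaluates the corrected result from this thread, E(T) = Nε/(e^{ε/T} + 1), showing the exponential turn-on at low T, the saturation at Nε/2 as T → ∞, and the E > Nε/2 region reached only at negative temperature (the values of N and ε are arbitrary):

```python
import numpy as np

# Two-level system from the homework: N particles, energies 0 and eps.
# E(T) = N*eps / (exp(eps/T) + 1), the corrected formula from this thread.
N, eps = 1.0e23, 1.0

def E(T):
    return N * eps / (np.exp(eps / T) + 1.0)

# Low T: exponential turn-on, E ~ N*eps*exp(-eps/T) (gapped behavior).
print(E(0.05) / (N * eps))   # tiny, on the order of exp(-20)
# High T: E saturates at N*eps/2, never beyond, so specific heat -> 0.
print(E(1e6) / (N * eps))    # close to 0.5
# States with E > N*eps/2 correspond to negative temperature:
print(E(-1.0) / (N * eps))   # above 0.5
```
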

Why did you only show the E(T) function for positive temperatures?

This is a good point. The negative side gives good intuition for the “negative temperatures are hotter than any positive temperature” argument.

What gives a better intuition is thinking in inverse temperature.

Regular temperature is, ‘how weakly is this thing trying to grab more energy so as to increase its entropy’.

Inverse temperature is ‘how strongly...’ and when that gets down to 0, it’s natural to see it continue on into negatives, where it’s trying to shed energy to increase its entropy.

No reason. Fixed.

The energy/entropy plot makes total sense, the energy/temperature doesn’t really because I don’t have a good feel for what temperature actually is, even after reading the “Temperature” section of your argument (it previously made sense because Mathematica was only showing me the linear-like part of the graph). Can you recommend a good text to improve my intuition? Bonus points if this recommendation arrives in the next 9.5 hours, because then I can get the book from my university library.

Depends on your background in physics. Landau & Lifshitz Statistical Mechanics is probably the best, but you won’t get much out of it if you haven’t taken some physics courses.

I gave this a shot as well, since your value for E(T) → ∞ as T → ∞, while I would think the system should cap out at εN.

I get a different value for S(E), reasoning:

If E/ε is 1, there are N microstates, since 1 of N positions is at energy ε. If E/ε is 2, there are N(N-1) microstates. etc. etc, giving for E/ε = x that there are N!/(N-x)!

so S = ln[N!/(N-x)!] = ln(N!) - ln((N-x)!) = N ln N - (N-x) ln(N-x)

S(E) = N ln N - (N - E/ε) ln(N - E/ε)

Can you explain how you got your equation for the entropy?

Going on I get E(T) = ε(N - e^(ε/T − 1))

This also looks wrong, as although E → ∞ as T → ∞, it also doesn’t cap at exactly εN, and E → -∞ for T→ 0...

I’m expecting the answer to look something like: E(T) = εN(1 - e^(-ε/T))/2 which ranges from 0 to εN/2, which seems sensible.

EDIT: Nevermind, the answer was posted while I was writing this. I’d still like to know how you got your S(E) though.

S(E) is the log of the number of states in phase space that are consistent with energy E. Having energy E means that E/ε particles are excited, so we get (N choose E/ε) states. Now take the log :)
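For what it’s worth, the Stirling-approximated entropy used in this thread agrees closely with the exact log of (N choose E/ε) for large N. A quick sketch (the values of N and M below are arbitrary illustrations):

```python
from math import lgamma, log

# Exact log of (N choose M) via log-gamma, compared with the Stirling-
# approximated entropy from the thread: S ~ N ln N - M ln M - (N-M) ln(N-M).
N, M = 10**6, 3 * 10**5  # arbitrary large numbers for illustration

S_exact = lgamma(N + 1) - lgamma(M + 1) - lgamma(N - M + 1)
S_stirling = N * log(N) - M * log(M) - (N - M) * log(N - M)

print(S_exact, S_stirling)
print(abs(S_exact - S_stirling) / S_exact)  # small relative error
```

The dropped correction terms grow only logarithmically in N, which is why the approximation gets relatively better as N grows.
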

This is related to the physics of computation: the ultimate physical computers operate at temperatures approaching 0 K (reversible computing, the Landauer principle). Heat/entropy is computational stupidity.

Incidentally, this also explains the Fermi paradox: post-singularity civilizations migrate away from their hot stars into the cold interstellar spaces, becoming dark matter. (Which, however, does not imply that all cold dark matter is intelligent.)

I think I’ve figured out what’s bothering me about this. If we think of temperature in terms of our uncertainty about where the system is in phase space, rather than how large a region of phase space fits the macroscopic state, then we gain a little in using the second law, but give up a lot everywhere else. Unless I am mistaken, we lose the following:

Heat flows from hot to cold

Momentum distribution can be predicted from temperature

Phase changes can be predicted from temperature

The reading on a thermometer can be predicted from temperature

I’m sure there are others. I realize that if we know the full microscopic state of a system, then we don’t need to use temperature for these things, but then we wouldn’t need to use temperature at all.

If you’re able to do this, I don’t see why you’d be using temperature at all, unless you want to talk about how hot the water is to begin with (as you did), in which case you’re referring to the temperature that the water would be if we had no microscopic information.

We don’t lose those things. Remember, this isn’t my definition. This is the actual definition of temperature used by statistical physicists. Anything statistical physics predicts (all of the things you listed) is predicted by this definition.

You’re right though. If you know the state of the molecules in the water then you don’t need to think about temperature. That’s a feature, not a bug.

Suppose that you boil some water in a pot. You take the pot off the stove, and then take a can of beer out of the cooler (which is filled with ice) and put it in the water. The place where you’re confusing your friends by putting cans of beer in pots of hot water is by the ocean, so when you read the thermometer that’s in the water, it reads 373 K. The can of beer, which was in equilibrium with the ice at a measured 273 K, had some bits of ice stuck to it when you put it in. They melt. Next, you pull out your fancy laser-doppler-shift-based water molecule momentum spread measurer. The result jibes with 373 K liquid water. After a short time, you read the thermometer as 360 K (the control pot with no beer reads 371 K). There is no ice left in the pot. You take out the beer, open it, and measure its temperature to be 293 K and its momentum width to be smaller than that of the boiling water.

What we observed was:

Heat flowed from 373 K water to 273 K beer

The momentum distribution is wider for water at 373 K than at 293 K

Ice placed in 373 K water melts

Our thermometer reads 373 K for boiling water and 273 K for water-ice equilibrium

Now, suppose we do exactly the same thing, but just after putting the beer in the water, Omega tells us the state of every water molecule in the pot, but not the beer. Now we know the temperature of the water is exactly 0 K. We still anticipate the same outcome (perhaps more precisely), and observe the same outcome for all of our measurements, but we describe it differently:

Heat flowed from 0 K water to 273 K beer

The momentum distribution is wider for water at 0 K (or recently at 0 K) than at 293 K

Ice placed in 0 K water melts

Our thermometer reads 373 K for water boiling at 0 K, and 273 K for water-ice equilibrium

So the only difference is in the map, not the territory, and it seems to be only in how we’re labeling the map, since we anticipate the same outcome using the same model (assuming you didn’t use the specific molecular states in your prediction).

I agree that temperature should be defined so that 1/T = dS/dE . This is the definition that, as far as I can tell, all physicists use. But nearly every result that uses temperature is derived using the assumption that all microstates are equally probable (your second law example being the only exception that I am aware of). In fact, this is often given as a fundamental assumption of statistical mechanics, and I think this is what makes the “glass of water at absolute zero” comment confusing. (Moreover, many physicists, such as plasma physicists, will often say that the temperature is not well-defined unless certain statistical conditions are met, like the energy and momentum distributions having the correct form, or the system being locally in thermal equilibrium with itself.)

I’m having trouble with brevity here, but what I’m getting at is that if you want to show that we can drop the fundamental postulate of statistical mechanics and *still* recover the second law of thermodynamics, then I’m happy to call it a feature rather than a bug. But it seems like bringing in temperature confuses the issue rather than clarifying it.

Omega tells us the state of the water at time T=0, when we put the beer into it. There are two ways of looking at what happens immediately after.

The first way is that the water doesn’t flow heat into the beer, rather it does some work on it. If we know the state of the beer/water interface as well then we can calculate exactly what will happen. It will look like quick water molecules thumping into slow boundary molecules and doing work on them. This is why the concept of temperature is no longer necessary: if we know everything then we can just do mechanics. Unfortunately, we don’t know everything about the full system, so this won’t quite work.

Think about your uncertainty about the state of the water as you run time forward. It’s initially zero, but the water is in contact with something that could be in any number of states (the beer), and so the entropy of the water is going to rise extremely quickly.

The water will initially be doing work on the beer, but after an extremely short time it will be flowing heat into it. One observer’s work is another’s heat, essentially.

This actually clears things up quite a lot. I think my discomfort with this description is mainly aesthetic. Thank you for being patient.

The rule that all microstates that are consistent with a given macrostate are equally probable is a consequence of the maximum entropy principle. See this Jaynes paper.

I am not sure what this means. In what sense is probability in the mind, but energy isn’t? Or if energy is in the mind, as well, what physical characteristic is not and why?

In the standard LW map/territory distinction, probability, entropy, and temperature are all features of the map. Positions, momenta, and thus energy are features of the territory.

I understand that this doesn’t fit your metaphysics, but I think it should still be a useful concept. Probably.

Sorry, I wasn’t clear. I didn’t use “my metaphysics” here, just the standard physical realism, with maps and territories. Suppose energy is the feature of the territory… because it’s the “capacity to do work”, using the freshman definition. Why would you not define temperature as the capacity to transfer heat, or something? And probability is already defined as the rate of decay in many cases… or is that one in the mind, too?

Energy is a feature of the territory because it’s a function of position and momenta and other territory-things.

“Capacity to transfer heat” can mean a few things, and I’m not sure which you want. There’s already heat capacity, which is how much actual energy is stored per degree of temperature. To find the total internal heat energy you just have to integrate this up to the current temperature. The usual example here is an iceberg which stores much more heat energy than a cup of coffee, and yet heat flows from the coffee to the iceberg. If you mean something more like “quantity that determines which direction heat will flow between two systems,” then that’s just the definition I presented :p

I actually have trouble defending “probability is in the mind” in some physics contexts without invoking many-worlds. If it turns out that many-worlds is wrong and copenhagen, say, is right, then it will be useful to believe that for physical processes, probability is in the territory. I think. Not too sure about this.

Yeah, that’s a better definition :)

Feel free to elaborate. I’d think that probability is either in the map or in the territory, regardless of the context or your QM ontology, not sometimes here and sometimes there. And if it is in the territory, then so is entropy and temperature, right?

Say we have some electron in equal superposition of spin up and down, and we measure it.

In Copenhagen, the universe decides that it’s up or down right then and there, with 50% probability. This isn’t in the mind, since it’s not a product of our ignorance. It can’t be, because of Bell stuff.

In many-worlds, the universe does a deterministic thing and one branch measures spin up, one measures spin down. The probability is in my mind, because it’s a product of my ignorance—I don’t know what branch I’m in.

Hmm, so, assuming there is no experimental distinction between the two interpretations, there is no way to tell the difference between map and territory, not even in principle? That’s disconcerting. I guess I see what you mean by ” trouble defending “probability is in the mind”″.

If we inject some air into a box then close our eyes for a few seconds and shove in a partition, there is a finite chance of finding the nitrogen on one side and the oxygen on the other. Entropy can decrease, and that’s allowed by the laws of physics. The internal energy had better not change, though. That’s disallowed.

If energy changes, our underlying physical laws need to be reexamined. In the official dogma, these are firmly in the territory. If entropy goes down, our map was likely wrong.

That doesn’t follow.

Even if Copenhagen is right, I as a rational agent still ought to use mind-probabilities. It may be the case that the quantum world is truly probabilistic-in-the-territory, but that doesn’t affect the fact that I don’t know the state of any physical system precisely.

Can’t there be forms of probability in the territory *and* the map?

You’re most of the way towards why you shouldn’t believe the Jaynes-Yudkowsky argument.

If you really can infer the absence of probability in the territory by reflecting on human reasoning alone, then the truth of Copenhagen versus MWI shouldn’t matter. If it matters, as you seem to think, then armchair reasoning can’t do what Jaynes and Yudkowsky think it can (in this case).

It’s a reference to a bad, but locally popular, argument from Jaynes. It holds that since some forms of probability are subjective, they all are, and… ta-daaa… the territory is therefore deterministic.

Name a probability that is *not* subjective. (And before you bring up quantum-mechanical collapse, I’d just like to say one thing: MWI. And before you complain about unfalsifiability, let me link you here.)

I don’t need definite proof of in-the-territory probability to support my actual point, which is that you can’t determine the existence or non-existence of features of the territory by armchair reflection.

Of course you can’t *determine* whether something exists or not. There might yet be other probabilities out there that actually *are* objective. The fact that we have not discovered any such thing, however, is telling. Absence of evidence is evidence of absence. Therefore, it is *likely*, not *certain*, but *likely*, that no such probabilities exist. If your claim is that we cannot be certain of this, then of course you are correct. Such a claim, however, is trivial.

Thanks for this. I am definitely going to use the Gibbs paradox (page 3 of the Jaynes paper) to nerd-snipe my physics-literate friends.

I’ll follow suit with the previous spoiler warning.

SPOILER ALERT .

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

I took a bit different approach from the others that have solved this, or maybe you’d just say I quit early once I thought I’d shown the thing I thought you were trying to show:

If we write entropy in terms of the number of particles, N and the fraction of them that are excited: α ≡ E/(Nε) , and take the derivative with respect to α, we get:

dS/dα = N log [(1-α)/α]

Or if that N is bothering you (since temperature is usually an intensive property), we can just write:

T = 1/(dS/dE) = E / log[(1-α)/α]

This will give us zero temperature for all excited or no excited particles (which makes sense, because you know exactly where you are in phase space), and it blows up at half particles are excited. This means that there is no reservoir hot enough to get from α < .5 to α = .5 .

I posted some plots in the comment tree rooted by DanielFilan. I don’t know what you used as the equation for entropy, but your final answer isn’t right. You’re right that temperature should be intensive, but the second equation you wrote for it is still extensive, because E is extensive :p

You’re right. That should be ε, not E. I did the extra few steps to substitute α = E/(Nε) back in, and solve for E, to recover DanielFilan’s (corrected) result:

E = Nε / (exp(ε/T) + 1)

I used S = log[N choose M], where M is the number of excited particles (so M = αN). Then I used Stirling’s approximation as you suggested, and differentiated with respect to α.
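One way to close the loop on this: the corrected E(T) should invert 1/T = dS/dE for the Stirling entropy S(E) = N ln N - (E/ε) ln(E/ε) - (N - E/ε) ln(N - E/ε). A quick numerical sketch (the values of N, ε, and T are arbitrary):

```python
from math import log, exp

# Check that E(T) = N*eps / (exp(eps/T) + 1) satisfies dS/dE = 1/T
# for the Stirling-approximated entropy from this thread.
N, eps = 1000.0, 1.0

def S(E):
    x = E / eps  # number of excited particles
    return N * log(N) - x * log(x) - (N - x) * log(N - x)

T = 0.7  # arbitrary test temperature
E_formula = N * eps / (exp(eps / T) + 1.0)

# Numerical derivative dS/dE at E_formula should equal 1/T.
h = 1e-6
dS_dE = (S(E_formula + h) - S(E_formula - h)) / (2 * h)
print(dS_dE, 1.0 / T)  # the two should agree closely
```

Analytically, dS/dE = (1/ε) ln((N - E/ε)/(E/ε)), and plugging in E(T) gives exactly 1/T, so the numerical check should agree to the accuracy of the finite difference.
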

Good show!

I am not quite sure in which way this statement is useful.

“…and for an encore goes on to prove that black is white and gets himself killed on the next zebra crossing.” (Douglas Adams)

I had that thought as well, but the ‘Second Law Trickery’ section convinced me that it was a useful statement.

I’ll grant that it is an *interesting* statement, but at the moment my impression is that it’s just *redefining* the word “temperature” in a particular way.

I don’t know of any way that statement in particular is useful, but understanding the model that produces it can be helpful. For example, it’s possible to calculate the minimum amount of energy necessary to run a certain computation on a computer at a certain temperature. It’s further useful in that it shows that if the computation is reversible, there is no minimum energy.

The model is fine; what I’m having problems with is the whole “in the mind” business, which goes straight to philosophy and seems completely unnecessary for the discussion of properties of classical systems in physics.

Entropy is statistical laws. Thus, like statistics, it’s in the mind. It’s also no more philosophical than statistics is, and not psychological at all.

I have a feeling you’re confusing the map and the territory. Just because statistics (defined as a toolbox of methods for dealing with uncertainty) exists in the mind, there is no implication that uncertainty exists only in the mind as well. Half-life of a radioactive element is a statistical “thing” that exists in real life, not in the mind.

In the same way, phase changes of a material exist in the territory. You can usefully define temperature as a particular metric such that water turns into gas at 100 and turns into ice at zero. Granted, this approach has its limits but it does not seem to depend on being “in the mind”.

The half-life of a radioactive element is something that can be found without using probability. It is the time it takes for the measure of the universes in which the atom is still whole to be exactly half of the initial measure. Similarly, phase change can be defined without using probability.

The universe may be indeterministic (though I don’t think it is), but all this means is that the past is not sufficient to conclude the future. A mind that already knows the future (perhaps because it exists further in the future) would still know the future.

So, does your probability-less half-life require MWI? That’s not a good start. What happens if you are unwilling to just assume MWI?

Why do you think such a thing is possible?

Even without references to MWI, I’m pretty sure you can just say the following: if at time t=0 you have an atom of carbon-14, at a later time t>0 you will have a superposition of carbon-14 and nitrogen-14 (with some extra stuff). The half-life is the value of t for which the two coefficients will be equal in absolute value.

Uncertainty in the mind and uncertainty in the territory are related, but they’re not the same thing, and calling them both “uncertainty” is misleading. If indeterminism is true, there is an upper limit to how certain someone can reliably be about the future, but someone further in the future can know it with perfect certainty and reliability.

If I ask if the billionth digit of pi is even or odd, most people would give even odds to those two things. But it’s something that you’d give even odds to on a bet, even in a deterministic universe.

If I flip a coin and it lands on heads, you’d be a fool to bet otherwise. It doesn’t matter if the universe is nondeterministic and you can prove that, given all the knowledge of the universe before the coin was flipped, it would be exactly equally likely to land on heads or tails. You know it landed on heads. It’s 100% certain.

Yes, future is uncertain but past is already fixed and certain. So? We are not talking about probabilities of something happening in the past. The topic of the discussion is how temperature (and/or probabilities) are “in the mind” and what does that mean.

The past is certain but the future is not. But the only difference between the two is when you are in relation to them. It’s not as if certain time periods are inherently past or future.

An example of temperature being in the mind that’s theoretically possible to set up but you’d never manage in practice is Maxwell’s demon. If you already know where all of the particles of gas are and how they’re bouncing, you could make it so all the fast ones end up in one chamber and all the slow ones end up in the other. Or you can just get all of the molecules into the same chamber. You can do this with an arbitrarily small amount of energy.

I think his “in the mind” is correct in his context, because in the model of entropy he is discussing, temperature_entropy is dependent on entropy, which is dependent on your *knowledge* of the states of the system.

I’ll repeat what I said earlier in the context of the discussion of different theories of time.

New physics didn’t make old ideas useless. Temperature_kineticenergy is probably more relevant in most situations.

The OP makes his mistake by identifying temperature_entropy with temperature_kineticenergy.

I don’t see the issue in saying [you don’t know what temperature really is] to someone working with the definition [T = average kinetic energy]. One definition of temperature is always true. The other is only true for idealized objects.

Nobody knows what anything really is. We have more or less accurate models.

What do you mean by “true”? They both can be expressed for any object. They are both equal for idealized objects.

Only one of them actually corresponds with temperature for all objects. They are both equal for one subclass of idealized objects, in which case the “average kinetic energy” definition *follows from* the entropic definition, not the other way around. All I’m saying is that it’s worth emphasizing that one definition is strictly more general than the other.

Average kinetic energy always corresponds to average kinetic energy, and the amount of energy it takes to create a marginal amount of entropy always corresponds to the amount of energy it takes to create a marginal amount of entropy. Each definition corresponds perfectly to itself all of the time, and applies to the other in the case of idealized objects. How is one more general?

Two systems with the same “average kinetic energy” are not necessarily in equilibrium. Sometimes energy flows from a system with lower average kinetic energy to a system with higher average kinetic energy (e.g. real gases with different degrees of freedom). Additionally, “average kinetic energy” is not applicable at all to some systems, e.g. an Ising magnet.
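To make the degrees-of-freedom point concrete, here is a small sketch (units and numbers are mine, purely illustrative) using equipartition, which assigns (1/2)kT to each quadratic kinetic degree of freedom:

```python
k = 1.0  # Boltzmann constant (natural units)

# Equipartition: each quadratic kinetic degree of freedom carries (1/2) k T.
def avg_ke(T, dof):
    return 0.5 * dof * k * T

T_mono, T_di = 1.2, 1.0      # the monatomic gas is hotter
ke_mono = avg_ke(T_mono, 3)  # 3 translational DOF -> 1.8 k
ke_di = avg_ke(T_di, 5)      # 3 translational + 2 rotational DOF -> 2.5 k

# Heat flows from the hotter monatomic gas to the cooler diatomic gas,
# i.e. from LOWER average kinetic energy per molecule to HIGHER.
print(f"mono: T={T_mono}, <KE>={ke_mono}")
print(f"diatomic: T={T_di}, <KE>={ke_di}")
```

So temperature, not average kinetic energy, is what determines the direction of heat flow.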

I just mean as definitions of temperature. There’s temperature(from kinetic energy) and temperature(from entropy). Temperature(from entropy) is a fundamental definition of temperature. Temperature(from kinetic energy) only tells you the actual temperature in certain circumstances.

Why is one definition more fundamental than another? Why is only one definition “actual”?

Because one is true in all circumstances and the other isn’t? What are you actually objecting to? That physical theories can be more fundamental than each other?

I admit that some definitions can be better than others. A whale lives underwater, but that’s about the only thing it has in common with a fish, and it has everything else in common with a mammal. You could still make a word to mean “animal that lives underwater”. There are cases where where it lives is so important that that alone is sufficient to make a word for it. If you met someone who used the word “fish” to mean “animal that lives underwater”, and used it in contexts where it was clear what it meant (like among other people who also used it that way), you might be able to convince them to change their definition, but you’d need a better argument than “my definition is always true, whereas yours is only true in the special case that the fish is not a mammal”.

The distinction here goes deeper than calling a whale a fish (I do agree with the content of the linked essay).

If a layperson asks me what temperature is, I’ll say something like, “It has to do with how energetic something is” or even “something’s tendency to burn you”. But I would never say “It’s the average kinetic energy of the translational degrees of freedom of the system” because they don’t know what most of those words mean. That latter definition is almost always used in the context of, essentially, undergraduate problem sets as a convenient fiction for approximating the real temperature of monatomic ideal gases—which, again, is usually a stepping stone to the thermodynamic definition of temperature as a partial derivative of entropy.

Alternatively, we could just have temperature(lay person) and temperature(precise). I will always insist on temperature(precise) being the entropic definition. And I have no problem with people choosing whatever definition they want for temperature(lay person) if it helps someone’s intuition along.

So, effectively there are two different things which go by the same name? Temperature_entropy is one measure (coming from the information-theoretic side) and temperature_kineticenergy is another measure (coming from, um, pre-Hamiltonian mechanics)?

That makes some sense, but then I have a question. If you take an ice cube out of the freezer and put it on a kitchen counter, will it melt if there is no one to watch it? In other words, how does the “temperature is in the mind” approach deal with phase transitions?

They look like two different concepts to me.

I don’t know. I suppose that would depend on how much that mind knows about phase transitions.

That’s difficult to say. If you build a heat pump, you deal with entropy. If you radiate waste heat, you deal with kinetic energy. If you want to know how much waste heat you’re going to have, you deal with entropy. If you significantly change the temperature of something with a heat pump, then you have to deal with both for a large variety of temperatures.

Calling them Temperature_kineticenergy and Temperature_entropy is somewhat misleading, since both involve kinetic energy. Temperature_kineticenergy is average kinetic energy, and Temperature_entropy is the change in kinetic energy necessary to cause a marginal increase in entropy.
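To illustrate the “marginal increase in entropy” reading numerically, here’s a sketch of my own (not from the thread): compute S(U) for a monatomic ideal gas from the Sackur–Tetrode equation and recover T as 1/(dS/dU) by finite difference, in units chosen so k = m = h = 1:

```python
import math

k, m, h = 1.0, 1.0, 1.0
N, V = 1000.0, 1000.0  # particle count and volume (illustrative values)

def entropy(U):
    """Sackur-Tetrode entropy of a monatomic ideal gas."""
    return N * k * (math.log((V / N) * (4 * math.pi * m * U / (3 * N * h**2)) ** 1.5) + 2.5)

U, dU = 5000.0, 1e-3
T_entropic = dU / (entropy(U + dU) - entropy(U))  # T = (dS/dU)^-1
T_kinetic = 2 * U / (3 * N * k)                   # from U = (3/2) N k T

print(T_entropic, T_kinetic)  # the two agree (~3.333) for this ideal case
```

For this idealized gas the two definitions coincide, which is exactly the “subclass of idealized objects” point made above.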

Also, if you escape your underscores with backslashes, you won’t get the italics.

Is that because you didn’t read the rest of the post?

“Temperature is in the mind” doesn’t mean that you can make a cup of water boil just by wishing hard enough. It means that whether or not you should expect a cup of water to boil depends on what you know about it.

(It also doesn’t mean that whether an ice cube melts depends on whether anyone’s watching. The ice cube does whatever the ice cube does in accordance with its initial conditions and the laws of mechanics.)

So now that you’ve told me what it does NOT mean, perhaps you can clarify what it DOES mean? I still don’t understand.

In particular, the phrase “in the mind” implies that temperature requires a mind and would not exist if there were no minds around. Given that we are talking about classical systems, this seems an unusual position to take.

Another implication of “in the mind” is that different minds would see temperature differently. In fact, if you look into the original EY post, it explicitly says

And that makes me curious about phase changes. Can I freeze water into ice by knowing more about it? Note: not by *doing* things like separating molecules by energy and ending up with ice and electricity, but purely by *knowing*?

If I plunge my hand into boiling water, I will get scalded. Will I still get scalded if I know the position and momentum of every particle involved? If so, what causes it? If not, where does this stop—is everything in the mind?

ETA: I should have reread the discussion first, because there has been a substantial amount about this very question. However, I’m not sure it has come to a conclusion that resolves the question. Also, no-one has taken on Shalizi’s conundrum that someone cited, that Bayesian reasoners should see entropy decrease with time.

ETA2: One response would be that the detailed knowledge allows one to predict the same injury, by predicting the detailed properties of all of the particles through time. But I find this unsatisfying, because the easiest way to get that prediction is to start by throwing away almost all of the information you started with. Find the temperature you would attribute to this microstate if you didn’t know the microstate, and make the prediction you would have made knowing only the temperature. This will almost invariably give the right answer: it will be right as often as you would actually get scalded from boiling water. If the simplest way to make some objective prediction is in terms of a supposedly subjective quantity that is not being experienced by any actual subject, just how subjective is that quantity?

ETA 3: Consider the configuration space of the whole system, a manifold in some gigantic number of dimensions, somewhat more than Avogadro’s number. I am guessing that with respect to any sensible measure on that manifold, almost all of it is in states whose temporal evolution almost exactly satisfies equipartition. Equipartition gives you an objective definition of temperature.

Equipartition is what you will see virtually all of the time, when you do not know the microstate. But it is also what you will see when you do know the microstate, for virtually every microstate. The only way to come upon a non-equipartitioned pan of hot water is by specifically preparing it in such a state.
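One way to check the “virtually every microstate” claim numerically (a toy sketch of my own, nothing from the thread): sample a momentum microstate uniformly from the constant-energy shell and see that the kinetic energy is shared almost exactly evenly between any two halves of the coordinates:

```python
import math
import random

# Sample uniformly on the sphere sum(p_i^2) = 2E by drawing Gaussians and
# normalizing, then compare the kinetic energy held by each half of the
# coordinates. Units with particle mass m = 1.
random.seed(1)
n = 100_000  # momentum coordinates
p = [random.gauss(0, 1) for _ in range(n)]
E = 1.0      # total kinetic energy
norm = math.sqrt(2 * E / sum(x * x for x in p))
p = [norm * x for x in p]  # now sum(p_i^2)/2 == E

first_half = sum(x * x for x in p[: n // 2]) / 2
second_half = sum(x * x for x in p[n // 2 :]) / 2
print(first_half, second_half)  # each close to E/2 = 0.5
```

The deviations shrink as n grows, which is the sense in which almost all microstates look equipartitioned.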

But what is a sensible measure, if we do not deliberately define it in such a way as to make the above true? You could invent a measure that gave most of its mass to the non-equipartitioned cases. But to do that you would have to already know that you wanted to do that, in order to devote the mass to them. I think there’s a connection here to the matter of (alleged) “Bayesian brittleness”, and to the game of “follow the improbability”. But I do not quite see a resolution of this point yet. Entsophy/Joseph Wilson seemed to be working towards this on his blog last year, until he had some sort of meltdown and vanished from the net. I had intended to ask him how he would define measures on continuous manifolds (he had up to then only considered the discrete case and promised he would get to continuous ones), but I never did.

Assuming that you plunge your hand into the water at a random point in time, yes you will get scalded with probability ~1. This means that the water is “hot” in the same sense that the lottery is “fair” even if you know what the winning numbers will be—if you don’t use that winning knowledge and instead just pick a series of random numbers, as you would if you *didn’t* know the winning numbers, then of course you will still lose. I suppose if you are willing to call such a lottery “fair”, then by that same criterion, the water *is* hot. However, if you use this criterion, I suspect a large number of people would disagree with you on what exactly it means for a lottery to be “fair”. If, on the other hand, you would call a lottery in which you know the winning numbers “unfair”, you should be equally willing to call water about which you know everything “cold”.

Well, if I know the winning numbers but Alice doesn’t, the lottery is “fair” for Alice. If I know everything about that cup of water, but Alice doesn’t, is the water at zero Kelvin for me but still hot for Alice?

And will we both predict the same result when someone puts their hand in it?

Probably yes, but then I will have to say things like “Be careful about dipping your finger into that zero-Kelvin block of ice, it will scald you” X-)

It won’t be ice. Ice has a regular crystal structure, and if you know the microstate you know that the water molecules aren’t in that structure.

So then temperature has nothing to do with phase changes?

Temperature in the thermodynamic sense (which is the same as the information-theoretic sense if you have only ordinary macroscopic information) is the same as average energy per molecule, which has a lot to do with phase changes for the obvious reason.

In exotic cases where the information-theoretic and thermodynamic temperatures diverge, thermodynamic temperature still tells you about phase changes but information-theoretic temperature doesn’t. (The thermodynamic temperature is still useful in these cases; I hope no one is claiming otherwise.)

You probably know this, but average energy per molecule is not temperature at low temperatures. Quantum kicks in and that definition fails. dS/dE never lets you down.
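A concrete illustration of equipartition failing while dS/dE survives (my own sketch, using the standard Einstein-solid toy model, in units where k = ħω = 1):

```python
import math

# Einstein solid: N oscillators sharing q energy quanta. Multiplicity is
# Omega = C(q + N - 1, q), entropy S = ln(Omega), energy E = q.
# Temperature comes from T = dE/dS, here as a finite difference.
N = 50

def S(q):
    return math.log(math.comb(q + N - 1, q))

def T(q):
    return 1.0 / (S(q + 1) - S(q))  # dE = 1 quantum per step

for q in (5, 500):
    print(f"q={q}: E/N = {q / N:.3f}, kT = {T(q):.3f}")
```

At high energy (q = 500) the energy per oscillator tracks kT, as classical equipartition predicts; at low energy (q = 5) E/N is far below kT, yet the entropic temperature is still perfectly well defined.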

Whoops! Thanks for the correction.

Aha, thanks. Is information-theoretic temperature observer-specific?

In the sense I have in mind, yes.

I am somewhat amused that you linked to the same post on which we are currently commenting. Was that intentional?

Actually, no! There have been kinda-parallel discussions of entropy, information, probability, etc., here and in the Open Thread, and I hadn’t been paying much attention to which one this was.

Anyway, same post or no, it’s as good a place as any to point someone to for a clarification of what notion of temperature I had in mind.

In the lottery, there is something I can do with foreknowledge of the numbers: bet on them. And with perfect knowledge of the microstate I can play Maxwell’s demon to separate hot from cold. But still, I can predict from the microstate all of the phenomena of thermodynamics, and assign temperatures to all microstates that are close to equipartition (which I am guessing to be almost all of them). These temperatures will be the same as the temperatures assigned by someone ignorant of the microstate. This assignation of temperature is independent of the observer’s knowledge of the microstate.

There is a peculiar consequence of this, pointed out by Cosma Shalizi. Suppose we have a deterministic physical system S, and we observe this system carefully over time. We are steadily gaining information about its microstates, and therefore by this definition, its entropy should be decreasing.

You might say, “the system isn’t closed, because it is being observed.” But consider the system “S plus the observer.” Saying that entropy is nondecreasing over time seems to require that the observer is in doubt about its own microstates. What does that mean?