# 5 Physics Problems

Muireall and DaemonicSigil trade physics problems. Answers and discussion of the answers have been spoilered so you can try the problems yourself. Please also use spoiler formatting (type “>!”) in the comments.

DaemonicSigil

### Smeared Out Sun

Okay, so the first problem is from Thinking Physics. (I promise I’ll have some original problems also. And there’s a reason I’ve chosen this particular one.)

It’s the problem of the Smeared out Sun (caution! link contains spoilers in the upside-down text).

The problem goes as follows: The sun is far enough away that we could replace it with a disc of equal radius at the same temperature (and with the same frequency-dependent emissivity), and so long as the plane of the disc was facing the Earth, there would be little difference in the way it was heating the Earth. While scientists would certainly be able to tell what had happened, there would be little effect on everyday life. (Assume no change to the gravitational field in the solar system.)

Now, suppose that after turning into a disc, the sun is spread into a sphere of radius 1AU surrounding Earth. We’d like to keep the spectrum exactly the same, so we’ll imagine breaking the disc into many tiny pieces, each perhaps the size of a dime, and spreading these pieces out evenly across the 1AU sphere. Between these sun-dimes is empty space.

The goal of this exercise is to keep the incoming radiation as similar as possible to that which is given to us by the sun. The spectrum is the same, the total energy delivered is the same, the only difference is that it now comes in from all directions. The question is: What happens to the average temperature of the Earth after this has happened: Does it heat up, cool down, or stay the same?

Muireall

I think this question is basically asking about the convexity of the relationship between total radiated power and temperature. It’s (some law with a name that I forget), which is strictly convex, so for the Earth to be in power balance again, the average temperature needs to be hotter than when there was a wider spread of temperatures. (If the Earth had a cold side at absolute zero and a hot side at $T$, with an average temperature of $T/2$ and average radiated power like $\sigma T^4 / 2$, then with the Earth at a single temperature you’d need it to be $T / 2^{1/4} \approx 0.84\,T$, which is hotter.)
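The convexity argument is easy to check numerically; a minimal sketch (temperatures in arbitrary units, with the radiation constant dropped since only ratios matter):

```python
# A minimal numeric check of the convexity argument, in arbitrary units,
# using the T**4 radiation scaling (the constant prefactor cancels out).
T_hot = 300.0                          # hot-side temperature
avg_power = (T_hot**4 + 0.0**4) / 2    # half the surface at T_hot, half at 0
T_uniform = avg_power ** 0.25          # single temperature with the same output
avg_temp_split = (T_hot + 0.0) / 2     # average temperature of the split case
print(T_uniform, avg_temp_split)       # ~252.3 vs 150.0: uniform case is hotter
```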

Muireall

That should be the main effect. The Earth sees the same amount of hot vs cold sky, so if we ignore how the Earth equilibrates internally, I think there’s no change from moving pieces of the Sun disc around.

DaemonicSigil

Yes, exactly, the Earth gets hotter on average after the sun is spread out over the sky.

The name of the radiation law is the Stefan-Boltzmann Law, in case a reader would like to look it up. As things get hotter, the amount they radiate increases more than you’d expect by just extrapolating linearly. So things that are hot in some places and cold in others radiate more than you’d expect from looking at the average temperature.

Interestingly, Epstein’s answer in Thinking Physics is that the average temperature of the Earth stays the same, which I think is wrong. Also, in his version the sun becomes cooler and cooler as it spreads out, rather than breaking into pieces. We can still model it as a blackbody, so this shouldn’t change the way it absorbs radiation, but then greenhouse-type effects might become important. I didn’t want to have to think about that, so I just broke the sun into pieces instead.

### Measuring Noise and Measurement Noise

Muireall

I agree that your version is cleaner, and I’m not really sure what Epstein was getting at—I don’t really have any conflicting intuitions if he’s treating the Earth as at a single temperature to begin with. I do think there’s an interesting line of questions here that leads to something like [redacted for now in case we end up going there], which tends to strike people as a bit paradoxical.

I’ll copy the “measuring noise and measurement noise” question from my shortform here, adding a little diagram in case that’s clearer:

You’re using an oscilloscope to measure the thermal noise voltage across a resistance $R$. Internally, the oscilloscope has a parallel input resistance $R_{\text{in}}$ and capacitance $C$, where the voltage on the capacitor is used to deflect electrons in a cathode ray tube to continuously draw a line on the screen proportional to the voltage over time.

The resistor and oscilloscope are at the same temperature. Is it possible to determine $R$ from the amplitude of the fluctuating voltage shown on the oscilloscope?

1. Yes, if $R \gg R_{\text{in}}$

2. Yes, if $R \approx R_{\text{in}}$

3. Yes, if $R \ll R_{\text{in}}$

4. No

DaemonicSigil

I’m first going to try and figure this out without looking anything up.

The first simplification to make is that we can lump the resistors $R$ and $R_{\text{in}}$ together, into a single resistor of resistance:

$$\frac{R \, R_{\text{in}}}{R + R_{\text{in}}}$$

The fact that one of the resistors happens to be inside the scope doesn’t matter.

If I’m remembering the way Johnson noise works correctly, a resistor with its two ends connected together will have current noise in it. If we integrate the current noise to get the net quantity of charge that has flowed around the loop as a function of time, the result is a Brownian random walk. This situation is almost like that, except the two ends of the circuit are connected by a capacitor rather than a wire. Now the integrated current equals the gorge of the capacitor. So the resistors still cause a Brownian motion in the gorge, but there is also a tendency for the capacitor to disgorge over time. If we add this drift into the system, then I believe the result is an Ornstein-Uhlenbeck process? Such a process has an equilibrium standard deviation, so it makes sense to say that we’re measuring the amplitude of the fluctuating voltage. The cathode ray tube is pretty much directly recording the gorge as a function of time, and then we just take the variance.

The Johnson noise should be stronger for a smaller resistor (and at higher temperatures, but everything’s at the same temperature). And stronger Johnson noise should result in a greater variance in the voltage recorded on the screen. But as a counterpoint, having a smaller resistor in an RC circuit means it disgorges more quickly, and this would tend to result in less variance in the voltage recorded on the screen. We might suspect that these two effects cancel out.

Variance in gorge due only to diffusion (i.e. Johnson noise) goes as $D t / R$ for some constant $D$. Decrease in variance due to disgorging goes as a multiplicative factor of $e^{-2t/RC}$. Let $V$ denote the variance, make the time infinitesimal, and we get:

$$dV = \frac{D}{R}\,dt - \frac{2V}{RC}\,dt$$

Setting $dV = 0$, we get $V = \frac{DC}{2}$. The $R$ does indeed cancel, and there is no way to measure it merely by inspecting $V$. So the correct answer to the question is 4.
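This balance can be sanity-checked in simulation; a minimal sketch of the charge dynamics described above (arbitrary units, with $D$, $C$, and the step size as assumed constants):

```python
import numpy as np

# Evolve the capacitor charge Q with diffusion rate D/R (Johnson noise) and
# relaxation rate 1/(RC) (disgorging), then check that the equilibrium
# variance of Q comes out near D*C/2 regardless of R.
def equilibrium_variance(R, C=1.0, D=1.0, dt=0.01, steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    noise = np.sqrt(D / R * dt) * rng.standard_normal(steps)
    samples = np.empty(steps)
    Q = 0.0
    for i in range(steps):
        Q += -Q / (R * C) * dt + noise[i]   # Euler-Maruyama step of the OU process
        samples[i] = Q
    return samples[steps // 2:].var()       # discard the transient half

v1 = equilibrium_variance(R=0.5)
v2 = equilibrium_variance(R=2.0)
print(v1, v2)   # both near D*C/2 = 0.5, independent of R
```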

If we can inspect the waveform more closely, then it should be possible to infer $R$ from how stretched out it is in the time dimension. The time constant $RC$ still depends on the resistance. So in principle, we should be able to measure the resistance in cases 2 and 3 if we’re allowed to inspect things about the function being drawn on the scope screen besides just its variance.

Muireall

That’s right. As long as you’re not allowed to look at time dependence, I believe one can even argue that no linear network between the oscilloscope input and the capacitor will make this work as an ohmmeter.

If you can calculate statistics involving time, as you say, you can learn the time constant and thus $R$.
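A minimal simulation sketch of that idea (same arbitrary-units toy model of the charge dynamics; the lag and constants are assumptions of the sketch, not of the original problem):

```python
import numpy as np

# The autocorrelation of the trace decays as exp(-lag/(R*C)), so fitting it
# recovers the time constant and hence R.
def simulate(R, C=1.0, D=1.0, dt=0.01, steps=100_000, seed=1):
    rng = np.random.default_rng(seed)
    noise = np.sqrt(D / R * dt) * rng.standard_normal(steps)
    trace = np.empty(steps)
    q = 0.0
    for i in range(steps):
        q += -q / (R * C) * dt + noise[i]   # same OU charge dynamics as in the text
        trace[i] = q
    return trace[steps // 2:]               # discard the transient

def estimate_R(trace, C=1.0, dt=0.01, lag=10):
    x = trace - trace.mean()
    r = np.dot(x[:-lag], x[lag:]) / np.dot(x, x)   # autocorrelation at the lag
    tau = -lag * dt / np.log(r)                    # from r = exp(-lag*dt/tau)
    return tau / C

est1 = estimate_R(simulate(R=0.5))   # near 0.5
est2 = estimate_R(simulate(R=2.0))   # near 2.0
print(est1, est2)
```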

DaemonicSigil

### Relativistic Rain

WARMUP: A classic problem is to consider an approximately spherical individual, e.g. Kirby, crossing the street in the rain. Kirby would like to reach the other side while being struck by as few raindrops as possible. Moving faster means more raindrops striking Kirby’s front, but moving slower means staying in the rain for longer. The rain is falling at about 10 m/s straight down. At what speed should Kirby travel to minimize the total amount of rain that hits him?

1. As slow as possible.

2. 10 m/s

3. 20 m/s

4. As fast as possible.

5. Speed makes no difference.

BOX VARIANT WARMUP: Now Kirby is carrying a box that looks as seen below. He can’t tilt the box, nor point its opening away from his direction of travel:

The goal is to choose a speed so as to minimize the amount of water collected in the bottom of the box. Now what is the optimal speed of travel?

1. As slow as possible.

2. 10 m/s

3. 20 m/s

4. As fast as possible.

5. Speed makes no difference.

RELATIVISTIC RAIN: Now for our main problem, Kirby must cross a beam of light shining perpendicular to his direction of travel. Rather than the amount of water, we now want to minimize the total energy of the light that hits Kirby / the inside of the box. Energy is measured in Kirby’s frame, of course. What is the optimal velocity to ensure minimal light energy hits the inside of the box?

1. As slow as possible.

2. ~0.7c

3. As fast as possible.

4. Speed makes no difference.

And for an unburdened Kirby: What is the optimal velocity to ensure minimal light energy hits a spherical Kirby?

1. As slow as possible.

2. ~0.7c

3. As fast as possible.

4. Speed makes no difference.

Muireall

For the warmup:

I’d freeze everything at the starting point and look at this as a geometric problem—what’s the volume of rain that would eventually hit Kirby? It’s Kirby’s cross section swept along the hypotenuse of a right triangle whose base is the horizontal distance Kirby needs to cross, and whose angle with the vertical is that of the relative velocity of Kirby and the rain. To minimize the volume, that angle should be as close to 90 degrees as possible, so Kirby should go 4. As fast as possible.

For the box variant:

Now the relevant cross section also depends on velocity.

Let’s call the horizontal distance across the street $d$, and call the angle the relative velocity makes with the vertical $\theta$, so that $\tan\theta = v / v_{\text{rain}}$, where $v$ is Kirby’s horizontal speed. So the volume is the cross section times $d / \sin\theta$. If we draw some triangles we find that the cross section is directly proportional to $\sin\theta$, so Kirby’s speed vanishes from the result. So the answer should be 5. Speed makes no difference.
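Both warmup answers can be checked numerically; a minimal sketch with an assumed street width and unit cross-sections:

```python
import numpy as np

# A fixed cross-section swept along d/sin(theta) favors speed, while a
# forward-facing opening with cross-section ~ sin(theta) makes speed irrelevant.
v_rain, d = 10.0, 20.0
speeds = [1.0, 5.0, 10.0, 50.0]
sphere_amounts, box_amounts = [], []
for v in speeds:
    theta = np.arctan2(v, v_rain)      # angle of relative velocity from vertical
    path = d / np.sin(theta)           # swept path length along the relative velocity
    sphere_amounts.append(1.0 * path)          # spherical Kirby, unit cross-section
    box_amounts.append(np.sin(theta) * path)   # forward-facing box opening
print(sphere_amounts)   # strictly decreasing: faster is better
print(box_amounts)      # constant d: speed makes no difference
```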

Relativistic rain:

I think now we need to (1) use a different velocity addition formula to find the relevant angle and (2) account for the blueshift in Kirby’s frame. I don’t remember how to do either of these.

Actually, for the version with the box, it seems like $v$ drops out of the volume regardless of how you add velocities. Then the difference would just be the blueshift, meaning that Kirby should go 1. As slow as possible. [Edit: Maybe the volume picture is actually not quite appropriate here, or at least unnecessarily confusing. But you still have a cancellation of velocities in the crossing time and the incident component in Kirby’s frame.]

For an unburdened Kirby, it seems like it has to be better to go faster, at least up to a point—the problem looks like the nonrelativistic version at low speeds. As Kirby approaches the speed of light, the angle approaches 45 degrees while the blueshift gets worse without bound (I think?). So it seems like there ought to be a crossover when going faster starts making things worse, which leaves only one answer among the choices.

I’m going to look some things up; hope that’s alright.

OK, now I think the relevant angle approaches 90 degrees—the total speed still needs to be $c$ in Kirby’s frame but the horizontal component is also just $v$, so the apparent vertical component does go to zero—but the blueshift is in fact $\gamma = 1/\sqrt{1 - v^2/c^2}$. The thing we’re trying to minimize will then scale like $\gamma / v$, which we can check is minimized at $v = c/\sqrt{2}$. So I’ll go with 2. ~0.7c.
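A quick numeric check of that minimization (units of $c = 1$):

```python
import numpy as np

# The quantity gamma/v = 1/(v*sqrt(1 - v**2)) should be minimized
# at v = 1/sqrt(2), i.e. about 0.707c.
v = np.linspace(0.01, 0.99, 9801)
cost = 1.0 / (v * np.sqrt(1.0 - v**2))
v_star = v[np.argmin(cost)]
print(v_star)   # ~0.7071
```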

DaemonicSigil

Yes, all of the above is correct!

My approach was basically the same: I transformed the 4-momentum of the light from $(E,\, 0,\, -E)$ in the reference frame to $(E\cosh\varphi,\, -E\sinh\varphi,\, -E)$ in Kirby’s frame, for rapidity $\varphi$. Therefore everything picks up a factor of $\cosh\varphi$ due to blueshift. And then different geometries mean different amounts of light are absorbed in the reference frame. The overall results are proportional to $\cosh\varphi$ for the box with its opening facing forward, $\coth\varphi$ for the box with its opening facing upward, and $\cosh\varphi \coth\varphi$ for a spherical Kirby (this last function has the other two as limits at low/high speeds).
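Reading the three results as $\cosh\varphi$ (forward-facing box), $\coth\varphi$ (upward-facing box), and $\cosh^2\varphi/\sinh\varphi$ for the sphere — my reconstruction of the stripped formulas — the minimum and the claimed low/high-speed limits can be sanity-checked numerically ($c = 1$):

```python
import numpy as np

# The sphere curve should be minimized where v = tanh(phi) = 1/sqrt(2),
# and should approach coth(phi) at low speed and cosh(phi) at high speed.
phi = np.linspace(0.05, 5.0, 100_000)
sphere = np.cosh(phi) ** 2 / np.sinh(phi)
upward = np.cosh(phi) / np.sinh(phi)     # coth(phi)
forward = np.cosh(phi)
phi_star = phi[np.argmin(sphere)]
print(np.tanh(phi_star))                 # optimal speed ~0.707c
print(sphere[0] / upward[0])             # ~1: coth limit at low speed
print(sphere[-1] / forward[-1])          # ~1: cosh limit at high speed
```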

### Martian Dome

Muireall

Here’s one to keep us going. Mars has an average temperature around −60 °C (with significant day-night swings). I’m building a dome on Mars to fill with atmosphere, and I want it to be warmer than that inside. I’m considering some options for the dome. Can you order the options from coolest to hottest?

A. A perfect conductor, reflecting all electromagnetic radiation.
B. A special conducting oxide, which reflects infrared and ultraviolet light but is mostly transparent at visible wavelengths.
C. Transparent plastic that just keeps the atmosphere in place.
D. Green paint, reflecting green (and shorter-wavelength) light but absorbing red, infrared and longer wavelengths.
E. Black paint, acting as a perfect absorber at all wavelengths.

Does the ranking change if we scatter the Sun into dime-sized pieces across the sky as in your question?

What about a “smeared-out sun” that takes up the entire sphere of sky but is cooler to keep the incoming power constant as in Epstein’s version?

DaemonicSigil

Okay, so let’s assume no significant extra heat source inside the dome (e.g. a nuclear reactor, or machinery running on power generated by solar panels located outside the dome). Also, let’s assume that all the materials have the same thermal conductivity, so the conductive heat loss through each is the same for an equal temperature difference.

Figuring out the temperature inside of dome A almost feels a little bit like dividing 0 by 0. Radiation can neither enter nor leave, so the temperature is dominated by the non-radiative effects that I just assumed away in the previous paragraph. I can probably get away with considering A to behave partly like a perfect conductor and partly like C. i.e. it’s a perfect conductor with some small holes in it.

It seems like the problem would be easiest to do in reverse order. So starting with the smeared and cooled sun:

A = B = C = D = E

Explanation: everything is in thermal equilibrium.

Next, say we shatter the sun into pieces. Now the sun is radiating mostly in the visible part of the spectrum, while the inside of the dome is mostly radiating in the infra-red. So the best dome would let high-frequency light through, while reflecting infra-red. This is B, the special conducting oxide. D is the opposite of this, and nearly the worst option. C vs E is the interesting question here. Firstly, dome C is kind of a no-op dome: we can put it around any of the other domes and leave them unchanged. Since there’s no day-night cycle, the inside of dome E is at the same temperature as dome E itself. So the question is whether a flat patch of Martian soil or a fully black-painted dome should end up at a higher temperature.

First we should make sure that geometry doesn’t matter. For the spread-out sun, all parts of the sky are equally good and bad in terms of emitting and absorbing energy. Facing more of the sky means faster equilibration, but shouldn’t affect the equilibrium temperature.

Similarly, uniformly changing the reflectivity shouldn’t affect the equilibrium temperature, just the rate. So we kind of need to know the way in which Martian soil reflectivity changes as a function of frequency. That sounds like a huge pain to look up, so I’m just going to take a guess based on personal experience with black-painted things in the sun and say that the black-painted dome gets hotter. (Yes, I know some of that personal experience is merely due to a faster rate of heating. But still.) So my proposed ranking for this problem is:

D < A = C < E < B

Where probably C and E are fairly close.

Finally we introduce the normal sun and the day/night cycle. This splits C and A. A doesn’t change temperature very fast with the day-night cycle, because its rate of heat transfer is low. C oscillates more in temperature, so by the $T^4$ law its average temperature is lower. If we now look at the planet’s surface inside the dome for case E, it oscillates less than C, because it has the blackbody shield layer. But it probably still oscillates more than A. So my overall guess would be:

D < C < E < A < B

I probably got at least something wrong in all that...

Muireall

Yeah, the interesting question is C vs E. I agree with everything else.

Mars has an albedo of around 0.25, meaning roughly that it reflects about 25% of incident power from the Sun. It turns out that it tends to reflect red and infrared more than shorter-wavelength light. You’re right that this is kind of a pain to look up. Maybe it’s a reasonable guess given the color of the planet, but I was a little surprised that it’s about as reflective in infrared as in red. [Edit: Just realized the data I was looking at only goes out to 3 microns. I’d guess then that it’s still a pretty good blackbody further in the infrared.]

One idea that’s implicit in A = B = C = D = E for thermal equilibrium is that there’s a symmetry between the absorptivity and emissivity of a surface. If black heats up faster because it absorbs more at a certain wavelength, then it will also emit more efficiently at that wavelength. Otherwise it would somehow tend to get hotter in “equilibrium”. I would have liked to phrase the problem so that idea was explicitly necessary—I’m not sure it is actually necessary here, but it’s what I had in mind.

In particular, the black paint will tend to both absorb and emit more light than Martian soil would, but the relative enhancement is greater over Mars’s blackbody emission spectrum (infrared, ~15 microns) than at the Sun’s (mostly visible). [See edit above, this might not actually be true. Reflectivity is about 25% averaged over the Sun’s spectrum (and averaged over the planet, which might matter if it comes out this close), and down to 20% at 3 microns and falling further into the infrared. Darn!] So there’s an overall tendency for the blackbody dome to provide cooling. I believe this should be the case for both the ordinary sun and scattered suns, but I admit I haven’t worked out the problem in more detail than this.
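The band-dependent absorptivity/emissivity effect can be illustrated with a toy two-band radiative balance. All the numbers below are assumed round values (and geometry factors are ignored), and the mapping onto the dome options is only loose:

```python
# A surface absorbs sunlight with coefficient a_vis over the solar band and
# emits its own thermal radiation with coefficient e_ir over the infrared,
# so in equilibrium: a_vis * S = e_ir * SIGMA * T**4.
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W/m^2/K^4
S = 590.0            # rough solar flux at Mars, W/m^2 (assumed round number)

def equilibrium_T(a_vis, e_ir):
    return (a_vis * S / (e_ir * SIGMA)) ** 0.25

black = equilibrium_T(1.0, 1.0)       # absorbs and emits at all wavelengths
greenish = equilibrium_T(0.5, 1.0)    # rejects much of the solar band (D-like)
selective = equilibrium_T(1.0, 0.1)   # absorbs sunlight, emits IR poorly (B-like)
print(black, greenish, selective)     # selective runs hottest, greenish coolest
```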

DaemonicSigil

### Thermodynamic Computer

Consider a simple thermodynamic computer. It has $n$ states it can be in, and you can control the energies of the states and the heights of the energy barriers between them. We’ll suppose that the states are organized in a 2D grid.

How might one build such a “computer”? Here is one way: The machine is made of many small chambers, with channels connecting them. The chambers are filled with air, and a tiny charged particle is placed into one of the chambers, where it’s buffeted about. It will end up in a different chamber after some time.

The system is engineered such that the particle spends much more time in chambers than in the channels between them, and which channel it leaves through is statistically independent of which channel it entered through. Voltages in each of the chambers and in the channels can be individually controlled with the aid of metal meshes, and this allows the energy of each state and the height of the energy barriers between states to be tuned. The material the chambers are made of is transparent, so from time to time we can shine a light through the computer to see where the particle is.

Our goal will be to compute how some system coupled to a heat bath transitions between states. Specifically, we want to know: If it starts in state $(x_0, y_0)$, what is the expected value of some slowly varying function $f(x, y)$ after time $t$? By repeatedly placing the particle at the starting position and then shining light through the grid at time $t$ later to see where it ended up, we can use the computer to get a Monte-Carlo estimate of the expected value of $f$.
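This Monte-Carlo use of the machine can be sketched as an ordinary continuous-time random walk on a small grid of “chambers”. The grid size, energies, barrier height, time, and $f$ below are all illustrative placeholders, not part of the original setup:

```python
import numpy as np

# Chambers escape via Arrhenius rates exp(-(barrier - E)) per channel;
# with a uniform barrier, the exit channel is uniform among neighbours.
rng = np.random.default_rng(0)
N = 5
E = rng.uniform(0.0, 1.0, size=(N, N))       # chamber energies (k_B T = 1)

def neighbours(x, y):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < N and 0 <= y + dy < N:
            yield x + dx, y + dy

def run(start, t, barrier=1.5):
    x, y = start
    clock = 0.0
    while True:
        nbrs = list(neighbours(x, y))
        rate = np.exp(-(barrier - E[x, y]))   # escape rate per channel
        clock += rng.exponential(1.0 / (rate * len(nbrs)))
        if clock > t:
            return x, y                       # where the light flash finds the particle
        x, y = nbrs[rng.integers(len(nbrs))]  # equal barriers: exit channel uniform

f = lambda x, y: x + y                        # some slowly varying function
samples = [f(*run((2, 2), t=3.0)) for _ in range(2000)]
print(np.mean(samples))                       # Monte-Carlo estimate of E[f]
```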

Say we come up with two simple systems coupled to heat baths as test cases:

1. Two independent Ornstein-Uhlenbeck processes; $x$ corresponds to one process, and $y$ corresponds to the other.

2. Langevin dynamics on a simple harmonic oscillator; $x$ corresponds to position and $y$ corresponds to momentum.

Here is the question:

What is the most serious problem with this “thermodynamic computer” idea for test case 1? What is the most serious problem for test case 2?

Ignore practical concerns like cost and whether or not we can find a transparent material that the charged particle won’t stick to.

Muireall

For 2 I want to say it’s that you can’t simulate a potential that drives the particle cyclically. You’d want there to be some overall tendency to go around the origin in a circle. Maybe for positive momentum, you could have smaller barriers on the right than on the left to get a tendency to increase the position coordinate, but if you just keep decreasing barriers around in a circle eventually you come back to where you started. Then you’ve got a problem going across a barrier to the right because you already set it high to keep the particle from going left when it’s on the other side.

For 1 maybe it’s that you can’t avoid introducing anti-correlation between X and Y jumps? On small enough intervals the probability of getting both will be too low. I guess that’s a problem with discretization even in 1D, though, and you do say the particle spends most of its time in the chambers, meaning we’re not in that limit.

So maybe it’s more of an issue like in case 2. In general, to favor going towards the origin, you need the barrier going towards the origin to be smaller than the barrier going away. But if you wanted to set it up so that you have the correct probability of moving out of a coordinate “bin” in a fixed small timestep, then barriers-toward-the-origin would need to be smaller in absolute terms the further you get away. In 1D you can work with this by saying the probabilities far from the origin describe shorter simulated timesteps and weight samples accordingly when calculating your expected value. I guess you can still do that in 2D, it just gets increasingly annoying to sample enough “simulated time”.

I’m not sure that’s right, though, because you can control both the energy of each state and the height of the barriers. So you’d set the… I don’t know, this still seems pretty tricky to me, although I suspect there’s at least a clearer way to look at it. Seems like you run into problems far enough from the origin no matter what.

DaemonicSigil

Exactly correct for 2. If you look at it in phase space, a simple harmonic oscillator does not obey detailed balance, because there is a net circular flow, and cyclic flows are exactly what detailed balance rules out. Which is kind of interesting, since we expect real physical systems to obey detailed balance, and plenty of real physical systems are well modelled by a simple harmonic oscillator. The paradox goes away if we consider energy eigenstates instead of just chopping up phase space.
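The “cyclic flow vs. detailed balance” point can be made concrete on a small ring of states; a sketch with made-up energies and barriers ($k_B T = 1$):

```python
import numpy as np

# With Arrhenius rates built from state energies and shared barriers, the
# stationary net current around a ring is zero (detailed balance); with
# directionally biased rates, a nonzero cycle current appears.
def stationary_current(k_cw, k_ccw):
    n = 4
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] += k_cw[i]       # rate i -> i+1
        W[(i - 1) % n, i] += k_ccw[i]      # rate i -> i-1
        W[i, i] = -(k_cw[i] + k_ccw[i])
    vals, vecs = np.linalg.eig(W)          # stationary state: null vector of W
    p = np.real(vecs[:, np.argmin(np.abs(vals))])
    p /= p.sum()
    return p[0] * k_cw[0] - p[1] * k_ccw[1]   # net current on the 0 -> 1 edge

E = np.array([0.0, 1.0, 0.5, 1.5])         # state energies
B = np.array([2.0, 2.5, 2.2, 2.8])         # barrier between state i and i+1
k_cw = np.exp(-(B - E))                    # i -> i+1 over barrier B[i]
k_ccw = np.exp(-(np.roll(B, 1) - E))       # i -> i-1 over barrier B[i-1]
balanced = stationary_current(k_cw, k_ccw)
biased = stationary_current(np.ones(4), 0.5 * np.ones(4))
print(balanced, biased)                    # ~0 for detailed balance; nonzero when biased
```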

Looking back, I’m not super happy with how I phrased the problem, in particular for case 1. Feels a bit too much like asking to guess the “teacher’s password”. 2 seems better in that respect because there’s a huge obstacle that prevents the computer from working. Whereas the computer would mostly work fine for 1.

Anyway, the specific thing I had in mind was the following:

Is it possible to replace several connected states (of a system obeying detailed balance) with a single state, with this single state having an effective “energy” corresponding to the free energy of all the states it replaces?

The answer is yes if you only care about the equilibrium distribution. But if you also care about the transition rates, then it’s not possible in general. When the particle enters the cluster of states, exactly which state in the cluster it’s in is important for determining which edge it leaves through, or even how soon it’s likely to leave.
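A small numeric illustration of this point, with made-up energies and barriers ($k_B T = 1$): two 3-state chains whose states 1 and 2 have identical energies, so the {1, 2} cluster has the same total Boltzmann weight (same free energy) in both; only the barrier inside the cluster differs, yet the relaxation rates differ, so no single lumped state reproduces the dynamics of both:

```python
import numpy as np

# Nearest-neighbour Arrhenius rates on a chain 0-1-2; dp/dt = W @ p.
def chain(E, B):
    n = len(E)
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i + 1, i] = np.exp(-(B[i] - E[i]))       # i -> i+1 over barrier B[i]
        W[i, i + 1] = np.exp(-(B[i] - E[i + 1]))   # i+1 -> i over the same barrier
    np.fill_diagonal(W, -W.sum(axis=0))            # conserve probability
    return W

E = [0.0, 1.0, 1.0]
W_low = chain(E, B=[2.0, 1.5])     # low barrier inside the cluster
W_high = chain(E, B=[2.0, 6.0])    # high barrier inside the cluster
rates_low = np.sort(np.linalg.eigvals(W_low).real)
rates_high = np.sort(np.linalg.eigvals(W_high).real)
print(rates_low[1], rates_high[1])  # slowest nonzero relaxation rates differ a lot
```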

And chopping up the space into discrete boxes is a particular case of this. The computer can still do a good approximation, though, and discretization errors are really par for the course for computers. I mainly wanted to do this problem for the sake of case 2, and the question of substituting multiple states for one was just an interesting thing I noticed while thinking about it.

And yes, running out of computer once you get far enough from the origin is indeed a problem, though it becomes exponentially unlikely as one quadratically invests in computer hardware. As you point out, this is a problem for both cases.

Muireall

Ah, interesting, got it. (As long as things converge when you increase the resolution I tend not to worry. I sometimes use finite element methods in situations where that’s not necessarily the case, which might make for an interesting question. Though typically it takes some machinery to actually prove things, so maybe not.)