Molecular electromechanical switch
You’ve attached one end of a conductive molecule to an electrode. If the molecule bends by a certain distance d at the other end, it touches another electrode, closing an electrical circuit. (You also have a third electrode where you can apply a voltage to actuate the switch.)
You’re worried about the thermal bending motion of the molecule accidentally closing the circuit, causing an error. You calculate, using the Boltzmann distribution over the elastic potential energy in the molecule, that the probability of a thermal deformation of at least d is $10^{-9}$ (a single-tailed six-sigma deformation in a normal distribution where the expected potential energy is $k_B T/2$), but you don’t know how to use this information. You know that the bending motion has a natural frequency of 100 GHz with an energy decay timescale of 0.1 nanosecond, and that it behaves as an ideal harmonic oscillator in a thermal bath.
You’re considering integrating this switch into a 1 GHz processor. What is the probability p of an error in a 1 nanosecond clock cycle?
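As a quick sanity check on the quoted number, here is a minimal sketch (assuming scipy is available) of the correspondence between a single-tailed six-sigma deformation and a probability of about $10^{-9}$:

```python
# Sanity check: single-tailed probability of a >= 6-sigma Gaussian deformation,
# which the problem statement rounds to 1e-9.
from scipy.stats import norm

p_tail = norm.sf(6.0)  # survival function: P(X > 6) for a standard normal
print(f"P(deformation >= 6 sigma) = {p_tail:.2e}")  # ~9.9e-10, i.e. about 1e-9
```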
1. $p < 10^{-9}$ — the Boltzmann distribution is a long-time limit, so you have sub-Boltzmann probability in finite time.
2. $p = 10^{-9}$ — the probability is determined by the Boltzmann distribution.
3. $p \approx 10^{-8}$ — the 0.1 nanosecond damping timescale means, roughly, it gets 10 draws from the Boltzmann distribution.
4. $p \approx 10^{-7}$ — the 100 GHz natural frequency means it gets 100 tries to cause an error.
5. $p > 10^{-7}$ — the Boltzmann distribution is over long-time averages, so you expect larger deviations on short timescales that otherwise get averaged away.
I’m going to guess 3. Reasoning: I’m sure right away that 1 and 2 are wrong: if you leave the thing sitting for long enough, then obviously it’s eventually going to fail. So 2 is wrong and 1 is even wronger. I’m also pretty sure that 5 is wrong. Something like 5 is true for the velocity of a particle undergoing Brownian motion (or rather, the estimated velocity based on measuring displacement after a given time $\Delta t$), but I don’t think that’s a good model for this situation. For one thing, on small time-scales Brownian velocities don’t actually become infinite; instead we see that they’re caused by individual molecules bumping into the object, and all energies remain finite.
3 and 4 are both promising because they actually make use of the time-scales given in the problem. 4 seems wrong because, if we imagine the relaxation timescale were instead 1 second, then after we look at the position and velocity once, the system oscillates at that same amplitude for a very long time and doesn’t get any more tries to beat its previous score. The answer is 3 by elimination, and it also seems intuitive that the relaxation timescale is the one that counts how many tries you get (up to some constant factors).
This reasoning is basically right, but the answer ends up being 5 for a relatively mundane reason.
If the time-averaged potential energy is $k_B T/2$, so is the kinetic energy. Because damping is low, at some point in a cycle you’ll deterministically have the sum of the two in potential energy and nothing in kinetic energy. So you do have some variation getting averaged away.
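To spell that step out (a short derivation sketch, using the ideal-harmonic-oscillator assumption from the problem statement): the total energy sets the oscillation amplitude, so the amplitude inherits both the position and the momentum fluctuations,

$$E = \frac{p^2}{2m} + \frac{1}{2} m\omega^2 x^2 = \frac{1}{2} m\omega^2 A^2, \qquad \langle E \rangle = k_B T \;\Rightarrow\; \langle A^2 \rangle = \frac{2 k_B T}{m\omega^2} = 2\,\langle x^2 \rangle.$$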
More generally, while the relaxation timescale is the relevant timescale here, I also wanted to introduce an idea about very fast measurement events like the closing of the electrical circuit. If you have observables correlated on short timescales, then measurements faster than that won’t necessarily follow expectations from naive equilibrium thinking.
Good point; I had briefly thought of this when answering, and it was the reason I mentioned constant factors in my comment. However, on closer inspection:

- The “constant” factor is actually only nearly constant.
- It turns out to be bigger than 10.
Explanation:
$10^{-9}$ is about 6 sigma. To generalize, let’s say the threshold is $a$ sigma, where $a$ is some decently large number, so that the position-only Boltzmann distribution gives an extremely tiny probability of error.
So we have the following probability of error for the position-only Boltzmann distribution:
$$p_1 = \frac{1}{\sqrt{2\pi}} \int_a^\infty e^{-x^2/2}\, dx$$
Our toy model for this scenario is that, rather than just sampling position, we jointly sample position and momentum and then compute the amplitude. Equivalently, we sample position twice and add the two samples in quadrature to get the amplitude. This gives a probability of:
$$p_2 = \frac{1}{(\sqrt{2\pi})^2} \int_a^\infty \int_0^{2\pi} e^{-r^2/2}\, r\, d\phi\, dr = \int_a^\infty r\, e^{-r^2/2}\, dr = e^{-a^2/2}$$
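A minimal numerical sketch of this toy model (the $10^{-9}$-scale tail is too rare to Monte Carlo directly, so this checks the closed form at a smaller threshold, $a = 3$):

```python
# Toy model: jointly sample position and momentum (both standard normal in
# dimensionless units), form the amplitude r = sqrt(x^2 + p^2), and compare
# P(r > a) against the closed form exp(-a^2 / 2).
import numpy as np

rng = np.random.default_rng(0)
a = 3.0                      # threshold in sigma, small enough to Monte Carlo
n = 2_000_000
x = rng.standard_normal(n)   # position samples
p = rng.standard_normal(n)   # momentum samples
r = np.hypot(x, p)           # position and momentum added in quadrature

print("Monte Carlo P(r > a):   ", np.mean(r > a))       # ~1.1e-2
print("Closed form exp(-a^2/2):", np.exp(-a**2 / 2))     # ~1.11e-2
```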
Since we took $a$ to be decently large, we can approximate the integrand in our expression for $p_1$ with an exponential distribution (basically, we Taylor expand the exponent):
$$p_1 \approx \frac{1}{\sqrt{2\pi}}\, e^{-a^2/2} \int_a^\infty e^{-a(x-a)}\, dx = \frac{1}{\sqrt{2\pi}}\, e^{-a^2/2}\, \frac{1}{a}$$
Result: $p_2$ is larger than $p_1$ by a factor of $a\sqrt{2\pi}$. While the $\sqrt{2\pi}$ is constant, $a$ grows (albeit very slowly) as the probability of error shrinks. Hence “nearly constant”. For this problem, where $a = 6$, we get a factor of about 15, so a probability of about $1.5 \times 10^{-8}$ per try.
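Plugging in the numbers (a small check, assuming scipy for the exact Gaussian tail):

```python
# Compare the position-only tail p1 with the amplitude tail p2 at a = 6,
# and check that the ratio is roughly a * sqrt(2*pi) ~ 15.
import numpy as np
from scipy.stats import norm

a = 6.0
p1 = norm.sf(a)              # exact single-tailed Gaussian tail, ~9.9e-10
p2 = np.exp(-a**2 / 2)       # amplitude tail, ~1.5e-8
print("p1 =", p1)
print("p2 =", p2)
print("p2 / p1 =", p2 / p1)                        # ~15.4 (exact ratio)
print("a * sqrt(2*pi) =", a * np.sqrt(2 * np.pi))  # ~15.0 (approximation)
```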
Why is this worth thinking about? If we just sample at a single point in time, and consider only the position at that time, then we get the original $10^{-9}$ per try. This is wrong because momentum gets to oscillate and turn into displacement, as you’ve already pointed out. On the other hand, if we remember the equipartition theorem, then we might reason that since the variance of the amplitude is twice the variance of the position, the probability of error is massively amplified: we don’t have to naturally get a 6 sigma displacement, we only need to get roughly a $6/\sqrt{2}$ sigma displacement and wait for it to rotate into place. This is wrong because we’re dealing with rare events here, and for the above scenario to work out, we actually need to simultaneously get $6/\sqrt{2}$ displacement and $6/\sqrt{2}$ momentum, both of which are rare and independent.
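To put rough numbers on that naive estimate (a small illustrative check, again assuming scipy): treating the amplitude as a Gaussian with twice the position variance would suggest a far larger error probability than the correct $e^{-a^2/2}$.

```python
# Contrast the (wrong) "amplitude is Gaussian with variance 2" estimate with
# the correct amplitude tail exp(-a^2/2) at a = 6.
import numpy as np
from scipy.stats import norm

a = 6.0
naive = norm.sf(a / np.sqrt(2))   # P(N(0,1) > 6/sqrt(2)) ~ 1.1e-5: far too big
correct = np.exp(-a**2 / 2)       # ~1.5e-8
print("naive amplitude-as-Gaussian estimate:", naive)
print("correct amplitude tail:             ", correct)
```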
So it’s quite interesting that the actual answer is in between, and comes, roughly speaking, from rotating the tail of the distribution around by a full circle of circumference $2\pi a$.
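Putting the two pieces together (a rough back-of-the-envelope, using the roughly 10 draws per clock cycle from option 3 and the per-try probability above):

```python
# Rough per-clock-cycle error probability: ~10 independent amplitude draws
# per 1 ns cycle (one per 0.1 ns relaxation time), each at ~exp(-18).
import numpy as np

draws_per_cycle = 1.0 / 0.1        # 1 ns clock cycle / 0.1 ns relaxation time
p_per_draw = np.exp(-6.0**2 / 2)   # ~1.5e-8 per draw
p_cycle = draws_per_cycle * p_per_draw
print("p per clock cycle ~", p_cycle)  # ~1.5e-7, i.e. above 1e-7 (option 5)
```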
Anyway, very cool and interesting question! Thanks for sharing it.