[LINK] The Bayesian Second Law of Thermodynamics

Sean Carroll et al. posted a preprint with the above title. Sean also has a discussion of it on his blog.

While I am a physicist by training, statistical mechanics and thermodynamics are not my strong suit, and I hope someone with expertise in the area can give their perspective on the paper. For now, here is my summary; apologies for any errors:

There is a tension between two different definitions of entropy. Boltzmann entropy, which counts macroscopically indistinguishable microstates, always increases, except for extremely rare decreases. Gibbs/Shannon entropy, which quantifies our knowledge of a system, can decrease if an observer examines the system and learns something new about it. Jaynes had a paper on this topic, Eliezer discussed it in the Sequences, and spxtr recently wrote a post about it. Now Carroll and collaborators propose the "Bayesian Second Law," which quantifies this decrease in Gibbs/Shannon entropy due to a measurement:
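For reference, the two entropies being contrasted are, in standard notation (with $k_B$ the Boltzmann constant, $\Omega$ the number of microstates compatible with the macrostate, and $p_i$ the probability an observer assigns to microstate $i$):

$$ S_{\text{Boltzmann}} = k_B \ln \Omega, \qquad S_{\text{Gibbs/Shannon}} = -k_B \sum_i p_i \ln p_i. $$

The first depends only on the coarse-grained macrostate; the second depends on the probability distribution the observer assigns, which is why a measurement can lower it.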

[...] we derive the Bayesian Second Law of Thermodynamics, which relates the original (un-updated) distribution at initial and final times to the updated distribution at initial and final times. That relationship makes use of the cross entropy between two distributions [...]
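The cross entropy here is the standard information-theoretic quantity: for distributions $p$ and $q$ over microstates $x$,

$$ H(p, q) = -\sum_x p(x) \ln q(x), $$

the expected surprise of someone who predicts with $q$ when the system is actually distributed according to $p$. As I read the paper, $p$ plays the role of the updated distribution and $q$ the un-updated one.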

[...] the Bayesian Second Law (BSL) tells us that this lack of knowledge — the amount we would learn on average by being told the exact state of the system, given that we were using the un-updated distribution — is always larger at the end of the experiment than at the beginning (up to corrections because the system may be emitting heat).
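Schematically, as I read this quote (my paraphrase, not the paper's exact notation or sign conventions), with $\rho_m$ the updated distribution, $\rho$ the un-updated one, and $t_i$, $t_f$ the initial and final times:

$$ H\big(\rho_m(t_f),\, \rho(t_f)\big) + (\text{heat-related terms}) \;\geq\; H\big(\rho_m(t_i),\, \rho(t_i)\big), $$

i.e. the observer's expected remaining uncertainty, measured against the un-updated distribution, does not decrease over the course of the experiment, up to terms involving heat exchanged with the environment.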

This last point seems to resolve the tension between the two definitions of entropy, and it has applications to non-equilibrium processes, where the role of the measurement is played by the occurrence of some natural process, such as RNA self-assembly.