In a comment on my last post, someone deplored the tendency of LW bloggers to rediscover ideas of famous philosophers and pretend that they discovered them first. This made me think of an interesting question of epistemology: is there value in reformulating and/or rediscovering things?

A naive answer would be no: after all, we already have the knowledge. But a look at the history of science brings up many examples showing otherwise.

In physics, the most obvious case comes from the formulations of classical mechanics. Newton gave the core of the theory in his original formulation, but what gets applied in modern science, and what played a key part in the revolutions of quantum mechanics and quantum electrodynamics among others, are the other two formulations: the Lagrangian and the Hamiltonian. And these two, particularly the Lagrangian, pushed and pioneered the least action principle in physics, capturing every system through an action that is either minimized or maximized along each trajectory. This principle is one of the core building blocks of modern physics, and it came out of a pure compression and reformulation of Newtonian mechanics!
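As a sketch in standard textbook notation (not the post's own), the principle says that a physical trajectory makes the action stationary:

```latex
S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt,
\qquad \delta S = 0,
\qquad \text{equivalently} \qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0 .
```

For variations vanishing at the endpoints, the stationarity condition is equivalent to the Euler–Lagrange equations on the right; with \(L = T - V\), these recover Newton's second law.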

Similarly, potential energy started as a mathematical trick: Lagrange's potential function, a single function whose partial derivatives gave all the gravitational forces acting on a system of N bodies. Once again, there was nothing new here; even more than the Lagrangian and the action, this was a computational trick that sped up tedious calculations. And yet. And yet it’s hard to envision modern physics, or even classical mechanics, without the potential.
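In modern notation (standard today, not Lagrange's own), the trick is that one scalar function encodes every pairwise gravitational force in an N-body system:

```latex
V(\mathbf{r}_1, \dots, \mathbf{r}_N)
  = -\sum_{i < j} \frac{G m_i m_j}{\lVert \mathbf{r}_i - \mathbf{r}_j \rVert},
\qquad
\mathbf{F}_i = -\nabla_{\mathbf{r}_i} V .
```

The N(N-1)/2 separate force laws collapse into a single function to differentiate.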

In modern evolutionary biology, many advances like exaptation can be brought back to Darwin’s seminal work. And yet they often reveal different aspects of the idea, clarifying and bringing new intuitions to bear that make our models of life and its constraints that much richer.

Again, in theoretical computer science, reframings of the same core ideas are at the center of many productive inquiries:

all the models of computation unified by the Church–Turing thesis, all equivalent yet providing different angles on questions of computation

the non-deterministic complexity classes like NP reframed as proving/certifying classes, leading to far better intuitions and to interactive proofs and IP=PSPACE

the Curry–Howard isomorphism, which shows that proof calculi like natural deduction and type systems for computation models like the lambda calculus are actually the same thing. But this makes both stronger, rather than weakening them for being the same thing.

All in all, this pattern of rediscovering the same thing from a different perspective, of reframing the known, has been particularly productive in the history of science. Compressions, new formulations, and new framings have delivered to us more than just the original insight.

Why?

Because we’re not logically omniscient — we don’t instantaneously know all the consequences of what we discover. Similarly, we’re not Bayesian-omniscient — we don’t update on all the evidence available in the data. In both cases we are limited by what we can compute, what we can figure out. Reframings, then, are like shortcuts in the vastness of logical consequences, which highlight particularly interesting aspects of the evidence we originally unearthed.

And I believe that one big reason for the misguided and simplistic models of science that so many share lies in the non-obviousness of these reframings’ power. We don’t remember them very well; we tend instead to ascribe the clarity and the power to the original thinker, even when we never used or touched their original formulation. We take for granted the intuitions and instincts that have been patiently built and baked into the concepts we use.

As Oliver Darrigol writes in Worlds of Flow, his history of hydrodynamics:

There is, however, a puzzling contrast between the conciseness and ease of the modern treatment of these topics, and the long, difficult struggles of nineteenth-century physicists with them. For example, a modern reader of Poisson’s old memoir on waves finds a bewildering accumulation of complex calculations where he would expect some rather elementary analysis. The reason for this difference is not any weakness of early nineteenth-century mathematicians, but our overestimation of the physico-mathematical tools that were available in their times. It would seem, for instance, that all that Poisson needed to solve his particular wave problem was Fourier analysis, which Joseph Fourier had introduced a few years earlier. In reality, Poisson only knew a raw, algebraic version of Fourier analysis, whereas modern physicists have unconsciously assimilated a physically ‘dressed’ Fourier analysis, replete with metaphors and intuitions borrowed from the concrete wave phenomena of optics, acoustics, and hydrodynamics. In our mind, a Fourier component is no longer a mere coefficient in an algebraic development, it is a periodic wave that may interfere with other waves in a manner we can easily imagine. The transition from a dry mathematical analysis to a genuinely physico-mathematical analysis occurred gradually in the nineteenth century, through reversible analogies between different domains of physics. It concerned not only Fourier analysis, but also the theory of ordinary differential equations, potential theory, perturbative methods, Cauchy’s method of residues, etc. The modern recourse to such mathematical techniques involves a great deal of implicit knowledge that only becomes apparent in comparisons with older usage.

This post is garbage because someone else has already said the same thing earlier. Heehee :-P

See also the fact that writing textbooks and review articles is widely seen as a worthwhile activity.

I have 200 citations on a physics paper with (I believe) no original content, and indeed barely any content that wasn’t already known in 1900.

Agreed with the overall point in this post that there is value in reframing and rediscovery. However,

consists of two points and I think the second one also deserves some consideration.

I don’t agree with the framing of pretense: if you don’t know about the earlier idea, you probably sincerely think you discovered it. But if such a “discovery” turns out to be a reframing after all, I think there is also a lot of value to be had in pointing this out: it integrates the idea into the common web of knowledge, and it makes clear to others that the idea exists in another form that they might already know, or that might help to deepen their understanding. So I would urge readers (or posters themselves) to please keep pointing these correspondences out; in a spirit of helpfulness, of course, not as a ‘gotcha’.

Agreed. In addition to the point about deepening understanding, see also this comment by Jacob Steinhardt: if the relationship to existing work isn’t pointed out, that makes it harder to know whether it’s worth reading the post or not (for readers who are aware of the previous work).

Anna Salamon made a point like this in a post several years ago: https://www.lesswrong.com/posts/ZGzDNfNCXzfx6hYAH/how-to-learn-soft-skills

It’s something that really stuck with me. Not all minds are alike, and it’s often worth finding your own words to say things that others have said. It’s useful to you, and it can be useful to others.

The thing I think many LW writers get wrong is that they aren’t humble about it. They rediscover something and act like they invented it, mostly because there seems to be some implicit belief that we’re better than those who came before us because we have Rationality(tm). I’ve been guilty of this, as have many others.

I saw another thing recently which put this idea about reinventing ideas in a new light. The author mentioned that when they were studying in a yeshiva, everyone celebrated when one of the students rediscovered an argument made by an earlier writer, and the older the original author, the better. It was a sign that the person was really grasping the ideas and was getting closer to God than the other students.

Whatever you think of studying rabbinical texts, this seems like a healthy sentiment to adopt when someone rediscovers an idea.

The mathematics of Minkowski space was around for quite a while, but people associate Einstein with spacetime. This is kind of a weird reverse example, where the reframing gets all the credit: the “just mathematics” is left with footnote levels of fame even though it is quite a big chunk of the engine.

Stack Overflow moderators would beg to differ.

But yes, retreading old ground can be very useful. Just from the standpoint of education, actually going through the process of discovery can instill a much deeper understanding of the subject than is possible just from reading or hearing a lecture about it. And if the discovery is a stepping stone to further discoveries, then those who’ve developed that level of understanding will be at an advantage to push the boundaries of the field.

I think it is definitely the case that most physicists have quite a wrong picture of past physics.

I have some historical information about the work of Joseph Louis Lagrange. (I don’t think it affects the thrust of your post.)

An English translation of Lagrange’s ‘Mecanique Analytique’ (Analytical Mechanics) is available on Archive.org. Lagrange used the calculus of variations for problems in statics, but not for problems in mechanics.

At the time, the action concept that was available was Maupertuis’ action. (Hamilton’s action was introduced by William Rowan Hamilton in 1834; Lagrange died in 1813.) In his work ‘Mecanique Analytique’ Lagrange offers the opinion that Maupertuis’ action is not particularly relevant. Quoting one sentence from page 183 of the English translation: “[...] which I view not as a metaphysical principle but as a simple and general result of the laws of mechanics.”

I want to emphasize the necessity of using the name ‘stationary action’. You do acknowledge that the action can be “either minimized or maximized”. To acknowledge that, and to keep using the name ‘least action’, amounts to self-contradiction. I understand that many people prefer the name ‘least action’ because it just sounds sexier than ‘stationary action’. But it is what it is: minimum or maximum is immaterial. The criterion is: identify the point in variation space where the derivative of Hamilton’s action is zero.
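In symbols (standard notation, not the commenter’s own), the criterion is stationarity rather than minimality:

```latex
\delta S = \delta \int_{t_1}^{t_2} L \, dt = 0
```

for variations that vanish at the endpoints; whether the stationary point is a minimum, a maximum, or a saddle is immaterial.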