# Foreword

> Sometimes, it’s easier to say how things change than to say how things are.

~ 3Blue1Brown, Differential Equations

When you write down a differential equation, you’re specifying constraints and information about how to model some part of the world. This gives you a family of solutions, from which you can pick out whichever function fits the details of the problem at hand.

Today, I finished the bulk of Logan’s A First Course in Ordinary Differential Equations, which is easily the best ODE book I’ve come across.

# A First Course in Or­di­nary Differ­en­tial Equations

As usual, I’ll just talk about random cool things from the book.

## Bee Movie

In the summer of 2018, at a MIRI–CHAI intern workshop, I witnessed a fascinating debate: what mathematical function represents the movie time elapsed in videos like “The Entire Bee Movie but every time it says bee it speeds up by 15%”? That is, what mapping converts the viewer timestamp to the movie timestamp for this video?

I don’t remember their conclusion, but it’s simple enough to answer. Let m(v) be the movie timestamp reached at viewer timestamp v, and suppose f(t) counts how many times a character has said the word “bee” by timestamp t in the movie. Since the viewing speed itself increases exponentially with the bee count, we have m′(v) = 1.15^f(m(v)). Furthermore, since the video starts at the beginning of the movie, we have the initial condition m(0) = 0.

This problem cannot be cleanly solved analytically (because f is discontinuous and obviously lacking a clean closed form), but is expressed by a beautiful and simple differential equation.
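Even without a closed form, the mapping is easy to approximate numerically. Here’s a minimal Euler-integration sketch, using a toy bee-count function (one “bee” every 30 movie-seconds) as a stand-in for the movie’s actual script:

```python
import math

def bee_count(m):
    """Toy stand-in for f(m): assume "bee" has been said once every
    30 movie-seconds. The real f would come from the script."""
    return math.floor(m / 30)

def movie_time(viewer_time, dt=0.01):
    """Euler-integrate m'(v) = 1.15**f(m(v)) with m(0) = 0."""
    m = 0.0
    for _ in range(round(viewer_time / dt)):
        m += dt * 1.15 ** bee_count(m)
    return m
```

With this toy f, one viewer-minute already covers more than a movie-minute, and the gap widens as the bee count climbs.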

## Gears-level mod­els?

Differential equations help us explain and model phenomena, often giving us insight into causal factors: for a trivial example, a population might grow more quickly because that population is larger.

## Equil­ibria and sta­bil­ity theory

This material gave me a great conceptual framework for thinking about stability. Here are some good handles:

Let’s think about rocks and hills. An unstable equilibrium has the rock rolling away, forever lost, no matter how lightly it’s nudged, while a locally stable equilibrium has some level of tolerance within which the rock settles back down. For a globally stable equilibrium, no matter how hard the perturbation, the rock comes rolling back down the parabola.
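The rock-and-hill picture corresponds to linearization: an equilibrium x* of x′ = g(x) is stable when g′(x*) < 0 and unstable when g′(x*) > 0. A minimal sketch, using g(x) = x − x³ as an assumed example (equilibria at −1, 0, 1):

```python
def g(x):
    # example autonomous ODE: x' = x - x**3
    return x - x**3

def classify(x_star, h=1e-6):
    """Classify an equilibrium by the sign of g'(x*),
    estimated with a central difference."""
    slope = (g(x_star + h) - g(x_star - h)) / (2 * h)
    return "stable" if slope < 0 else "unstable"

for eq in (-1.0, 0.0, 1.0):
    print(eq, classify(eq))
```

The rock on the hilltop is x* = 0 (unstable); the two valleys are x* = ±1 (stable).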

## Resonance

> A familiar example is a playground swing, which acts as a pendulum. Pushing a person in a swing in time with the natural interval of the swing (its resonant frequency) makes the swing go higher and higher (maximum amplitude), while attempts to push the swing at a faster or slower tempo produce smaller arcs. This is because the energy the swing absorbs is maximized when the pushes match the swing’s natural oscillations.

~ Wikipedia

And that’s also how the Tacoma Narrows Bridge collapsed in 1940. The second-order differential equations underlying this allow us to solve for the forcing function that could induce catastrophic resonance.

Also note that any given system has at most one resonant frequency, because even lower octaves of the natural frequency would produce destructive interference a good amount of the time.
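For the textbook damped, driven oscillator x″ + 2ζω₀x′ + ω₀²x = F·cos(ωt) (a standard model, not the bridge’s actual dynamics), the steady-state amplitude has a closed form, and evaluating it shows the peak near the natural frequency:

```python
import math

def amplitude(omega, omega0=1.0, zeta=0.05, F=1.0):
    """Steady-state amplitude of the damped driven oscillator
    x'' + 2*zeta*omega0*x' + omega0**2 * x = F*cos(omega*t)."""
    return F / math.sqrt((omega0**2 - omega**2) ** 2
                         + (2 * zeta * omega0 * omega) ** 2)

# Pushing near the natural frequency gives by far the biggest swing.
for w in (0.5, 0.9, 1.0, 1.1, 2.0):
    print(w, round(amplitude(w), 2))
```

With light damping (ζ = 0.05), driving at ω = ω₀ yields an amplitude an order of magnitude larger than driving at half or double the frequency.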

### Ran­dom notes

• This book gave me a great chance to review my calculus, from integration by parts to the deeper meaning of Taylor’s theorem: that for many functions, you can recover all of the global information from the local information, in the form of derivatives. I don’t fully understand why this doesn’t work for some functions which are infinitely differentiable (like log x), but apparently this becomes clearer after some complex analysis.

• Bifurcation diagrams allow us to model the behavior, birth, and destruction of equilibria as we vary parameters in the differential equation. I’m looking forward to learning more about bifurcation theory. In this video, Veritasium highlights stunning patterns behind the bifurcation diagrams of single-humped functions.
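As a tiny example of equilibria being born and destroyed, consider the saddle-node bifurcation x′ = r + x²: two equilibria exist for r < 0, merge at r = 0, and vanish for r > 0. A sketch:

```python
import math

def equilibria(r):
    """Equilibria of x' = r + x**2, i.e. roots of r + x**2 = 0."""
    if r > 0:
        return []            # no equilibria: the pair has annihilated
    if r == 0:
        return [0.0]         # the bifurcation point: one half-stable equilibrium
    s = math.sqrt(-r)
    return [-s, s]           # x = -s is stable, x = +s is unstable

for r in (-1.0, -0.25, 0.0, 0.5):
    print(r, equilibria(r))
```

Sweeping r across zero is exactly the kind of parameter variation a bifurcation diagram records.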

# Forwards

I supplemented my understanding with the first two chapters of Strogatz’s Nonlinear Dynamics and Chaos. I might come back for more of the latter at a later date; I’m feeling like moving on, and I think it’s important to follow that feeling.

• This problem cannot be cleanly solved analytically (because f is discontinuous and obviously lacking a clean closed form), but is expressed by a beautiful and simple differential equation.

Huh, this is a great example. My historical relationship to differential equations has been mostly from the perspective of “it’s a method by which you eventually arrive at good analytic descriptions of systems”. I think this example really concretely illustrates why that perspective is wrong.

• I don’t fully understand why this doesn’t work for some functions which are infinitely differentiable (like log x), but apparently this becomes clearer after some complex analysis.

Because the derivatives aren’t eventually zero? (x^2 is infinitely differentiable, but its derivatives hit zero fast: x^2, 2x, 2, 0, 0, 0, ...)

• The question is ill-founded. You can in fact recover all of the information about log x from its Taylor series. I think TurnTrout is confused maybe because the Taylor series only converges on a certain interval, not globally? I’ll answer the question assuming that’s the confusion.

If you know all the derivatives of log x at x=1, but you know nothing else about log x, then you can find a Taylor series that converges on (0,2). But, given the Taylor series, you now also know all the derivatives at x=1.9. Writing a Taylor series centered at 1.9, you get a series that converges on (0,3.8). Continuing in this fashion, you can find all values of log x, for all positive real inputs, using only the derivatives at x=1. You just need multiple “steps.”
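This two-step recentering is easy to check numerically. A sketch using the standard expansion log x = log c + Σₙ≥₁ (−1)^(n+1) ((x−c)/c)^n / n, valid for |x − c| < c:

```python
import math

def log_taylor(x, center, log_center, terms=200):
    """Taylor series of log about `center`, given log(center):
    log(x) = log(c) + sum_{n>=1} (-1)**(n+1) * ((x-c)/c)**n / n,
    which converges for |x - c| < c."""
    u = (x - center) / center
    return log_center + sum((-1) ** (n + 1) * u**n / n
                            for n in range(1, terms))

# Step 1: derivatives at x=1 (where log(1)=0) reach x=1.9, inside (0, 2)...
step1 = log_taylor(1.9, 1.0, 0.0)
# Step 2: ...and re-expanding at 1.9 reaches x=3, outside the original interval.
step2 = log_taylor(3.0, 1.9, step1)
print(step1, math.log(1.9))
print(step2, math.log(3.0))
```

Both steps agree with math.log to high precision, even though no single series centered at 1 converges at x=3.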

That said, there is a fundamental limitation. Consider the functions f(x) = 1/x and g(x) = {1/x if x > 0, 1 + 1/x if x < 0}. For x > 0, f(x) = g(x), but for x < 0 they are not equal. Clearly both functions are infinitely differentiable, but just because you know all the derivatives of f at x=1 doesn’t mean you can determine its value at x=−1.

Okay, so Taylor series allow you to probe all values of a function, but it might take multiple steps, and singularities cause real, unfixable problems. The correct way to think about this is that functions aren’t just differentiable or not; they are infinitely differentiable *on a set*. For example, 1/x is smooth on (−infinity, 0) union (0, infinity), which is a set with two connected components. The Taylor series allows you to probe all of the values on any individual connected component, but it very obviously can’t tell you anything about other connected components.

As for why it sometimes takes multiple “steps,” like for log x: the Taylor series has to converge on an interval symmetric about its center. For log x centered at x=1, it simply can’t converge at 3 without also converging at −1, which is obviously impossible since −1 is outside the connected component where log x is differentiable. The Taylor series converges on the largest interval where it can possibly converge, but it still tells you the values elsewhere (in the connected component) if you’re willing to work slightly harder.

Everything I said is true for analytic functions. There is still the issue of infinitely differentiable non-analytic functions as described here. Log x is not an example of such a function; log x is analytic. These counterexamples are much more subtle, but it has to do with the fact that an n-th-order Taylor approximation only pins the error down to O(x^n), so even the full Taylor series allows for errors like e^(−1/x^2), because that decays faster than any polynomial.
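The classic infinitely-differentiable-but-non-analytic example is e^(−1/x²), extended by 0 at x = 0: near zero it is smaller than any power of x, so every Taylor coefficient there is forced to be 0, yet the function isn’t identically zero. A quick numerical illustration:

```python
import math

def bump(x):
    """exp(-1/x**2), extended by 0 at x = 0: smooth everywhere,
    but not analytic at 0."""
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# Near 0, bump(x) is crushed below every power x**n, so each Taylor
# coefficient at 0 must be 0 -- yet bump(0.5) is visibly nonzero.
for n in (1, 5, 20):
    print(n, bump(0.1) / 0.1**n)
print(bump(0.5))
```

Its Taylor series at 0 is identically zero, so the series converges everywhere but equals the function only at the single point x = 0.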

• Thank you for this, that’s very helpful.

• Counterexample: is analytic but its derivatives don’t satisfy your proposed condition for being analytic.