Turning Up the Heat: Insights from Tao’s ‘Analysis II’

Foreword

It’s been too long—a month and a half since my last review, and about three months since Analysis I. I’ve been immersed in my work for CHAI, but reality doesn’t grade on a curve, and I want more mathematical firepower.

On the other hand, I’ve been cooking up something really special, so watch this space!

Analysis II

12: Metric Spaces

Metric spaces; completeness and compactness.

Proving Completeness

It sucks, and I hate it.

13: Continuous Functions on Metric Spaces

Generalized continuity, and how it interacts with the considerations introduced in the previous chapter. Also, a terrible introduction to topology.

There’s a lot I wanted to say here about topology, but I don’t think my understanding is good enough to break things down—I’ll have to read an actual book on the subject.

14: Uniform Convergence

Pointwise and uniform convergence, the Weierstrass $M$-test, and uniform approximation by polynomials.

Breaking Point

Suppose we have some sequence of functions $f_n : [0, 1] \to \mathbb{R}$, $f_n(x) := x^n$, which converge pointwise to the 1-indicator function $f$ (i.e., $f(1) = 1$ and $f(x) = 0$ otherwise). Clearly, each $f_n$ is (infinitely) differentiable; however, the limiting function isn’t even continuous at $x = 1$, let alone differentiable there! Basically, pointwise convergence isn’t at all strong enough to stop the limit from “snapping” the continuity of its constituent functions.
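To make the “snapping” concrete, here is a small numerical sketch (Python with NumPy; the grid size and the particular values of $n$ are just illustrative), watching the maximum distance between $f_n(x) = x^n$ and its pointwise limit refuse to shrink:

```python
import numpy as np

def f_n(x, n):
    """The smooth functions f_n(x) = x^n on [0, 1]."""
    return x ** n

def f_limit(x):
    """Pointwise limit: the 1-indicator function (1 at x = 1, 0 elsewhere)."""
    return np.where(x == 1.0, 1.0, 0.0)

xs = np.linspace(0.0, 1.0, 10_001)  # a fine grid on [0, 1]
for n in [1, 10, 100, 1000]:
    gap = np.max(np.abs(f_n(xs, n) - f_limit(xs)))
    print(f"n = {n:4d}: max grid distance to the limit = {gap:.4f}")

# Each f_n is infinitely differentiable, yet the maximum distance to the
# (discontinuous) limit stays near 1 for every n shown: the convergence is
# pointwise but not uniform, so continuity can "snap" in the limit.
```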

Progress

As in previous posts, I mark my progression by sharing a result derived without outside help.

Already proven: $\int_{-1}^{1} (1 - x^2)^N\,dx \geq \frac{1}{\sqrt{N}}$ for every natural number $N \geq 1$.

Definition. Let $\varepsilon > 0$ and $0 < \delta < 1$. A function $f : \mathbb{R} \to \mathbb{R}$ is said to be an $(\varepsilon, \delta)$-approximation to the identity if it obeys the following three properties:

  • $f$ is compactly supported on $[-1, 1]$.
  • $f$ is continuous, and $\int_{-\infty}^{\infty} f = 1$.
  • $|f(x)| \leq \varepsilon$ for all $\delta \leq |x| \leq 1$.

Lemma: For every $\varepsilon > 0$ and $0 < \delta < 1$, there exists an $(\varepsilon, \delta)$-approximation to the identity which is a polynomial on $[-1, 1]$.

Proof of Exercise 14.8.2(c). Suppose $N \geq 1$; define $f(x) := c(1 - x^2)^N$ for $x \in [-1, 1]$ and $f(x) := 0$ otherwise. Clearly, $f$ is compactly supported on $[-1, 1]$ and is continuous. We want to find $c$ and $N$ such that the second and third properties are satisfied. Since $(1 - x^2)^N$ is non-negative on $[-1, 1]$, $c$ must be positive, as $f$ must integrate to $1$. Therefore, $f$ is non-negative.

We want to show that $|f(x)| \leq \varepsilon$ for all $\delta \leq |x| \leq 1$. Since $f$ is non-negative, we may simplify to $c(1 - x^2)^N \leq \varepsilon$. Since the left-hand side is strictly monotone increasing on $[-1, 0]$ and strictly monotone decreasing on $[0, 1]$, we substitute $x = \delta$ without loss of generality. As $(1 - \delta^2)^N > 0$, we may take the reciprocal and multiply by $\varepsilon$, arriving at $c \leq \varepsilon(1 - \delta^2)^{-N}$.

We want $\int_{-\infty}^{\infty} f = 1$; as $f$ is compactly supported on $[-1, 1]$, this is equivalent to $\int_{-1}^{1} c(1 - x^2)^N\,dx = 1$. Using basic properties of the Riemann integral, we have $c = \left(\int_{-1}^{1} (1 - x^2)^N\,dx\right)^{-1}$. Substituting in for $c$, it suffices that

$$\varepsilon(1 - \delta^2)^{-N} \;\geq\; \sqrt{N} \;\geq\; \left(\int_{-1}^{1} (1 - x^2)^N\,dx\right)^{-1},$$

with the second inequality already having been proven earlier. Note that although the first inequality is not always true, we can make it so: since $\delta$ is fixed and $0 < 1 - \delta^2 < 1$, the left-hand side grows more quickly than $\sqrt{N}$ does. Then set $N$ to be any natural number large enough that this inequality is satisfied. Finally, we set $c := \left(\int_{-1}^{1} (1 - x^2)^N\,dx\right)^{-1}$. By construction, these values of $c$ and $N$ satisfy the second and third properties. □
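As a sanity check on this construction, here is a short sketch (Python with NumPy; the brute-force search over $N$ is mine, not the bound derived in the proof): normalize $f(x) = c(1 - x^2)^N$ numerically and confirm the tail bound for a sample $\varepsilon$ and $\delta$.

```python
import numpy as np

eps, delta = 0.1, 0.5  # sample (eps, delta)-approximation parameters

def candidate(N, grid=200_001):
    """Build f(x) = c * (1 - x^2)^N on [-1, 1], with c chosen so f integrates to 1."""
    xs = np.linspace(-1.0, 1.0, grid)
    dx = xs[1] - xs[0]
    bump = (1.0 - xs ** 2) ** N
    c = 1.0 / (bump.sum() * dx)        # numerically normalize the integral to 1
    return xs, dx, c * bump

# Search for an N giving the third property: |f(x)| <= eps whenever delta <= |x| <= 1.
for N in range(1, 200):
    xs, dx, f = candidate(N)
    tail = f[np.abs(xs) >= delta].max()
    if tail <= eps:
        print(f"N = {N}: integral ~ {f.sum() * dx:.4f}, max value on the tail ~ {tail:.4f}")
        break

# The bump is a polynomial on [-1, 1], integrates to (roughly) 1, is non-negative,
# and is at most eps outside (-delta, delta): all three properties, numerically.
```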

Convoluted No Longer

Those looking for an excellent explanation of convolutions, look no further!

Weierstrass Approximation Theorem

Theorem. Suppose $f : \mathbb{R} \to \mathbb{R}$ is continuous and compactly supported on $[a, b]$. Then for every $\varepsilon > 0$, there exists a polynomial $P$ such that $\sup_{x \in [a, b]} |P(x) - f(x)| \leq \varepsilon$.

In other words, any continuous, real-valued function on a finite interval can be approximated with arbitrary precision by polynomials.
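To see the theorem in action, here is a short sketch (Python with NumPy, using Bernstein polynomials rather than Tao’s convolution construction) approximating the continuous-but-kinked function $f(x) = |x - 1/2|$ on $[0, 1]$:

```python
import numpy as np
from math import comb

def bernstein_approx(f, n, xs):
    """Evaluate the degree-n Bernstein polynomial of f (on [0, 1]) at the points xs."""
    ks = np.arange(n + 1)
    coeffs = np.array([comb(n, k) for k in ks], dtype=float)
    # sum over k of f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)
    basis = coeffs * xs[:, None] ** ks * (1 - xs[:, None]) ** (n - ks)
    return basis @ f(ks / n)

f = lambda x: np.abs(x - 0.5)      # continuous, but not differentiable at 1/2
xs = np.linspace(0.0, 1.0, 1001)

for n in [4, 16, 64, 256]:
    err = np.max(np.abs(bernstein_approx(f, n, xs) - f(xs)))
    print(f"degree {n:3d}: sup-norm error ~ {err:.4f}")

# The sup-norm error keeps shrinking as the degree grows, exactly as the
# Weierstrass approximation theorem promises (though slowly near the kink).
```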

Why I’m talking about this. On one hand, this result makes sense, especially after taking machine learning and seeing how polynomials can be contorted into basically whatever shape you want.

On the other hand, I find this theorem intensely beautiful. Its proof was slowly constructed, much to the reader’s benefit. I remember the very moment the proof sketch came to me, newly-installed gears whirring happily.

15: Power Series

Real analytic functions, Abel’s theorem, $\exp$ and $\log$, complex numbers, and trigonometric functions.

Cached thought from my CS undergrad: exponential functions always end up growing more quickly than polynomials, no matter the degree. Now, I finally have the gears to see why:

$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots$$

$e^x$ has all the degrees, so no polynomial (of necessarily finite degree) could ever hope to compete! This also suggests why $\frac{d}{dx} e^x = e^x$.
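A quick sanity check of that intuition (Python; the sample values of $x$ are arbitrary): even $x^{10}$ eventually loses to $e^x$, because the series for $e^x$ contains an $x^{11}/11!$ term, and every higher degree besides.

```python
import math

for x in [5, 10, 20, 40, 80]:
    poly = x ** 10          # a degree-10 polynomial's dominant term
    expo = math.exp(x)      # e^x = sum of x^n / n! over all degrees n
    print(f"x = {x:3d}: x^10 / e^x ~ {poly / expo:.3e}")

# The ratio eventually collapses toward 0: past some point, the x^11/11!,
# x^12/12!, ... terms inside e^x dwarf any fixed-degree polynomial.
```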

Complex Exponentiation

You can multiply a number by itself some number of times.

[nods]

You can multiply a number by itself a negative number of times.

[Sure.]

You can multiply a number by itself an irrational number of times.

[OK, I understand limits.]

You can multiply a number by itself an imaginary number of times.

[Out. Now.]

Seriously, this one’s weird (rather, it seems weird, but how can “how the world is” be “weird”?).

Suppose we have some $z = a + bi$, where $a, b \in \mathbb{R}$. Then $e^z = e^a e^{bi}$, so “all” we need to figure out is how to take an imaginary exponent. Brian Slesinsky has us covered.
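A tiny check of that decomposition (Python’s cmath; Euler’s formula $e^{bi} = \cos b + i \sin b$ is doing the real work):

```python
import cmath, math

a, b = 1.5, 2.0   # an arbitrary exponent a + bi

direct = cmath.exp(complex(a, b))                         # e^(a + bi), computed directly
split = math.exp(a) * complex(math.cos(b), math.sin(b))   # e^a * (cos b + i sin b)

print(direct)   # roughly (-1.865+4.075j)
print(split)    # the same: e^a sets the length, the imaginary exponent only rotates
```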

Years before becoming involved with the rationalist community, Nate asks this question, and Qiaochu answers.

This isn’t a coincidence, because nothing is ever a coincidence.

Or maybe it is a coincidence, because Qiaochu answered every question on StackExchange.

16: Fourier Series

Periodic functions, trigonometric polynomials, periodic convolutions, and the Fourier theorem.

17: Several Variable Differential Calculus

A beautiful unification of Linear Algebra and calculus: linear maps as derivatives of multivariate functions, partial and directional derivatives, Clairaut’s theorem, contractions and fixed points, and the inverse and implicit function theorems.

Implicit Function Theorem

If you have a set of points in $\mathbb{R}^n$, when do you know if it’s secretly a function $g : \mathbb{R}^{n-1} \to \mathbb{R}$? For functions $f : \mathbb{R} \to \mathbb{R}$, we can just use the geometric “vertical line test” to figure this out, but that’s a bit harder when you only have an algebraic definition. Also, sometimes we can implicitly define a function locally by restricting its domain (even if no explicit form exists for the whole set).

Theorem. Let $E$ be an open subset of $\mathbb{R}^n$, let $f : E \to \mathbb{R}$ be continuously differentiable, and let $y = (y_1, \dots, y_n)$ be a point in $E$ such that $f(y) = 0$ and $\frac{\partial f}{\partial x_n}(y) \neq 0$. Then there exists an open $U \subseteq \mathbb{R}^{n-1}$ containing $(y_1, \dots, y_{n-1})$, an open $V \subseteq E$ containing $y$, and a function $g : U \to \mathbb{R}$ such that $g(y_1, \dots, y_{n-1}) = y_n$, and

$$\{(x_1, \dots, x_n) \in V : f(x_1, \dots, x_n) = 0\} = \{(x_1, \dots, x_{n-1}, g(x_1, \dots, x_{n-1})) : (x_1, \dots, x_{n-1}) \in U\}.$$

So, I think what’s really going on here is that we’re using the derivative at this known zero to locally linearize the manifold we’re operating on (similar to Newton’s approximation), which lets us have some neighborhood in which we can derive an implicit function, even if we can’t always write it out.
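Here is a minimal numerical illustration (Python; the unit circle is my example, not Tao’s): take $f(x, y) = x^2 + y^2 - 1$, a point on the circle where $\partial f / \partial y \neq 0$, and recover the implicit $g(x)$ nearby with Newton’s method.

```python
import math

def f(x, y):
    """f(x, y) = x^2 + y^2 - 1: its zero set is the unit circle."""
    return x * x + y * y - 1.0

def df_dy(x, y):
    return 2.0 * y

def g(x, y_guess, steps=20):
    """Locally solve f(x, y) = 0 for y near y_guess via Newton's method in y."""
    y = y_guess
    for _ in range(steps):
        y -= f(x, y) / df_dy(x, y)   # needs df/dy != 0, i.e., staying away from y = 0
    return y

x0, y0 = 0.6, 0.8                    # a known zero: 0.36 + 0.64 - 1 = 0
for x in [0.5, 0.6, 0.7]:            # a small neighborhood around x0
    y = g(x, y_guess=y0)
    print(f"x = {x:.2f}: g(x) ~ {y:.6f}, sqrt(1 - x^2) = {math.sqrt(1 - x * x):.6f}")

# Near (0.6, 0.8) the zero set really is the graph of a function y = g(x).
# Near (1, 0), where df/dy = 0, no such local function of x exists: the circle doubles back.
```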

18: Lebesgue Measure

Outer measure; measurable sets and functions.

Tao lists desiderata for an ideal measure before deriving it. Imagine that.

19: Lebesgue Integration

Building up the Lebesgue integral, culminating with Fubini’s theorem.

Conceptual Rotation

Suppose $\Omega \subseteq \mathbb{R}^n$ is measurable, and let $f : \Omega \to [0, \infty]$ be a measurable, non-negative function. The Lebesgue integral of $f$ is then defined as

$$\int_\Omega f := \sup\left\{\int_\Omega s : s \text{ is simple, non-negative, and minorizes } f\right\}.$$

This hews closely to how we defined the lower Riemann integral in Chapter 11; however, we don’t need the equivalent of the upper Riemann integral for the Lebesgue integral.

To see why, let’s review why Riemann integrability demands the equality of the lower and upper Riemann integrals of a function $f$. Suppose that we integrate over $[0, 1]$, and that $f$ is the indicator function for the rationals. As the rationals are dense in the reals, any interval $[a, b]$ (with $a < b$) contains rational numbers, no matter how much the interval shrinks! Therefore, the upper Riemann integral equals 1, while the lower equals 0 (for similar reasons). $f$ is Lebesgue integrable; since it’s 0 almost everywhere (as the rationals have measure 0), its integral is 0.

This marks a fundamental shift in how we integrate. With the Riemann integral, we consider the $\inf$ and $\sup$ of increasingly-refined upper and lower Riemann sums—this is the length approach, carving up the domain. In Lebesgue integration, however, we consider which set of inputs is responsible for each value $y$ in the range (i.e., $f^{-1}(\{y\})$), multiplying $y$ by the measure of that set—this is inversion.
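Here is a small sketch of the contrast (Python; the tagged sums and the rationality test via `Fraction` are only illustrative): length-based sums for the indicator of the rationals on $[0, 1]$ depend entirely on where you sample, while the inversion view gives the Lebesgue answer immediately.

```python
from fractions import Fraction
import math

def dirichlet(x):
    """Indicator of the rationals; here, 1 exactly when x is represented as a Fraction."""
    return 1 if isinstance(x, Fraction) else 0

n, dx = 1000, Fraction(1, 1000)

# Length-based (Riemann-style) tagged sums over [0, 1]: the answer depends entirely
# on which tag you pick in each subinterval, since rationals and irrationals are both dense.
rational_tags = sum(dirichlet(Fraction(k, n)) for k in range(n)) * dx
irrational_tags = sum(dirichlet(k / n + math.sqrt(2) / 1e9) for k in range(n)) * dx
print(float(rational_tags), float(irrational_tags))   # 1.0 vs 0.0

# Inversion (Lebesgue-style): ask how much measure each output value is responsible for.
# f = 1 on a measure-zero set (the rationals) and f = 0 on a measure-one set.
print(1 * 0 + 0 * 1)   # the Lebesgue integral: 0
```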

In a sense, the Lebesgue integral more cleanly strikes at the heart of what it means to integrate. Surely, Riemann integration was not far from the mark; however, if you rotate the problem slightly in your mind, you will find a better, cleaner way of structuring your thinking.

Final Thoughts

Although Tao botches a few exercises and the section on topology, I’m a big fan of Analysis I and II. Do note, however, that II is far more difficult than I (not just in content, but in terms of the exercises). He generally provides relevant, appropriately-difficult problems, and is quite adept at helping the reader develop rigorous and intuitive understanding of the material.

Forwards

Next is Jaynes’ Probability Theory.

Tips

  • To avoid getting hung up in Chapter 17, this book should be read after a linear algebra text.

  • Don’t do Exercise 17.6.3—it’s wrong.

  • Deep understanding comes from sweating it out. Don’t hide, don’t wave away bothersome details—stay and explore. If you follow my strategy of quickly generating outlines, ask yourself: can you formally and precisely write out each step?

Verification

I completed every exercise in this book; in the second half, I avoided looking at the hints provided with problems until I’d already thought for a few minutes. Often, I’d solve the problem and then turn to the hint: “be careful when doing X—don’t forget edge case Y; hint: use lemma Z”! A pit would form in my stomach as I prepared to locate my mistake and back-propagate where-I-should-have-looked, before realizing that I’d already taken care of that edge case using that lemma.

Why Bother?

One can argue that my time would be better spent picking up things as I work on problems in alignment. However, while I’ve made, uh, quite a bit of progress with impact measures this way, concept-shaped holes are impossible to notice. If there’s some helpful information-theoretic way of viewing a problem that I’d only realize if I had already taken information theory, I’m out of luck.

Also, developing mathematical maturity brings with it a more rigorous thought process.

Fairness

There’s a sense I get where even though I’ve made immense progress over the past few months, it still might not be enough. The standard isn’t “am I doing impressive things for my reference class?”, but rather the stricter “am I good enough to solve serious problems that might not get solved in time otherwise?”. This is quite the standard, and even given my textbook and research progress (including the upcoming posts), I don’t think I meet it.

In a way, this excites me. I welcome any advice for buckling down further and becoming yet stronger.


If you are interested in working with me or others on the task of learning MIRI-relevant math, if you have a burning desire to knock the alignment problem down a peg—I would be more than happy to work with you. Messaging me may also net you an invitation to the MIRIx Discord server.

On a related note: thank you to everyone who has helped me; in particular, TheMajor has been incredibly generous with their explanations and encouragement.