My name is Charles Renshaw-Whitman. I am a physicist (‘symbol gremlin’) by training, currently a MATS scholar studying the connection between the structure of natural data and the structure of learned computations.
Currently training away an aversion to sharing my writing/thoughts publicly—please modulate tone of comments accordingly :)
Hey, commendations on sharing your update.
Another similar line of work I like is Roberts and Yaida's "Principles of Deep Learning Theory" (PDLT): a similar-in-spirit approach to MFT, but they perturb around a different limit and obtain feature learning as a finite-width effect. I haven't studied MFT closely enough to compare the validity of the two; my guess is that MFT is the more relevant description. PDLT at least does a very good job of modernizing the NTK approach and connecting it to the older literature. I'm a fanboy, as it was my gateway drug for learning theory lol.