Correct on all counts!
What’s happened is the we’ve-evolved-the-Residency-into-two-new-programs thing mentioned in this post. Applications already submitted for the (Residency-turned) Iliad Fellowship via the old Residency form are fine: people do not need to reapply for the Iliad Fellowship through the newer common app.
Relatively soon we’ll update the Fellowship branding on the PrincInt site; the site’s current details about the program are otherwise correct.
Great ramble, but adopting this thesis wouldn’t make me feel any better about smarter-than-human AGI alignment. Rather, I would feel awful, because in your sketched-out world you just cannot realistically reach the level of understanding you would need to feel safe ceding the trump card of being the smartest kind of thing around. If you really, really take the Bitter Lesson to heart, safety is not implied. (Not implying that your above comment says otherwise: as you suggest, the ramble doesn’t cut against Zack’s main thesis here.)
More directly to your point, though, we do sometimes extract the clean mathematical models embedded inside an otherwise messy naturalistic neural network. Most striking to me is the days-of-the-week group result: if you know how to look at the thing from the right angle, the clean mathematical structure apparently reveals itself. (Now admittedly, the whole rest of GPT-2 or whatever is a huge murky mess. So the stage of the science we’re groping towards at the moment is more like “we have a few clean mathematical models of individual phenomena in neural networks that really shine” than “we have anything like a clean grand unified theory.” But confusion is in the map, not in the territory, and all that, even if a particular science is extraordinarily difficult.)