(Self-review.) I was having fun with a rhetorical device in this one, which didn’t land for all readers; I guess that’s how it goes sometimes.
To try to explain what I was trying to do here in plainer words: I feel like a lot of people who read this website but don’t read textbooks walk away with an intuitive picture of deep learning as something like evolving an animal to do your bidding, which is scary because evolution is not controllable.
That was strikingly not the intuitive picture I got from reading standard academic tutorial material on the topic in late 2023 and early 2024. Reading Simon Prince’s Understanding Deep Learning (2023) as a lifetime Less Wrong reader, I got the sense that Prince isn’t really thinking about “AI” as people on this website understand it, even if the term gets used.
It’s a computational statistics book. Prince is writing about a class of techniques for fitting statistical models to data. The fact that statistics happens to have some impressive applications doesn’t make it about summoning a little animal.
The difference in views seemed worth writing about. I was inspired by some Tweets by Charles Foster:

> At some point I switched from seeing neural networks as arcane devices to seeing them as moldable variants of “boring” building blocks from signal processing, feedback control, associative learning, & functional programming. Like some kind of function approximation plastic/epoxy

> Tbh “neural network” is maybe too suggestive of a term. Like it anchors people on vague intuitions about emergence/agency rather than on mechanistic thinking. Call them “nonlinear coupling networks” or “plastic basis functions” or go back to “parallel distributed processors”
Sorry, I worry that I’m still not succeeding at conveying the intuition: probably some people reading this review comment are shaking their heads, disappointed that I seem to be trotting out the AI skeptic’s ignorant “It’s just math; math can’t hurt you” canard. So to be clear (and I think I was clear enough about this in the post; see, e.g., the final paragraph), I absolutely agree that math can kill you, obviously. I’m just saying that after I read the math, the summoning-a-little-animal mental image didn’t seem faithful to it: you should be thinking about how the model’s outputs interpolate the training data, not about how the little animal’s behavior unpredictably fails to reflect some putative utility function that “outer” training failed to instill.
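If it helps to see the “curve fitting” framing in code rather than prose, here’s a minimal sketch (mine, not Prince’s; the toy data, layer size, learning rate, and step count are all made up for illustration, and it only assumes numpy): fit a one-hidden-layer network to five points by gradient descent, and notice that the fitted function passes near the training points and smoothly fills in values between them.

```python
# A minimal sketch of "deep learning as curve fitting": one hidden layer of
# tanh units, plain least-squares loss, hand-derived gradients. All the
# specifics here are illustrative choices, not anything from Prince's book.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: five points sampled from a smooth curve.
x_train = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
y_train = np.sin(3.0 * x_train)

# Parameters for y_hat = W2 @ tanh(x @ W1 + b1) + b2, with 32 hidden units.
W1 = rng.normal(0.0, 1.0, (1, 32))
b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1))
b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    # Forward pass.
    h = np.tanh(x_train @ W1 + b1)      # shape (5, 32)
    y_hat = h @ W2 + b2                 # shape (5, 1)
    err = y_hat - y_train

    # Backward pass for mean squared error.
    grad_y = 2.0 * err / len(x_train)
    grad_W2 = h.T @ grad_y
    grad_b2 = grad_y.sum(axis=0)
    grad_h = grad_y @ W2.T
    grad_pre = grad_h * (1.0 - h**2)    # derivative of tanh
    grad_W1 = x_train.T @ grad_pre
    grad_b1 = grad_pre.sum(axis=0)

    # In-place gradient descent step on every parameter.
    for p, g in ((W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)):
        p -= lr * g

# The fitted function runs close to the training points and interpolates
# smoothly between them -- curve fitting, not animal husbandry.
x_test = np.linspace(-1.0, 1.0, 9).reshape(-1, 1)
y_test = np.tanh(x_test @ W1 + b1) @ W2 + b2
print(np.c_[x_test, y_test].round(3))
```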
… I’m still not communicating the thing, am I? You know what? Forget it. Don’t read this review and don’t read this post. Read Prince 2023 or Bishop and Bishop 2024. Read textbooks!