Great stuff Jeremy!
Two basic comments:
1. Classical Learning Theory is flawed and predicts that neural networks should overfit when they don’t.
The correct way to understand this is through the lens of singular learning theory.
2. Quantilizing agents can actually be reflectively stable. There’s work by Diffractor (Alex Appel) on this topic that should become public soon.
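For readers unfamiliar with the term: a quantilizer, rather than maximizing expected utility, samples its action from the top-q fraction (by utility) of some trusted base distribution. Here is a minimal sketch of that idea; the function name, uniform default weights, and the greedy mass-accounting loop are all illustrative choices, not taken from Diffractor's work.

```python
import random

def quantilize(actions, utility, q, base_weights=None, rng=random):
    """Sample an action from the top-q fraction of the base
    distribution, ranked by utility (a sketch of the quantilizer idea)."""
    if base_weights is None:
        # Assume a uniform base distribution if none is given.
        base_weights = [1.0] * len(actions)
    # Rank actions by utility, best first.
    ranked = sorted(zip(actions, base_weights),
                    key=lambda aw: utility(aw[0]), reverse=True)
    # Keep base-measure mass q from the top of the ranking.
    total = sum(w for _, w in ranked)
    budget = q * total
    kept, kept_w = [], []
    for action, w in ranked:
        take = min(w, budget)
        if take <= 0:
            break
        kept.append(action)
        kept_w.append(take)
        budget -= take
    # Sample within the kept top-q slice, proportional to base mass.
    return rng.choices(kept, weights=kept_w, k=1)[0]
```

With q = 1 this reduces to sampling from the base distribution; as q shrinks it interpolates toward pure maximization, which is where the reflective-stability question gets interesting.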
Thanks!
I think it’s more accurate to say it’s incomplete. And the standard generalization bound math doesn’t make that prediction as far as I’m aware; it’s just the intuitive version of the theory that does. I’ve been excited by the small amount of singular learning theory stuff I’ve read. I’ll read more, thanks for making that page.
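To make the distinction concrete (a standard uniform-convergence bound, offered as illustration rather than as anything specific to this thread): for a hypothesis class $\mathcal{H}$ with VC dimension $d$ and $n$ i.i.d. samples, with probability at least $1-\delta$, every $h \in \mathcal{H}$ satisfies

$$L(h) \;\le\; \hat{L}(h) \;+\; \sqrt{\frac{d\left(\ln(2n/d) + 1\right) + \ln(4/\delta)}{n}}.$$

The bound is vacuous when $d \gg n$, the regime over-parameterized networks occupy, but vacuous is not the same as predicting overfitting; the "should overfit" claim comes from the intuitive reading, not from the math itself.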
Fantastic!