Nitpick: the first AlphaGo was trained with a combination of supervised learning from human expert games and reinforcement learning from self-play. Also, Ke Jie was beaten by AlphaGo Master, which was a version at a later stage of development.
Yes, my original comment wasn’t clear about this, but your nitpick is actually a key part of what I’m trying to get at.
Usually, you start with imitation learning and tack on RL at the end. That’s what AlphaGo is. It’s what predecessors to Dreamer-V3 like VPT are. It’s what current reasoning models are.
But then, eventually, you figure out how to bypass the imitation learning/behavioral cloning part and do RL from the start. Human priors serve as a temporary bootstrapping mechanism until we develop approaches that can learn effectively from scratch.
>Human priors serve as a temporary bootstrapping mechanism until we develop approaches that can learn effectively from scratch.
I would argue instead that human priors serve as a mechanism to help the search process, as is being shown with cold-started reasoning models: they bake in some reasoning traces that the model can then learn to exploit via RL. While this is not very bitter-lesson-esque, the solution space is so large that it would probably be quite difficult to learn effective reasoning without the cold-start phase (although R1-Zero kind of hints at this being possible). Maybe we just haven't thrown enough compute at the problem to do this search from scratch effectively.
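To make the two-phase recipe we're both describing concrete, here's a minimal, hypothetical sketch (PyTorch-style, not any lab's actual pipeline): phase 1 is behavioral cloning on demonstrations (the "cold start"), phase 2 is a bare-bones policy gradient on reward, and the R1-Zero / "from scratch" question is just whether you can delete phase 1 and still make the search tractable. `ToyPolicy`, `get_demonstrations`, and `run_episode` are made-up names for illustration.

```python
# Minimal sketch of the cold-start-then-RL pattern discussed above.
# Not any lab's real code; names and episode/demo formats are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyPolicy(nn.Module):
    """Tiny categorical policy over a discrete action space."""
    def __init__(self, obs_dim=8, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions)
        )

    def forward(self, obs):
        return self.net(obs)  # action logits


def behavioral_cloning(policy, demos, epochs=10, lr=1e-3):
    """Phase 1 ("cold start"): supervised learning on (obs, expert_action) batches."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        for obs, expert_action in demos:  # obs: (B, obs_dim), expert_action: (B,) long
            loss = F.cross_entropy(policy(obs), expert_action)
            opt.zero_grad()
            loss.backward()
            opt.step()


def reinforce(policy, run_episode, iterations=1000, lr=1e-4):
    """Phase 2 (or the whole recipe, R1-Zero style): optimize reward directly."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(iterations):
        log_probs, reward = run_episode(policy)          # roll out current policy
        loss = -(reward * torch.stack(log_probs).sum())  # REINFORCE objective
        opt.zero_grad()
        loss.backward()
        opt.step()


# The thread's question, in code: how much does skipping phase 1 hurt?
# policy = ToyPolicy()
# behavioral_cloning(policy, get_demonstrations())  # comment out for "from scratch"
# reinforce(policy, run_episode)
```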