All this seems relevant, but there’s still the fact that a human’s Elo at Go or chess will improve much more from playing 1000 games (and no more) than an AI’s will from playing 1000 games. That’s suggestive of some property (learning, or reflection, or conceptualization, or generalization, or something) that the AIs seem to lack, but can compensate for with brute force.
So for the case of our current RL game-playing AIs not learning much from 1000 games—sure, the actual game-playing AIs we have built don’t learn games as efficiently as humans do, in the sense of “from as little data.” But:
Learning from as little data as possible hasn’t actually been a research target, because self-play data is so insanely cheap. So it’s hard to conclude that our current setup for AIs is seriously lacking here; there simply hasn’t been serious effort to push along this axis.
To point out some areas we could be pushing on, but aren’t: game-play networks are usually something like ~100x smaller than LLMs, which are themselves ~10-100x smaller than human brains (very approximate numbers). We know from numerous works that data efficiency scales with network size, so even if Adam over matmul were 100% as efficient as human brain matter, we’d still expect our current RL setups to do amazingly poorly on data efficiency simply because of network size, even leaving aside further issues like the lack of hyperparameter search and research effort.
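To make the compounding explicit, here’s a minimal back-of-envelope sketch. The parameter counts are purely illustrative assumptions chosen to track the rough ratios above, not measurements of any real system:

```python
# Back-of-envelope arithmetic for the size gaps described above.
# All counts are illustrative assumptions tracking the rough "~100x" and
# "~10-100x" ratios in the comment, not measurements of any real system.

gameplay_net = 1e8         # assumed parameter count for a game-playing network
llm = 100 * gameplay_net   # LLMs taken as roughly 100x larger
brain_low = 10 * llm       # human brain taken as roughly 10-100x larger than an LLM
brain_high = 100 * llm

print(f"LLM vs game-play net:   {llm / gameplay_net:,.0f}x")
print(f"Brain vs game-play net: {brain_low / gameplay_net:,.0f}x to {brain_high / gameplay_net:,.0f}x")

# If data efficiency improves with network size, the compounded ~1,000-10,000x
# size gap alone predicts much worse sample efficiency for current game-playing
# RL setups, before even considering optimizer or architecture differences.
```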
Given this, while the data-efficiency gap is of course a consideration, it seems far from a conclusive one.
Edit: Or more broadly, again: different concepts of “intelligence” will tend to have different areas where they seem to have more predictive use, and different areas where they seem to have more epicycles. The areas above are the kind of thing that, if one made them central to one’s notion of intelligence rather than peripheral, would probably lead to something different than the LW notion. But again, they certainly do not compel one to do that refactor! It probably wouldn’t make sense to attempt the refactor unless you just keep getting the feeling “this is really awkward / seems off / doesn’t seem to be getting at some really important stuff” while using the non-refactored notion.