Interesting point about the scaling hypothesis. My initial take was that this was a slightly bad sign for natural abstractions: Go has a small set of fundamental abstractions, and this attack sure makes it look like KataGo didn’t quite learn some of them (liberties and capturing races), even though it was trained on however many million games of self-play and has some customizations designed to make those specific things easier. Then again, we care about Go exactly because it resisted traditional AI for so long, so maybe those abstractions aren’t as natural in KataGo’s space as they are in mine, and some other, more generally-useful architecture would be better behaved.
Definitely we’re missing efficient lifelong learning, and it’s not at all clear how to get there from current architectures.