One could argue that Go engines instantly went from “can’t serve as good opponents to train against” to “vastly outstripping the ability of any human to serve as a training opponent” in a similar way.
This is still not true. In 2011, Zen was already 5 (amateur) dan, which is better than the vast majority of hobbyists, and I’ve known people to use Zen as a training opponent. I think by 2014 it was already useful as a training partner even for people preparing for their professional certification.
And even at the professional level, ‘instantly’ is still an exaggeration. AlphaGo defeated the professional Go player and European champion Fan Hui in October 2015, and Lee Sedol still said at the time that he could defeat AlphaGo, and I think he was probably right. It took another half year, until March 2016, for Lee Sedol to play against AlphaGo. AlphaGo won, but still didn’t vastly outstrip human ability: Lee Sedol still won one of the five games.
(Also, this is nitpicking, but if you restrict the question to a computer serving as a training partner in Go, then I’m not sure that even now computers vastly outstrip human ability. There are advantages to training against the best Go programs, but I don’t think they are that vast; most of the variance is still in how the student is doing, and I’m pretty sure that professional players still regularly train against other humans too.)
Another important point here is that if there had been substantial economic incentive to build strong Go players, powerful Go players would have been built earlier, and the time between players of those two levels would probably have been longer.
AlphaGo defeated the professional Go player and European champion Fan Hui in October 2021, and Lee Sedol still said at the time that he could defeat AlphaGo, and I think he was probably right. It took another half year, until March 2016.
Is the first October date supposed to be an earlier date (before March 2016), or am I completely misreading this sentence?
Because this is the history of computer Go, with fifty years added on to each date. In 1997, the best computer Go program in the world, Handtalk, won NT$250,000 for performing a previously impossible feat – beating an 11 year old child (with an 11-stone handicap penalizing the child and favoring the computer!) As late as September 2015, no computer had ever beaten any professional Go player in a fair game. Then in March 2016, a Go program beat 18-time world champion Lee Sedol 4-1 in a five game match. Go programs had gone from “dumber than children” to “smarter than any human in the world” in eighteen years, and “from never won a professional game” to “overwhelming world champion” in six months.
Sorry, I made a typo, the Fan Hui match was in 2015, I have no idea why I wrote 2021.
I think what you’ve written here is compatible with Scott Alexander’s summary in the Superintelligence FAQ (quoted above), but double-checking whether you think it is accurate:
I think Scott’s description is accurate, though it leaves out the years from 2011 to 2015, when AIs were around the level of the strongest amateurs, which makes the progress look more discontinuous than it was.