Yet, at no point during this development did any single project leap forward by a huge margin. Instead, each paper built on the last with minor improvements while scaling up the compute involved. Because these minor improvements nonetheless arrived rapidly, GANs developed quickly relative to human lifetimes.
Does anyone have time series data on the effectiveness of Go-playing AI? Does that similarly follow a gradual trend?
AlphaGo seems much closer to “one project leaps forward by a huge margin.” But maybe I’m mistaken about how big an improvement AlphaGo was over previous Go AIs.
In the wake of AlphaGo’s victory against Fan Hui, much was made of the purported suddenness of this victory relative to expected computer Go progress. In particular, people at DeepMind and elsewhere have made comments to the effect that experts didn’t think this would happen for another decade or more. One person who said such a thing is Remi Coulom, designer of CrazyStone, in a piece in Wired magazine. However, I’m aware of no rigorous effort to elicit expert opinion on the future of computer Go, and it was hardly unanimous that this milestone was that long off. I and others, well before AlphaGo’s victory was announced, said on Twitter and elsewhere that Coulom’s pessimism wasn’t justified. Alex Champandard noted that at a gathering of game AI experts a year or so ago, it was generally agreed that Go AI progress could be accelerated by a concerted effort by Google or others. At AAAI last year [2015], I also asked Michael Bowling, who knows a thing or two about game AI milestones (having developed the AI that essentially solved limit heads-up Texas Hold Em), how long it would take before superhuman Go AI existed, and he gave it a maximum of five years. So, again, this victory being sudden was not unanimously agreed upon, and claims that it was long off are arguably based on cherry-picked and unscientific expert polls. [...]
Hiroshi Yamashita extrapolated the trend of computer Go progress as of 2011 into the future and predicted a crossover point to superhuman Go in 4 years, which was one year off. In recent years, there was a slowdown in the trend (based on highest KGS rank achieved) that probably would have led Yamashita or others to adjust their calculations if they had redone them, say, a year ago, but in the weeks leading up to AlphaGo’s victory, again, there was another burst of rapid computer Go progress. I haven’t done a close look at what such forecasts would have looked like at various points in time, but I doubt they would have suggested 10 years or more to a crossover point, especially taking into account developments in the last year. Perhaps AlphaGo’s victory was a few years ahead of schedule based on reported performance, but it should always have been possible to anticipate some improvement beyond the (small team/data/hardware-based) trend based on significant new effort, data, and hardware being thrown at the problem. Whether AlphaGo deviated from the appropriately-adjusted trend isn’t obvious, especially since there isn’t really much effort going into rigorously modeling such trends today. Until that changes and there are regular forecasts made of possible ranges of future progress in different domains given different effort/data/hardware levels, “breakthroughs” may seem more surprising than they really should be.
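To make the kind of extrapolation Brundage describes concrete, here is a minimal sketch of a crossover forecast, assuming made-up Elo-equivalent ratings for top computer Go programs and a hypothetical rating for top human professionals (none of these numbers are Yamashita’s actual data):

```python
# Illustrative crossover-point forecast in the spirit of Yamashita's 2011
# extrapolation. All ratings below are invented for illustration only.
import numpy as np

years = np.array([2007, 2008, 2009, 2010, 2011], dtype=float)
bot_rating = np.array([1800, 1950, 2150, 2300, 2500], dtype=float)  # hypothetical
human_pro_rating = 3500.0  # hypothetical benchmark for top professionals

# Fit a straight line (rating ~ slope * year + intercept) to the bot trend.
slope, intercept = np.polyfit(years, bot_rating, deg=1)

# Solve slope * year + intercept = human_pro_rating for the crossover year.
crossover_year = (human_pro_rating - intercept) / slope

print(f"Fitted gain: {slope:.0f} rating points per year")
print(f"Projected crossover with top humans: ~{crossover_year:.1f}")
```

The point of the sketch is only that such a forecast is sensitive to which years you fit on: refitting after a slowdown (or after the burst of progress just before AlphaGo) shifts the projected date by years, which is why Brundage argues these trends need regular, rigorous re-forecasting.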
AlphaGo seems much closer to “one project leaps forward by a huge margin.”
I don’t have the data on hand, but my impression was that AlphaGo indeed represented a discontinuity in the domain of Go. It’s difficult to say why this happened, but my best guess is that DeepMind invested a lot more money into solving Go than any competing actor at the time. Therefore, the discontinuity may have followed straightforwardly from a background discontinuity in attention paid to the task.
If this hypothesis is true, I don’t find it compelling that AlphaGo is evidence for a discontinuity in AGI, since such funding gaps are likely to be much smaller for economically useful systems.
The following is mostly a nitpick / my own thinking through of a scenario:
If this hypothesis is true, I don’t find it compelling that AlphaGo is evidence for a discontinuity in AGI, since such funding gaps are likely to be much smaller for economically useful systems.
If there is no fire alarm for general intelligence, it’s not implausible that there will be a similar funding gap for useful systems. Currently, there are very few groups explicitly aiming at AGI, and of those groups DeepMind is by far the best funded.
If we are much nearer to AGI than most of us suspect, we might see the kind of funding differential exhibited in the Go example for AGI, because the landscape of people developing AGI will look a lot closer to that of AlphaGo (only one group trying seriously) than to the one for GANs (many groups making small iterative improvements on each other’s work).
Overall, though, I find this story pretty implausible. It would mean that there is a capability cliff very nearby in ML design space, somehow, and that the cliff is so sharp as to be basically undetectable right until someone has gotten to the top of it.
Does anyone have time series data on the effectiveness of Go-playing AI? Does that similarly follow a gradual trend?
AlphaGo seems much closer to “one project leaps forward by a huge margin.” But maybe I’m mistaken about how big an improvement AlphaGo was over previous Go AIs.
Miles Brundage argues that “it’s an impressive achievement, but considering it in this larger context should cause us to at least slightly decrease our assessment of its size/suddenness/significance in isolation”.
Some skepticism from Eliezer here: https://twitter.com/ESRogs/status/1337869362678571008