Did Google actually say how long it took to train Alpha Go? In any case, even if it took a week or less, that is not strong evidence that an AGI could go from knowing nothing to knowing a reasonable amount in a week. It could easily take months, even if it would learn faster than a human being. You need to learn a lot more for general intelligence than to play Go.
Did Google actually say how long it took to train Alpha Go?
They did. In the methodology section they give an exact breakdown of the wallclock time for each training step (I excerpted it in the original discussion here or on Reddit); it was something like 5 weeks total, IIRC. Given the GPU counts on the various steps, that translates to something like 2 years on a regular laptop GPU, so the parallelization really helped. I don’t know what the limit on parallelization for reinforcement learning is, but note the recent DeepMind paper establishing that you can throw away experience replay entirely if you go all-in on parallelization (since at least one copy will tend to be playing something relevant while the others explore, preventing catastrophic forgetting), so who knows what one could do with 1k GPUs or a crazy setup like that?
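A quick back-of-envelope check of those two figures (both the 5-week and 2-year numbers are rough recollections from the comment above, not exact values from the paper):

```python
# Back-of-envelope: implied parallel speedup from the (rough, recalled)
# wallclock time vs the single-laptop-GPU-equivalent time.
wallclock_weeks = 5        # assumed: ~5 weeks total wallclock, per the comment
laptop_equiv_years = 2     # assumed: ~2 years on one laptop GPU, per the comment

wallclock_days = wallclock_weeks * 7       # 35 days
laptop_equiv_days = laptop_equiv_years * 365  # 730 days

speedup = laptop_equiv_days / wallclock_days
print(f"Implied parallel speedup: ~{speedup:.0f}x")  # ~21x
```

So even these rough numbers imply only a ~20x effective speedup over a single laptop GPU, which leaves a lot of headroom if parallelization scales further.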
The answer is “mine Bitcoin in the pre-FPGA days” :-)
This year Nvidia is releasing its next generation of GPUs (Pascal) which is supposed to provide a major speed-up (on the order of 10x) for neural net applications.
First, it at least establishes a lower bound. If an AI can learn the basics of English in a day, it has that much of a head start over humans. Even if mastering the rest of the language takes longer, you can cut roughly 3 years off the training time, and presumably the rest can be learned at a rapid pace as well.
It also establishes that an AI can teach itself specialized skills very rapidly. Today it learns the basics of language, tomorrow the basics of programming, the day after vision, and then it can learn nanotechnology engineering, etc. This is an ability far beyond what humans can do, and would give it a huge advantage.
Finally, even if it takes months, that’s still FOOM. I don’t know where the cutoff point is, but anything that advances at that pace is dangerous. It’s very different from the alternative “slow takeoff” scenarios where AI takes years and years to reach superhuman level.