Note that DeepMind’s two big successes (Atari and Go) come from scenarios that are perfectly simulable in a computer. That means they can generate an arbitrarily large number of data points to train their massive neural networks. Almost all real-world ML problems, by contrast, have strict limits on the amount of training data available.
That is true. However, since those papers were released, they’ve published results demonstrating learning from only a handful of samples in certain contexts by using specialized memory networks, which seem to be more analogous to human memory.
I’m not sure this is true. The internet contains billions of hours of video, trillions of images, and libraries’ worth of text. If they can use unsupervised, semi-supervised, or weakly-supervised learning, they could take advantage of nearly limitless data. And neural networks are well suited to this: they can learn features on one task and then transfer them to another.
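A minimal numpy sketch of that transfer idea (not any specific DeepMind system): learn features from plentiful unlabeled data, here crudely via PCA as a stand-in for a real unsupervised objective like an autoencoder, then classify a downstream task from only a few labeled examples in that feature space. The data-generating setup is entirely made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretraining" data: lots of unlabeled 50-D points whose variance
# lives in a 2-D latent subspace.  A real system would learn features
# with an autoencoder or self-supervised loss; PCA is the toy stand-in.
basis = rng.normal(size=(2, 50))  # 2 hidden latent directions in 50-D
unlabeled = rng.normal(size=(1000, 2)) @ basis \
    + 0.1 * rng.normal(size=(1000, 50))

# Learn features once from unlabeled data (top principal components).
_, _, vt = np.linalg.svd(unlabeled - unlabeled.mean(0),
                         full_matrices=False)
features = vt[:2]                 # the learned "feature extractor"

def extract(x):
    return x @ features.T

# Downstream task: classes separated along the first latent direction,
# but only 5 labeled examples per class.
def sample(label, n):
    z = rng.normal(size=(n, 2))
    z[:, 0] += 3.0 if label else -3.0
    return z @ basis + 0.1 * rng.normal(size=(n, 50))

train_x = np.vstack([sample(0, 5), sample(1, 5)])
train_y = np.array([0] * 5 + [1] * 5)

# Nearest-centroid classifier in the transferred feature space.
centroids = np.array([extract(train_x[train_y == c]).mean(0)
                      for c in (0, 1)])

def predict(x):
    d = ((extract(x)[:, None, :] - centroids[None]) ** 2).sum(-1)
    return d.argmin(-1)

test_x = np.vstack([sample(0, 100), sample(1, 100)])
test_y = np.array([0] * 100 + [1] * 100)
accuracy = (predict(test_x) == test_y).mean()
```

Ten labels would be hopeless for learning 50 input dimensions from scratch; with features pretrained on the unlabeled pool, a trivial classifier suffices.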
DeepMind has also published a paper on approximate Bayesian learning of neural-net parameters. That would make networks much better at learning from limited amounts of data, instead of overfitting.
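To see why a Bayesian treatment of the parameters helps with small data, here is a sketch using exact Bayesian linear regression (a stand-in for the approximate schemes used for deep nets, which aren’t tractable in closed form): a Gaussian prior shrinks the weights relative to maximum likelihood, and the posterior predictive variance grows away from the data instead of staying overconfident. All the numbers (noise level, prior precision, polynomial degree) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# A handful of noisy samples from a 1-D function.
n = 8
x = np.sort(rng.uniform(-1, 1, n))
y = np.sin(2.5 * x) + 0.1 * rng.normal(size=n)

def design(x, degree=7):
    # Polynomial features: deliberately over-parameterised for 8 points.
    return np.vander(x, degree + 1, increasing=True)

X = design(x)
sigma2 = 0.1 ** 2   # assumed observation-noise variance
alpha = 1.0         # precision of the Gaussian prior over weights

# Maximum-likelihood weights (ordinary least squares): interpolates
# the noise, with huge coefficients.
w_mle, *_ = np.linalg.lstsq(X, y, rcond=None)

# Posterior over weights under the prior N(0, alpha^-1 I):
#   S = (alpha I + X^T X / sigma^2)^-1,   m = S X^T y / sigma^2
S = np.linalg.inv(alpha * np.eye(X.shape[1]) + X.T @ X / sigma2)
w_map = S @ X.T @ y / sigma2

# The prior shrinks the weights — that is what tames overfitting.
shrunk = np.linalg.norm(w_map) < np.linalg.norm(w_mle)

# Predictive variance sigma^2 + phi^T S phi grows far from the data.
def pred_var(x_new):
    phi = design(np.atleast_1d(x_new))[0]
    return sigma2 + phi @ S @ phi

uncertain_far = pred_var(3.0) > pred_var(0.0)
```

The point of the approximate methods is to get this same shrink-and-know-what-you-don’t-know behaviour for millions of neural-net weights, where the exact posterior is out of reach.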
Anyway, deep nets are not really going to take over from traditional ML methods; rather, they open up a whole new set of problems that traditional methods can’t handle, like processing audio and video data, or reinforcement learning.
On the other hand, it’s simple to construct AI-complete problems for which you can generate unlimited training data.