What makes recent “deep learning” progress interesting to me is that traditionally there’s been a sort of paradox in AI: things we might naively think of as impressive achievements of the human intellect (e.g., grandmaster-level chess) turned out to be much easier to get computers to do than things we take for granted because even averagely intelligent children do them without much trouble (e.g., looking at a cat and saying “cat”) -- and deep neural networks seem (not hugely surprisingly, perhaps) to be a good approach to some of those.
That doesn’t, of course, mean that deep NNs + good old-fashioned AI = human-level intelligence. There are still what seem like important gaps that no one has very good ideas how to fill. But it does seem like one gap is getting somewhat filled.