But a year before the author made this prediction:
My impression from this exercise is that it will be hard to go above 80%, but I suspect improvements might be possible up to range of about 85-90%, depending on how wrong I am about the lack of training data.
And then 4 years later:
2015 update: Obviously this prediction was way off, with state of the art now at 95%, as seen in this Kaggle competition leaderboard. I’m impressed!
A few percent is a huge deal on a machine learning benchmark, because each additional percentage point is exponentially harder to gain than the previous one.
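One way to see this (my own framing, not from the thread): each percentage point of accuracy removes a larger fraction of the *remaining* errors, so later points require eliminating a bigger share of what little error is left. A minimal sketch:

```python
def relative_error_reduction(acc_before: float, acc_after: float) -> float:
    """Fraction of the remaining errors eliminated by an accuracy improvement."""
    err_before = 1.0 - acc_before
    err_after = 1.0 - acc_after
    return (err_before - err_after) / err_before

# Going from 80% to 81% removes 5% of the remaining errors...
print(round(relative_error_reduction(0.80, 0.81), 3))  # 0.05
# ...but going from 95% to 96% removes 20% of the errors that are left.
print(round(relative_error_reduction(0.95, 0.96), 3))  # 0.2
```

The same one-point gain is a four-times-larger relative error reduction at 95% than at 80%, which is why leaderboard progress near the top is so hard-won.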
I’m not saying I think strong AI is really close. At least not just because RNNs are becoming more popular. But it’s worth noting that experts can underestimate progress just as easily as overestimate it.
If you’re being generous, you might take the apparent wide applicability of simple techniques and moderate-to-massive computing power as a sign (given that it’s the exact opposite of old-style approaches) that AGI might not be as hard as we think. It does match better with how brains work.
But this particular result is in no way a step towards AI, no. It’s one guy playing around with well-known techniques that are being used vastly more effectively elsewhere, e.g. in Google’s image labelling. This article should only shift your posteriors if you were unaware of previous work.
An interesting post, but I don’t know if it implies that “strong AI may be near”. Indeed, the author has written another post in which he says that we are “really, really far away” from human-level intelligence: https://karpathy.github.io/2012/10/22/state-of-computer-vision/.