Invest time in figuring out how to answer natural language questions on a desktop PC with 12 GB of RAM.
“Solve the hard problem before the easy precursor problem.”
If that wasn’t what you meant, I may have misunderstood you.
Evolution could only pull that lever a certain amount, which is why brain software is so impressive.
But we aren’t even up to using the kind of processing power that evolution used. Human-level reasoning in a machine will be impressive without regard to the physical characteristics of the machine it runs on. Once the problem is well-understood, we’ll get smaller and cheaper versions.
There’s a categorical difference between “try to find a reasonable solution” and “throw money at this until it’s no longer a problem,” and you’re acting like there isn’t. I already made exactly the same points you’re making, back in the OP, where I said:
I don’t mean to criticize Watson unduly; it certainly is an impressive engineering achievement and has generated a lot of good publicity and public interest in computing. The engineering feat is impressive if for no other reason than that it is the first accomplishment of this scale, and pioneering is always hard… future Watsons will be cheaper, faster, and more effective because of IBM’s great work on this.
But there’s a categorical difference between the two approaches. In my own field of computer vision, it’s like this: if you want to understand how face recognition works, you study the neuroscience of primate brains and come up with compact, efficient representations of the problem that can run in a manner similar to the way primates do it. If you just want to recognize faces right now, you concatenate every feature vector imaginable at every scale level that could conceivably be relevant, train 10,000 SVMs over a month, then use cross-validation and mutual information to reduce that down to a “lean” set of 2,000 SVMs. There you go: you’ve overfitted a solution that still leaves face recognition a total black box, and you’ve used orders of magnitude more resources and time to get it.
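For concreteness, here’s a minimal sketch of that brute-force recipe using scikit-learn, with toy data and far smaller numbers than the 10,000 SVMs above; the feature blocks, sizes, and pruning rule are all stand-ins I made up for illustration, not anyone’s actual pipeline:

```python
# Toy sketch of the brute-force approach: concatenate multi-scale
# features, train one SVM per feature block, then prune the ensemble
# using cross-validated accuracy and mutual information with the labels.
# All sizes here are stand-ins for the 10,000 -> 2,000 SVMs in the text.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)

n, n_scales, dim = 200, 4, 32
X_scales = [rng.normal(size=(n, dim)) for _ in range(n_scales)]  # "every scale level"
y = rng.integers(0, 2, size=n)                 # toy binary labels

# "Concatenate every feature vector imaginable."
X = np.hstack(X_scales)                        # shape (n, n_scales * dim)

# Train one SVM per feature block and score it by cross-validation.
svms, scores = [], []
for Xb in X_scales:
    clf = LinearSVC(max_iter=5000)
    scores.append(cross_val_score(clf, Xb, y, cv=5).mean())
    svms.append(clf.fit(Xb, y))

# Mutual information between raw features and labels: the other
# pruning signal mentioned above.
mi = mutual_info_classif(X, y, random_state=0)

# Keep the "lean" top fifth of the ensemble by CV score.
keep = np.argsort(scores)[-max(1, len(svms) // 5):]
lean_ensemble = [svms[i] for i in keep]
```

The point of the sketch is what it doesn’t contain: nothing about faces, just raw capacity plus pruning.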
It’s interesting that researchers who spent years on the primate brain / Barlow infomax principle idea, studying monkey face recognition at Caltech, and who couldn’t do good face recognition for years, are now blowing face.com and other proprietary face recognition software out of the water.
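For readers unfamiliar with that line of work, here is my own minimal gloss of the redundancy-reduction idea behind Barlow/infomax (not the Caltech group’s actual models): for a linear code with a fixed output variance budget, maximizing transmitted information roughly amounts to decorrelating the outputs, i.e. whitening:

```python
# Minimal illustration (my gloss, not the researchers' models) of the
# Barlow / infomax redundancy-reduction idea: a linear code y = Wx whose
# outputs are decorrelated, here via ZCA whitening of the input covariance.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8)) @ rng.normal(size=(8, 8))  # correlated "stimuli"

C = np.cov(X, rowvar=False)                          # input covariance
evals, evecs = np.linalg.eigh(C)
W = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T  # ZCA whitening map

Y = X @ W.T                                          # decorrelated, unit-variance code
assert np.allclose(np.cov(Y, rowvar=False), np.eye(8), atol=1e-6)
```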
There’s a categorical difference between actually attempting the hard problem, resorting to more resources only when you have to, and just scaling the whole thing up without ever attempting the hard problem. From what I know about natural language processing, machine learning, and Watson, Watson takes the latter approach, and its power and memory consumption reveal it to be quite unimpressive… though hopefully trying to miniaturize it will spawn interesting engineering research.
Yeah, I read them at different times, and missed that.