Recently came across Valiant’s A Theory of the Learnable. Basically, it covers a method of machine learning along the following lines: given a collection of objects which either possess some property P or do not, you can teach a machine to recognize P with arbitrarily small error simply by presenting it with randomly selected objects and saying whether they possess P. The error is one-sided: the learner may give false negatives, but will not give a false positive. Perhaps the following passage best illustrates the concept:

Consider a world containing robots and elephants. Suppose that one of the robots has discovered a recognition algorithm for elephants that can be meaningfully expressed in k-conjunctive normal form. Our Theorem A implies that this robot can communicate its algorithm to the rest of the robot population by simply exclaiming “elephant” whenever one appears.

The mathematics is done in terms of Boolean functions, and “k-conjunctive normal form” is a technical condition: the concept must be expressible as a conjunction of clauses, each containing at most k literals.
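To make the idea concrete, here is a toy sketch in the spirit of Valiant’s elimination algorithm, for the simplest case of a 1-CNF (a plain conjunction of literals). The target concept, variable names, and sample sizes are my own illustration, not from the paper; the point is only the one-sided error.

```python
import random

N = 6  # number of Boolean attributes in our toy world

def target(x):
    # The unknown concept: attribute 0 is true AND attribute 2 is false.
    # (Purely illustrative -- the learner never sees this definition.)
    return x[0] and not x[2]

def random_positive_example():
    # Oracle for positive examples: someone shouts "elephant"
    # whenever a randomly drawn object satisfies the concept.
    while True:
        x = tuple(random.choice((True, False)) for _ in range(N))
        if target(x):
            return x

# Start with every literal (x_i and not-x_i, for each i) as a candidate...
literals = {(i, polarity) for i in range(N) for polarity in (True, False)}

# ...and delete any literal falsified by a positive example.
for _ in range(200):
    x = random_positive_example()
    literals = {(i, pol) for (i, pol) in literals if x[i] == pol}

def hypothesis(x):
    # The surviving conjunction. Its error is one-sided: it never
    # accepts a non-elephant, though early on it may reject an elephant.
    return all(x[i] == pol for (i, pol) in literals)

# Spot-check the one-sided guarantee: everything accepted is a true positive.
for _ in range(1000):
    x = tuple(random.choice((True, False)) for _ in range(N))
    assert not hypothesis(x) or target(x)
```

Note that the learner ends up with a list of surviving literals, not with the target’s definition; it merely becomes very unlikely, after enough examples, to disagree with it.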

What struck me was that the learning could take place without the learner knowing the definition of the concept to be learned: a thing could be identified with probability arbitrarily close to 1 without the learner ever being able to formulate a definition. I was reminded of Justice Potter Stewart, who said that he could not define pornography, but he knew it when he saw it. There are plenty of other concepts I can think of where identification is easy (most of the time at least) but which defy precise definition.

I’m usually wary of applying scientific results to philosophy, especially where I’m not an expert. Any expert input on whether this is a fair interpretation of the subject would be appreciated.

I’ve yet to read the paper, so forgive me if this is addressed there (I will read it), but the situation is similar to NP problems: hard to find a solution, easy to check one. Maybe concepts are NP problems encoded in our brains.

EY made a whole series of posts about this idea: A Human’s Guide to Words. See in particular The Cluster Structure of Thingspace.
