Let’s Talk About Intelligence

I’m writing this because I’ve noticed, for a while now, that I am confused, particularly about what people mean when they say someone is intelligent. I’m more interested in starting a discussion than in making a formal case, so please excuse the lack of citations. I’m also trying to articulate my own confusion to myself as much as to everyone else, so this will not be as focused as it could be.

If I had to name a starting point for this state, I’d say it was in psych class, where we talked about research presented by Eysenck and Gladwell. Eysenck is careful to define intelligence as the ability to solve abstract problems, but not necessarily the motivation to do so. In many ways, this matches Yudkowsky’s definition, which treats intelligence as a property we can ascribe to an entity, letting us predict that the entity will be able to complete a task without ourselves necessarily understanding the steps toward completion.

The central theme I’m confused about is the generality of the concept: are we really saying that there is a general algorithm or class of algorithms that will solve most or all problems to within a given distance from optimum?

Let me give an example. Depending on which test you use, an autistic person can score in the severely impaired range, yet show ‘islands’ of remarkable ability, sometimes up to genius level. The classic example is “Rain Man,” who is depicted as easily solving numerical problems most people don’t even understand, but having trouble tying his shoes. This is usually an exaggeration (by no means are all autistic people savants), and these island skills are hardly limited to math. The interesting point, though, is that even someone with many such islands can have an abysmally low overall IQ.

Some tests correct for this – Raven’s Progressive Matrices, for instance, presents increasingly complex patterns that you have to complete – and this tends to level out those islands and give an overall score that seems commensurate with the sheer genius found in some areas.

What I find confusing is why we’re correcting for this at all. Certainly, some people, given a task, can complete that task, and depending on the person, the task they can complete can be unfathomably complex. But do we really have the evidence to say that, in general, success depends only on the person’s overall ability, and not on the fit between the particular task and the particular person – or, more specifically, the algorithms they’re running? Is it reasonable to say that a person runs an algorithm that will solve all problems within some efficiency x (with respect to processing time and optimality of the solution)? Or should we be looking more closely for islands in neurological baselines as well?

Certainly, we could change the question and ask how efficient all the algorithms a person is running are, and from that give an average efficiency, which might serve as a decent rough estimate of how efficiently that person will solve a given problem. For some uses, this is exactly the information we’re looking for, and that’s fine. But as a general property of the people we’re studying, the measure seems insufficient.
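To make that concrete, here’s a toy sketch in Python. The per-domain ‘efficiency’ scores and the numbers themselves are entirely made up – the only point is that two profiles can share the same average efficiency while making very different predictions about any particular task.

```python
# Toy illustration with made-up numbers: per-domain "efficiency" scores in [0, 1].
# The only point: identical averages can hide very different ability profiles.
from statistics import mean

profiles = {
    "flat":    {"arithmetic": 0.60, "geometry": 0.60, "language": 0.60, "planning": 0.60},
    "islands": {"arithmetic": 0.98, "geometry": 0.95, "language": 0.25, "planning": 0.22},
}

for name, scores in profiles.items():
    avg = mean(scores.values())
    print(f"{name:8s} mean efficiency: {avg:.2f}")
    # The mean is a decent first guess for a task drawn at random...
    for task, score in scores.items():
        # ...but for any *specific* task the error can be large.
        print(f"    {task:10s} actual: {score:.2f}   error vs. mean: {score - avg:+.2f}")
```

With these invented numbers, both profiles average 0.60, so the average alone cannot tell you whether a geometry problem lands on an island or in a trough.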

If we’re trying to predict specific behavior, it seems useful to be aware of whatever ‘islands’ exist – for instance, the common split between algebraic and geometric approaches to math. In my experience, offering geometric explanations to someone with an algebraic bent may not be successful at all, but this tells us little about what we might think of as the person’s a priori probability of solving the problem: occasionally they solve it with no more than a few algebraic hints. This is hardly hard evidence, of course, but I think it points to what I’m getting at.

Looking at the specific algorithm being used (or perhaps the class of algorithm?) can be considerably more predictive of the outcome. Actually, I can’t even say that: looking at what could be a distinct algorithm can be considerably more predictive of the outcome. There are numerous explanations for these observations, one of which is, of course, that these are all the same algorithm, just trained on different inputs, and perhaps even constrained or aided by changes in local neural architecture (as some studies on the neurological correlates of autism might suggest). But computational power alone seems insufficient if we’re going to explain phenomena like the autistic ‘islands’. A savant doesn’t want for computational power – but in some areas, they can want for intelligence.

Here’s where I start getting confused: the research I’ve seen assumes intelligence is a single trait, which could be genetically, epigenetically, or culturally transmitted. When researchers look for correlates of intelligence, from what I’ve seen, they correlate against the ‘average’ intelligence score and largely disregard the ‘islands’ of ability. As I’ve said, this can be useful, but answering some of these questions seems necessary for a more general understanding of intelligence, especially once we get into the neurological side of things, whether in wetware or hardware.

Then again, there’s a good chance I’m missing something: in which case, I’d appreciate some help updating my priors.