Are Intelligence and Generality Orthogonal?

A common presupposition seems to be that intelligent systems can be classified on two axes:

  • Intelligence (low to high)

  • Generality (narrow to general)

For example, AlphaGo is presumably fairly intelligent, but quite narrow, while humans are both quite intelligent and quite general.

A natural hypothesis would be that these two axes are orthogonal, such that any combination of intelligence and generality is possible.

Surprisingly, I know of nobody who has explicitly spelt out this new orthogonality thesis, let alone argued for or against it.

(The original orthogonality thesis only states that level of intelligence and terminal goals are independent. It does not talk about the narrow/​general distinction.)

MIRI does not seem to be very explicit about this either. On Arbital there are no separate entries for the notions of intelligence and generality, and the article on General Intelligence is rather vague; it appears to mix the two notions together. What I find most surprising is that the article suggests chimpanzees are substantially less general than humans. But it seems to me that chimpanzees are merely less intelligent than humans, not less general by a relevant amount. Just as AlphaGo’s internal predecessor was presumably not less general than AlphaGo, just less intelligent.

What makes AlphaGo fairly narrow, apparently, is that it has a fairly small cognitive domain. I would even go so far as to argue that most animals are highly general agents, since they have what MIRI calls a real-world domain.

Evolutionarily speaking, constant optimization for high generality makes sense. Over several hundred million years of evolution, animals had to continuously survive in diverse, noisy, and changing environments, and they had to compete with other animals. Under such evolutionary pressures, narrow specialization would quickly be punished. So most animals have evolved to be very general, just not always very intelligent. (Insects, presumably, are an example of fairly high generality but quite low intelligence.)

One could argue against this by claiming that the new orthogonality thesis is false. But then it is not clear how the narrow/​general distinction is different from the low/​high intelligence distinction.

Matthew Barnett suggests to me that what we commonly mean by “intelligence” is really a multiplicative combination of generality and (what I called) intelligence:

That is, intuitively we would not call something “intelligent” unless it has a sufficient amount of both generality and (what I called) intelligence. Case in point: a very narrow AI may not seem intelligent, even if it is very good within its narrow domain. For such a multiplicative notion of intelligence, the new orthogonality thesis would not hold.
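One minimal way to formalize Barnett’s suggestion (the notation here is my own, not from his comment): write $g \in [0, 1]$ for an agent’s generality and $c \geq 0$ for its per-domain capability (what I called intelligence). The intuitive, everyday notion of intelligence would then be something like

$$ I_{\text{intuitive}} \;=\; g \cdot c $$

Under this reading, the new orthogonality thesis fails for $I_{\text{intuitive}}$, because an agent with $g \approx 0$ scores low no matter how large $c$ is, while it can still hold for $c$ on its own.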

But whatever we call it, there seems to be some notion of intelligence which is orthogonal to generality.

Or are there counterexamples?