How Smart Are Humans?

Epistemic status: free speculation

How intelligent should we expect AI to be, compared to humans, and how quickly should we expect it to reach that level of intelligence? These are important strategic questions, with a large impact on our AI threat models. At the moment, the most common answers are “much more intelligent” and “very quickly”. In this post, I will describe an arguably realistic scenario in which this would not be the case.

First of all, why should we expect AI to end up much more intelligent than humans? A common argument goes roughly like this: humans are much, much smarter than animals. We can build computers and send things to space; they can’t. This shows that the range of possible intelligence is very wide, and it would be prima facie very surprising if human intelligence were at the top of this range. Therefore, we should expect it to be possible for AI systems to get much smarter than humans. Moreover, there is no reason to think that AI progress would slow down around human-level intelligence in particular, so we should expect AI intelligence to quickly far outstrip our own.

This argument relies crucially on the assumption that humans are much smarter than animals. But is this actually true? I’m not entirely convinced. First of all, there have been “feral” humans who grew up surrounded by animals. As far as I know, these humans are not obviously much more intelligent than animals, in terms of their ability to solve problems. This already casts some doubt on the notion that humans are much, much smarter than animals.

It is important to remember that humans, unlike all other species, are able to use complex language. This is a huge confounding factor when we try to compare the intelligence of humans and animals. Being able to exchange complex ideas and build up knowledge intergenerationally is obviously very powerful. This alone would probably be enough to give humans a very large advantage, even if our intelligence were otherwise exactly the same as that of other primates.
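To make that intuition concrete, here is a toy model of cumulative culture. All the numbers (`lifetime_gain`, `retention`, the generation count) are made-up illustrative assumptions, not empirical estimates; the point is only that two populations learning at exactly the same rate within a lifetime can end up far apart if one of them can transmit knowledge across generations.

```python
# Toy model (illustrative assumptions only, not empirical claims).
# Each generation discovers `lifetime_gain` units of knowledge on its own.
# With language, a fraction `retention` of the previous generation's
# knowledge is also passed on; without language, nothing carries over.

def knowledge_over_generations(generations, lifetime_gain, retention):
    """Knowledge level reached by each successive generation."""
    levels = []
    inherited = 0.0
    for _ in range(generations):
        level = inherited + lifetime_gain  # inherited stock plus own discoveries
        levels.append(level)
        inherited = retention * level      # what survives to the next generation
    return levels

no_language = knowledge_over_generations(100, lifetime_gain=1.0, retention=0.0)
with_language = knowledge_over_generations(100, lifetime_gain=1.0, retention=0.95)

print(no_language[-1])    # 1.0: every generation restarts from scratch
print(with_language[-1])  # ~19.9, approaching lifetime_gain / (1 - retention) = 20
```

With these made-up numbers, the accumulated knowledge converges to lifetime_gain / (1 − retention), ie a roughly 20x gap, with zero difference in individual intelligence.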

Therefore, consider the following hypothesis: humans have an innate ability to use complex, combinatorial language, but all other species lack this ability (roughly as Noam Chomsky has argued). In addition, humans are also somewhat (but not hugely) more intelligent than other primates (say, 1.5x as intelligent).

As far as I can tell, this hypothesis roughly fits all our observations. However, if the difference between humans and other primates is mostly due to a one-shot, discrete difference (ie language), then that trick cannot necessarily be repeated to get a similar gain in intelligence a second time. If so, we should perhaps expect AI to still end up quite a bit more intelligent than humans, but not to an incomprehensible extent (ie, we end up with AI geniuses, but not AI gods).

There are obviously many ways this conclusion could be wrong, and many counter-arguments one could offer (eg, in most board games, AI has quickly gone from below human performance to far above it). However, I don’t know of any knock-down arguments, so I put some weight on something like this being true.