[Link] Word-vector based DL system achieves human parity in verbal IQ tests

A research team in China has created a system for answering verbal analogy questions of the type found on the GRE and IQ tests that scores a little above the average human score, perhaps corresponding to an IQ of around 105. This improves substantially on the previously reported SOTA in AI for these types of problems.

This work builds on deep word-vector embeddings, which have led to large gains in translation and many NLP tasks. One of their key improvements involves learning multiple vectors per word, where the number of word senses is simply taken from a dictionary. This is important because verbal analogy questions often use rarer word meanings. They also employ modules specialized for the different types of questions (see the sketch below).
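To make the multi-sense idea concrete, here is a minimal toy sketch, not the paper's actual method: each word gets one vector per dictionary sense, and an analogy is scored by taking the best-matching sense combination rather than a single averaged vector. All the words, senses, and 3-d vectors below are made up for illustration.

```python
import numpy as np

# word -> list of sense vectors; in the real system the number of senses
# per word comes from a dictionary rather than being hand-specified
senses = {
    "bank":    [np.array([1.0, 0.0, 0.0]),   # financial-institution sense
                np.array([0.1, 0.0, 1.0])],  # river-bank sense
    "money":   [np.array([0.9, 0.0, 0.1])],
    "library": [np.array([0.0, 1.0, 0.0])],
    "books":   [np.array([0.1, 0.9, 0.0])],
    "river":   [np.array([0.0, 0.0, 1.0])],
}

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy_score(a, b, c, d):
    """Score 'a is to b as c is to d' with the usual offset trick
    (b - a + c should point toward d), maximizing over all sense
    combinations of the four words so a rare sense can still be used."""
    return max(
        cosine(vb - va + vc, vd)
        for va in senses[a] for vb in senses[b]
        for vc in senses[c] for vd in senses[d]
    )

# "bank is to money as library is to ?" -- pick the best-scoring candidate
candidates = ["books", "river"]
scores = {d: analogy_score("bank", "money", "library", d) for d in candidates}
print(max(scores, key=scores.get), scores)  # expect "books" to win
```

The point of the max over senses is that an irrelevant sense (here the river-bank one) can't drag down the score the way it would if each word were collapsed into a single averaged vector.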

I vaguely remember reading that AI systems are already fairly strong at solving visual Raven's-matrix-style IQ questions, although I haven't looked into that in detail.

The multi-vector technique is probably the most important takeaway for future work.

Even if subsequent follow-up work reaches superhuman verbal IQ in a few years, this of course doesn't immediately imply AGI. These types of IQ tests measure specific abilities that are correlated with general intelligence in humans, but those abilities are only a small subset of the systems/abilities required for general intelligence, and probably rely on a smallish subset of the brain's circuitry.