[Question] Why Do People Think Humans Are Stupid?

Introduction

On more than one occasion, I’ve seen the following comparisons used to describe how a superintelligence might relate to/perceive humans:

  • Humans to ants

  • Humans to earthworms

  • And similar

More generally, people seem to believe that humans are incredibly far from the peak of attainable intelligence. And that’s not at all obvious to me.

Argument

I suspect that the median human’s cognitive capabilities are qualitatively closer to those of an optimal bounded superintelligence than to those of a honeybee. The human brain seems to be a universal learner. There are some concepts that no human can fully grasp, but those seem to be concepts that are too large to fit in a human’s working memory. And humans can overcome those working-memory limitations with a pen and paper, a smartphone, a laptop, or other technological aids.

There doesn’t seem to be anything that a sufficiently motivated, resourced, and intelligent human is incapable of grasping, given enough time. A concept that no human could ever grasp seems like a concept that no agent could ever grasp. If it’s computable, then a human can learn to compute it (even if they must do so with the aid of technology).

Somewhere in the progression from honeybee to human, there is a phase shift to a universal learner. Our use of complex language/mathematics/abstraction seems like a difference in kind of cognition. I do not believe there are any such differences in kind ahead of us on the way to a bounded superintelligence.

I don’t think “an agent whose cognitive capabilities are as far above humans as humans are above ants” is necessarily a well-defined, sensible or coherent concept. I don’t think it means anything useful or points to anything real.

I do not believe there are any qualitatively more powerful engines of cognition than the human brain (more powerful in the sense that a Turing machine is more powerful than a finite state machine). There are engines of cognition with better serial/parallel processing speed, larger working memories, faster recall, etc. But they don’t have a cognitive skill on the level of “use of complex language/symbolic representation” that we lack. There is nothing they can learn that we are fundamentally incapable of learning (even if we need technological aid to learn it).
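To make the finite-state-machine analogy concrete, here is a minimal Python sketch (the language aⁿbⁿ and the function below are my own illustration, not anything brain-specific). Recognising strings of the form aⁿbⁿ requires counting arbitrarily high, which no fixed set of states can do, but a program with unbounded memory handles it easily:

```python
# Sketch: the language { a^n b^n : n >= 0 } separates finite state machines
# from machines with unbounded memory. Any FSM with k states misclassifies
# some string with n > k (pumping lemma); an unbounded counter does not.

def recognises_anbn(s: str) -> bool:
    """Accept strings of the form a^n b^n using an unbounded counter."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:          # an "a" after a "b" is malformed
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:       # more b's than a's so far
                return False
        else:
            return False
    return count == 0

assert recognises_anbn("aaabbb")
assert not recognises_anbn("aaabb")
assert not recognises_anbn("abab")
```

The qualitative jump here is from bounded to unbounded memory. The claim in the paragraph above is that there is no analogous jump left between human cognition and a bounded superintelligence: the differences that remain are quantitative.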

The difference between a human and a bounded superintelligence is a difference of degree. It’s not at all obvious to me that superintelligences would be cognitively superior to sufficiently enhanced brain emulations.

I am not even sure the “human-chimpanzee gap” is a sensible notion for informing expectations of superintelligence. That seems to be a difference of kind I simply don’t think will manifest. Once you make the jump to universality, there’s nowhere higher to jump to.

Perhaps superintelligence is just an immensely smart human who also happens to be equipped with faster processing speeds, a much larger working memory, a longer attention span, etc.

Addenda

And even then, there are still fundamental constraints on attainable intelligence:

  1. What can be computed

    1. Computational tractability

  2. What can be computed efficiently

    1. Computational complexity

  3. Translating computation to intelligence

    1. Mathematical optimisation

    2. Algorithmic and statistical information theories

    3. Algorithmic and statistical learning theories

  4. Implementing computation within physics

    1. Thermodynamics of computation

      1. Minimal energy requirements (see the sketch after this list)

      2. Heat dissipation

      3. Maximum information density

    2. Speed of light limits

      1. Latency of communication

      2. Maximum serial processing speeds
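To put one concrete number on these constraints: the Landauer limit (item 4.1.1 above) gives the minimum energy needed to erase one bit of information at temperature T as kT·ln 2. A back-of-the-envelope sketch in Python (the constants are standard; the 20-watt figure is the commonly cited estimate for the brain’s power draw):

```python
import math

# Landauer limit: minimum energy to irreversibly erase one bit at
# temperature T is E = k * T * ln(2).
BOLTZMANN = 1.380649e-23  # J/K (exact SI value)
T_ROOM = 300.0            # K, roughly room/body temperature

energy_per_bit = BOLTZMANN * T_ROOM * math.log(2)
print(f"Landauer limit at {T_ROOM} K: {energy_per_bit:.3e} J per bit")
# ~2.871e-21 J per bit erased

# Ceiling on irreversible bit erasures per second for a 20 W budget
# (the commonly cited power draw of a human brain):
power_watts = 20.0
print(f"Max erasures at {power_watts} W: {power_watts / energy_per_bit:.2e} per second")
# ~6.97e+21 erasures per second
```

Real brains and real chips both sit many orders of magnitude above this floor per useful operation, which is the sense in which there is quantitative headroom.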

I do not think humans are necessarily quantitatively close to the physical limits (the brain is extremely energy efficient from a thermodynamic point of view, but it also runs at only 20 watts). AI systems could have much larger power budgets [some extant supercomputers draw tens of megawatts of power]. But I expect many powerful/useful/interesting cognitive algorithms to be NP-hard or to require exponential time. An underlying intuition (illustrated in the sketch below) is that the size of a search tree grows exponentially with each “step”, and that searching for a particular string grows exponentially with the string’s length. Search seems like a natural operationalisation of planning, and I expect it to feature in other cognitive skills (searching for efficient encodings, approximations, compressions, patterns, etc. may be how we generate abstractions and enrich our world models). So I’m also pessimistic about just how useful quantitative progress will turn out to be in practice.
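A minimal sketch of that exponential-search intuition (the branching factor of 10 is arbitrary; the shape of the growth, not the specific numbers, is the point):

```python
# Brute-force search over a tree with branching factor b to depth d must
# examine on the order of b**d nodes, so each extra planning step
# multiplies the total work by b.
branching_factor = 10

for depth in (5, 10, 15, 20, 30):
    nodes = branching_factor ** depth
    print(f"depth {depth:2d}: ~{nodes:.0e} nodes")

# depth  5: ~1e+05 nodes
# depth 10: ~1e+10 nodes
# depth 15: ~1e+15 nodes
# depth 20: ~1e+20 nodes
# depth 30: ~1e+30 nodes
```

Even a machine expanding 10^18 nodes per second would need about 100 seconds at depth 20 and about 30,000 years at depth 30, so constant-factor hardware improvements buy only a few extra steps of lookahead against exponential costs.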

Counterargument

There’s a common rebuttal along the lines that an ant is also a universal computer, and so can in theory compute any computable program.

The difference is that you cannot actually teach an ant how to implement universal models of computation. Humans, on the other hand, can be taught that (and indeed invented it of their own accord). Perhaps the hardware of an ant is a universal computer, but the ant’s software is not a universal learner. Human software is.