[Question] Are Human Brains Universal?

[Previously]

Introduction

After reading and updating on the answers to my previous question, I am still left unconvinced that the human brain is qualitatively closer to a chimpanzee’s (let alone an ant’s or earthworm’s) than it is to hypothetical superintelligences.

I suspect a reason behind my obstinacy is an intuition that human brains are “universal” in a sense that chimpanzee brains are not. So, you can’t really have other engines of cognition that are more “powerful” than human brains (in the way a Turing Machine is more powerful than a Finite State Automaton), only engines of cognition that are more effective/​efficient.

By “powerful” here, I’m referring to the class of “real world” problems that a given cognitive architecture can learn within a finite time.
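To anchor the Turing Machine/Finite State Automaton analogy, here is a minimal sketch (my own illustration, not part of the original argument) of the textbook power gap between the two: recognising strings of the form aⁿbⁿ requires an unbounded counter, which any Turing-complete machine has but no fixed finite state automaton does. The question is whether human brains and superintelligences are separated by a gap of that kind, or only by speed and efficiency.

```python
# Minimal illustration of the FSA-vs-Turing-Machine power gap: the language
# { a^n b^n : n >= 0 } needs unbounded memory (a counter), so no fixed finite
# state automaton can recognise it for arbitrary n, while any Turing-complete
# machine can. The function name and structure are my own illustrative choices.

def is_a_n_b_n(s: str) -> bool:
    """Return True iff s is n 'a's followed by the same number of 'b's."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:            # an 'a' after a 'b' breaks the a...ab...b shape
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:         # more 'b's than 'a's so far
                return False
        else:
            return False          # only 'a' and 'b' are allowed
    return count == 0

assert is_a_n_b_n("aaabbb") and not is_a_n_b_n("aabbb")
```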

Core Claim

Human civilisation can do useful things that chimpanzee civilisation is fundamentally incapable of:

  • Heavier than air flight

  • Launching rockets

  • High-fidelity long-distance communication

  • Etc.

There do not seem to be similarly useful things that superintelligences are capable of but humans are fundamentally incapable of: useful things that we could never accomplish within the expected lifetime of the universe.

Superintelligences seem like they would just be able to do the things we are already — in principle — capable of, but more effectively and/​or more efficiently.

Cognitive Advantages of Artificial Intelligences

I expect a superintelligence to be superior to humans quantitatively via:

  • Larger working memories

  • Faster clock cycles (5 GHz vs 0.1–2 Hz)

    • Faster thought? [1]

  • Larger attention spans

  • Better recall

  • Larger long term memories

(Given sufficient compute, each of the above could differ from the Homo sapiens brain by several orders of magnitude.)

And qualitatively via:

  • Parallel/​multithreaded cognition

    • The ability to simultaneously execute:

      • Multiple different cognitive algorithms

      • Multiple instances of the same cognitive algorithm

    • Here too, the AI may be able to maintain several orders of magnitude more simultaneous thoughts/cognitive threads than a human’s “one”

    • This may also be a quantitative difference, but it’s the closest thing to a qualitatively different kind of cognition exclusive to AIs that has been proposed so far (see the toy sketch just below)
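To make the parallel-cognition idea slightly more concrete, here is a toy software sketch (my own illustration; the hypothesis-scoring task and every name in it are hypothetical, and nothing here is a claim about how actual AI cognition would be implemented). It contrasts one serial stream of evaluation with many simultaneous instances of the same routine.

```python
# Hedged sketch: many instances of the same "cognitive" routine running at
# once, versus one serial stream of thought. The scoring task is a toy
# stand-in, not a model of real cognition.
from concurrent.futures import ProcessPoolExecutor

def evaluate_hypothesis(h: int) -> tuple[int, float]:
    """Stand-in for one 'thread of thought': score a candidate hypothesis."""
    return h, 1.0 / (1 + abs(h - 42))   # toy scoring rule

if __name__ == "__main__":
    hypotheses = range(1_000)

    # Serial: one evaluation at a time, like a single stream of attention.
    serial = [evaluate_hypothesis(h) for h in hypotheses]

    # Parallel: many instances of the same routine running simultaneously.
    with ProcessPoolExecutor(max_workers=8) as pool:
        parallel = list(pool.map(evaluate_hypothesis, hypotheses))

    print("best hypothesis:", max(parallel, key=lambda pair: pair[1]))
```

The analogy is only structural: conventional multiprocessing merely divides a fixed workload across workers, whereas the bullet above is speculating about genuinely simultaneous, interacting streams of thought.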

Cognitive Superiority of Artificial Intelligence

I think the aforementioned differences are potent and would confer a considerable advantage on the AI over humans. For example:

  • It could enable massively parallel learning, allowing the AI to attain immense breadth and depth of domain knowledge

    • The AI could become a domain expert in virtually every domain of relevance (or at least every domain of relevance to humans)

      • Given sufficient compute, the AI could learn millions of domains simultaneously

    • This would give it a cross-disciplinary perspective/​viewpoint that no human can attain

  • It could perform multiple cognitive processes at the same time while tackling a given problem

    • This may be equivalent to having n minds collaborating on a problem, but without any of the usual problems of collaboration: massively higher communication bandwidth and high-fidelity sharing of rich, complex cognitive representations (unlike the lossy transmissions of language)

    • It could simultaneously tackle every node of a well factorised problem

    • The inherent limitations of population intelligences may not apply to a single mind running n threads

  • Multithreaded thought may allow the AI to represent, manipulate and navigate abstractions that single-threaded brains cannot (within reasonable compute)

    • A considerable difference in what abstractions are available to them could constitute a qualitative difference

  • Larger working memory could allow it to learn abstractions too large to fit in human brains

  • The above may allow it to derive/​synthesise insights that human brains will never find in any reasonable time frame

Equivalent Power?

My intuition is that there will be problems that would take human mathematicians/scientists/philosophers centuries to solve, but which such an AI could probably get done in reasonable time frames. That’s powerful.

But it still doesn’t feel as large as the chimp-to-human gap. It feels like such AIs can do things much quicker/more efficiently than humans: solve problems faster than we can.

It doesn’t feel like the AI can solve problems that humans will never solve, period[2], in the way that humans can solve many problems that chimpanzees will never solve, period[3] (most of mathematics, physics, computer science, etc.).

It feels to me that the human brain — though I’m using human civilisation here as opposed to any individual human — is still roughly as “powerful” as this vastly superior engine of cognition. We can solve the exact same problems as superintelligences; they can just do it more effectively/​efficiently.

I think the last line above is the main sticking point. Human brains are capable of solving problems that chimpanzee society will never solve (unless they evolve into a smarter species). I am not actually convinced that this much smarter AI can solve problems that humans will never solve.

Universality?

One reason the human brain might be equivalent in power to a superintelligence is that the human brain is “universal” in some sense (note that it would have to be a sense in which chimpanzee brains are not universal). If the human brain were capable of solving all “real world” problems, then of course there wouldn’t be any other engines of cognition that were strictly more powerful.

I am not able to provide a rigorous definition of the sense of “universality” I mean here — but to roughly gesture in the direction of the concept I have in mind — it’s something like “can eventually learn any natural “real world”[4] problem set/​domain that another agent can learn”.

Caveat

I think there’s an argument that if there are (real world) problems that human civilisation can never solve[5] no matter what, then we wouldn’t be able to conceive/imagine them. I find that line of reasoning somewhat silly, and I’m distrustful/sceptical of it.

We have universal languages (our natural languages also seem universal), so a description of such problems should be presentable in those languages. Perhaps the problem description would be too large to fit in working memory, but even then it could still be stored electronically.

But more generally, I do not think that “I can coherently describe a problem” implies “I can solve the problem”. There are many problems that I can describe but not solve[6], and I don’t expect this to be fundamentally different for human civilisation as a whole. If there are problems we cannot solve, I would still expect us to be able to describe them. I welcome suggestions for problems that you think human civilisation can never solve, but that is not my primary inquiry here.

  1. ^

    To be clear, I do not actually expect that the raw speed difference between CPU clock cycles and neuronal firing rates will straightforwardly translate to a speed-of-thought difference between human and artificial cognition (I expect a great many operations may be involved in a single thought, and I suspect intelligence won’t just be that easy), but the sheer nine-plus orders-of-magnitude difference does deserve consideration.

    Furthermore, it needs to be stressed that the 0.1–2 Hz figure is a baseline/average rate. Our maximum rate during periods of intense cognitive effort could well be significantly higher (this may be thought of as “overclocking”).
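For concreteness, here is the back-of-envelope arithmetic behind that figure (my own calculation, using only the numbers already quoted above):

```python
# Back-of-envelope: how many orders of magnitude separate a 5 GHz clock from
# the 0.1-2 Hz baseline firing-rate range quoted above.
import math

cpu_hz = 5e9                     # 5 GHz
neuron_hz = (0.1, 2.0)           # baseline firing-rate range

for f in neuron_hz:
    print(f, "Hz ->", round(math.log10(cpu_hz / f), 1), "orders of magnitude")
# 0.1 Hz -> 10.7 orders of magnitude
# 2.0 Hz -> 9.4 orders of magnitude
```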

  2. ^

    To be clear, when I say “humans will never solve”, I am imagining human civilisation, not an individual human scientist. There are some problems that remained unsolved by civilisation for centuries. And while we may accelerate our solutions to hard problems by developing thinking machines, I think we are only accelerating said solutions. I do not think there are problems that civilisation will just never solve if we never develop human-level general AI.

  3. ^

    Assuming that the intelligence of chimpanzees is roughly held constant or only drifts within a narrow range across generations. Chimpanzees evolving to considerably higher levels of intelligence would not still be “chimpanzees” for the purpose of my questions.

  4. ^

    Though it may be better to replace “real world” with “useful”. There may be some practical tasks that some organisms engage in, which the human brain cannot effectively “learn”. But those tasks aren’t useful for us to learn, so I don’t feel they would be necessary for the notion of universality I’m trying to gesture at.

  5. ^

    In case it was not clear: for the purposes of this question, the problems that “human civilisation can solve” refer to those problems that human civilisation can solve within the lifetime of the universe without developing human-level general AI.

  6. ^