In general, efficiency at the level of logic gates doesn’t translate into efficiency at the CPU level.
For example, imagine you’re tasked to correctly identify the faces of your classmates from 1 billion photos of random human faces. If you fail to identify a face, you must re-do the job.
Your neurons are highly efficient, and you have highly optimized face-recognition circuitry.
Yet you’ll consume more energy on the task than, say, an Apple M1 CPU:
you’ll waste at least 30% of your time on sleep
your highly optimized face-recognition circuitry is still rather inefficient
you’ll make mistakes, forcing you to re-do the job
you can’t hold your attention long enough to complete such a task, even if your life depends on it
Even if the human brain is efficient at the level of neural circuits, it is unlikely to be the most efficient vessel for a general intelligence.
In general, high-level biological designs are a crappy mess, mostly made of kludgy bugfixes to previous dirty hacks, which were made to fix other kludgy bugfixes (an example).
And the newer the design, the crappier it is. For example, compare:
the almost perfect DNA replication (optimized for ~10^9 years)
the faulty and biased human brain (optimized for ~10^5 years)
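A quick back-of-envelope sketch of what “almost perfect” means here (the genome size and post-repair error rate below are commonly cited textbook estimates, not figures from this thread):

```python
# Back-of-envelope check on DNA replication fidelity.
# Assumed figures (commonly cited estimates, not from the original text):
genome_size = 3e9    # base pairs in the human genome
error_rate = 1e-10   # errors per base after proofreading + mismatch repair

# Expected new errors introduced by one full genome copy
errors_per_replication = genome_size * error_rate  # ~0.3 errors per copy
fidelity = 1 - error_rate                          # per-base accuracy
```

Under these assumptions, copying three billion base pairs introduces well under one error on average, which is the sense in which replication is “almost perfect.”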
With the exception of a few molecular-level designs, I expect that human engineers can produce much more efficient solutions than natural selection, in some cases by orders of magnitude.
Human technology is rarely more efficient than biology along the quantitative dimensions that matter to biology. But human technology is not limited to building out of evolved wetware nanobots; it can instead employ high-energy manufacturing to create ultra-durable materials that in turn enable very high energy density solutions. Our flying machines may not compete with birds in energy efficiency, but they harness power densities of a completely different scale than anything available to biology. The same basically applies to computers vs brains: AGI will outcompete human brains through brute scale, speed, and power rather than energy efficiency.
The human brain is just a scaled-up primate brain, which is just a tweaked, more scalable mammal brain; and mammal brains share the same general architecture, which is closer to ~10^8 years old. It is hardly ‘faulty and biased’: the bias is in the mind of the beholder.
A lot of the advantage of human technology is due to human technology figuring out how to use covalent bonds and metallic bonds, where biology sticks to ionic bonds and proteins held together by van der Waals forces (static cling, basically). This doesn’t fit into your paradigm; it’s just biology mucking around in a part of the design space easily accessible to mutation error, while humans work in a much more powerful design space because they can move around using abstract cognition.
Covalent/metallic vs ionic bonds implements the high-energy-density vs wetware-constrained distinction I was referring to, so we are mostly in agreement; that is my paradigm. But the evidence is pretty clear that “ionic bond and protein” tech does approach the Landauer limit—at least for protein computation. As for the brain, end-of-Moore’s-Law high-end chip research is very much neuromorphic (memristor crossbars, etc.), and some designs do claim roughly 10x greater synop/J than the brain, but they aren’t built yet. So if your claim had carried wider uncertainty, with most of the mass in the region of the brain being 1 to 3 OOMs from the limit, I probably wouldn’t have commented; as it stands, that one claim distracted from your larger valid points.
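As a rough sanity check on where the brain sits relative to the Landauer limit, here is a naive back-of-envelope. All figures are commonly cited ballpark estimates, not numbers from this thread:

```python
import math

# Naive comparison: brain energy per synaptic op vs the Landauer limit.
# All figures below are rough, commonly cited estimates (assumptions).
k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 310.0                           # body temperature, K
landauer_J = k_B * T * math.log(2)  # min energy to erase one bit, ~3e-21 J

brain_power_W = 20.0   # total brain power draw
synapses = 1e14        # synapse count estimate
avg_rate_Hz = 1.0      # average firing rate estimate (0.1-10 Hz range)

synops_per_s = synapses * avg_rate_Hz
J_per_synop = brain_power_W / synops_per_s  # ~2e-13 J per synaptic op

# Orders of magnitude above the per-bit Landauer limit
ooms_from_limit = math.log10(J_per_synop / landauer_J)
```

Under the contested assumption that one synaptic op corresponds to erasing roughly one bit, this naive gap comes out much larger than 1 to 3 OOMs; treating each analog synaptic op as equivalent to many low-precision bit operations is what shrinks the gap toward the region discussed above.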