Tim fetched some size data below, but you also need to compare cortical surface area, and the most accurate comparison should use neuron and synapse counts in the cortex. The human brain had a much stronger size constraint, due to our smaller body size, that would tend to make neurons smaller (to the extent possible) and shrink-optimize everything.
The larger a brain, the more time it takes to coordinate circuit trips around the brain. Humans (and I presume other mammals) can make some decisions in as little as 100-200 ms, which is just a dozen or so neuron firings. That severely limits the circuit path length. Neuron signals do not move anywhere near the speed of light.
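The "dozen or so firings" figure follows from a quick back-of-envelope calculation. A minimal sketch, using commonly cited ballpark values (the per-step time is an assumption, not a measurement):

```python
# Serial depth available to a biological brain for a fast decision.
# Assumed ballpark figures:
#   - decision time: 100-200 ms (midpoint used below)
#   - per-neuron step (integration + synaptic delay): ~10 ms
decision_time_ms = 150
step_time_ms = 10

serial_steps = decision_time_ms / step_time_ms
print(f"Serial depth available: ~{serial_steps:.0f} neuron firings")

# For contrast: axonal conduction runs at roughly 1-100 m/s,
# many orders of magnitude below the speed of light (3e8 m/s),
# so each step also covers only centimeters of tissue.
```

Whatever exact per-step time one assumes, the budget comes out at a serial depth of roughly ten to twenty steps, which is the constraint on circuit path length being described.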
Wikipedia has a page comparing brain neuron counts
It estimates whales and elephants at 200 billion neurons and humans at around 100 billion. There is a large range of variability in human brain sizes, and the upper end of the human scale may be as high as 200 billion.
This page has some random facts:
Of interest: Average number of neurons in the brain (human) = 100 billion; cerebral cortex = 10 billion
Total surface area of the cerebral cortex (human) = 2,500 cm2 (2.5 ft2; A. Peters and E.G. Jones, Cerebral Cortex, 1984)
Total surface area of the cerebral cortex (cat) = 83 cm2
Total surface area of the cerebral cortex (African elephant) = 6,300 cm2
Total surface area of the cerebral cortex (Bottlenosed dolphin) = 3,745 cm2 (S.H. Ridgway, The Cetacean Central Nervous System, p. 221)
Total surface area of the cerebral cortex (pilot whale) = 5,800 cm2
Total surface area of the cerebral cortex (false killer whale) = 7,400 cm2
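To make the comparison easier, the quoted surface areas can be put side by side with ratios computed against the human figure (numbers copied directly from the list above):

```python
# Cortical surface areas quoted above, in cm^2.
cortex_area_cm2 = {
    "human": 2500,
    "cat": 83,
    "African elephant": 6300,
    "bottlenose dolphin": 3745,
    "pilot whale": 5800,
    "false killer whale": 7400,
}

# Print each species relative to the human cortex, largest first.
for species, area in sorted(cortex_area_cm2.items(), key=lambda kv: -kv[1]):
    ratio = area / cortex_area_cm2["human"]
    print(f"{species:20s} {area:6d} cm^2  ({ratio:.2f}x human)")
```

So even the largest cortex quoted here (false killer whale) is only about 3x the human surface area, nowhere near the body-size ratio between the species.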
In whale brain at least, it appears the larger size is more related to extra glial cells and other factors:
http://www.scientificamerican.com/blog/60-second-science/post.cfm?id=are-whales-smarter-than-we-are
Also keep in mind that the core cortical circuit that seems to do all the magic was invented in rats or their precursors and has been preserved in all these lineages with only minor variations.
Thx. But I still don’t see why you said “asymptotic limit” and “grew … then petered out”. There is no reason why H. sap. could not grow to the size of a gorilla over the next few million years, nor any reason why the bottlenose dolphin could not grow to the size of an orca. With corresponding brain size increases in both cases. I don’t see that our brain size growth has petered out.
The fact that mammal brains reached similar upper neuron counts (100-200 billion neurons) in three separate, unrelated lineages with widely varying body sizes is, to me, a strong hint of an asymptotic limit.
Also, Neanderthals had significantly larger brains and perhaps twice as many neurons (just a guess based on size), and yet they were out-competed by smaller-brained Homo sapiens.
The bottlenose could grow to the size of the orca, but it's not clear at all that its brain would grow beyond a few hundred billion neurons.
The biggest whale brains are several times heavier than human or elephant brains, but the extra mass is glial cells, not neurons.
And if you look at how the brain actually works, a size limit makes perfect sense due to wiring constraints and signal propagation delays mentioned earlier.
Surely there is every reason to think that machine intelligence, nanotechnology, and the engineered future will mean that humans will be history.
Surely there is every reason to expect technological civilization to collapse before any of those things come to fruition.
Projections of the future always disappoint and always surprise. During my childhood in the 1950s, I fully expected to see rocket belts and interplanetary travel within my lifetime. I didn’t even imagine personal computers and laser surgery as an alternative to eyeglasses.
Fifty years before that, they imagined that folks today would have light-weight muscle-powered aircraft in their garages. Jules Verne predicted atomic submarines and time machines.
So, based on how miserable our faculties of prediction really are, the reasonable thing to do would be to assign finite probabilities to both cyborg humans and gorilla-sized humans. The future could go either way.
Hmm. We came pretty close with nuclear weapons and two super-powers, and yet we are still here. The dangerous toys are going to get even more dangerous this century, but I don’t see the rationale for assigning > 50% to Doom.
In regard to your expectations: you are still alive, we do have jetpacks today, we have traveled (robotically) to numerous planets, we do have muscle-powered gliders at least, and atomic submarines.
The only mistaken predictions were that humans were useful to send to other planets (they are not), and that time travel is tractable.
And ultimately just because some people make inaccurate predictions does not somehow invalidate prediction itself.
Well, of course I didn’t mean to suggest p=0. I don’t think the collapse of technological civilization is very likely, though—and would assign permanent setbacks a < 1% chance of happening.
My pet theory on this is that glial cells are known to stimulate synapse growth and to support synapse function (e.g., by cleaning up after firing), and so the enormous quantity of glial cells in whale brains (nine times as many in the sperm whale as in the human), together with their huge neurons, points to an astronomical number of synapses.
“Glia Cells Help Neurons Build Synapses”
http://www.scientificamerican.com/article.cfm?id=glia-cells-help-neurons-b
Evidence from actual synapse counts in dolphin brains bears on this issue too.
There are problems like this that arise with large synchronous systems which lack reliable clocks—but one of the good things about machine intelligences of significant size will be that reliable clocks will be available—and they probably won’t require global synchrony to operate in the first place.
I do remember reading that the brain does appear to have some highly regular pulse-like synchronizations in the PFC circuit, at around 33 Hz and 3 Hz if I remember correctly.
But that is really beside the point entirely.
The larger a system, the longer it takes information to move across the system. A planet-wide intelligence would not be able to think as fast as a small laptop-sized intelligence; this is just a fact of the speed of light.
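The speed-of-light point can be made concrete with a two-line calculation (Earth's diameter and a 30 cm "laptop" are the only inputs; both distances are best-case straight-line paths in vacuum):

```python
# Minimum one-way signal latency imposed by the speed of light.
C = 299_792_458            # speed of light in vacuum, m/s
EARTH_DIAMETER_M = 12_742_000
LAPTOP_M = 0.3             # assumed size of a laptop-scale system

planet_latency_ms = EARTH_DIAMETER_M / C * 1e3
laptop_latency_ns = LAPTOP_M / C * 1e9

print(f"Across Earth:  {planet_latency_ms:.1f} ms (one way, best case)")
print(f"Across laptop: {laptop_latency_ns:.2f} ns")
```

Roughly 40 ms versus 1 ns: a planet-wide mind pays a seven-orders-of-magnitude penalty on every cross-system round trip before any switching or routing overhead is counted.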
And it's actually much, much worse than that when you factor in bandwidth considerations.
I don’t really know what you mean.
You think it couldn't sort things as fast? Search through a specified data set as quickly? Factor numbers as fast? If you think any of those things, I think you need to explain further. If you agree that such tasks need not take a hit, which tasks are we talking about?
Actually, many of the examples you list would have huge problems scaling to a planet-wide intelligence, but that's a side issue.
The space in algorithm land in which practical universal intelligence lies requires high connectivity. It is not unique in this; many algorithms require it. Ultimately this can probably be derived from the 3D structure of the universe itself.
Going from on-chip CPU cache access to off-chip memory access to disk access to remote internet access is a series of massive exponential drops in bandwidth, and related increases in latency, which severely limit the scalability of all big, interesting distributed algorithms.
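The point about exponential drops across the hierarchy can be sketched with order-of-magnitude figures. These are assumed round numbers for illustration, not benchmarks of any particular machine or network:

```python
# Illustrative access latencies for the storage hierarchy described
# above (assumed round figures; real values vary widely by hardware).
latency_ns = {
    "L1 cache (on-chip)": 1,
    "main memory (off-chip)": 100,
    "disk (SSD read)": 100_000,
    "remote internet round trip": 100_000_000,
}

base = latency_ns["L1 cache (on-chip)"]
for tier, ns in latency_ns.items():
    print(f"{tier:28s} ~{ns:>12,} ns  ({ns // base:,}x on-chip)")
```

Each tier costs roughly two to three orders of magnitude more than the last, which is why an algorithm that assumes uniform-cost access falls apart when spread across a planet.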