I think this kind of reverence for the efficacy of the human brain is comical.
The acknowledgement and analysis of the efficacy of the single practical example of general intelligence that we do have does not imply reverence. Efficacy is a relative term. Do we have another example of a universal intelligence to compare to?
Perhaps you mention efficacy in comparison to a hypothetical optimal universal intelligence. We have only AIXI and its variants which are only optimal in terms of maximum intelligence at the limits, but are grossly inferior in terms of practicality and computational efficacy.
There is a route to analyzing the brain’s efficacy: it starts with analyzing it as a computational system and comparing its performance to the best known algorithms.
The problem is that the brain is a circuit with roughly 10^14 to 10^15 circuit elements (about the same amount of storage), and it only cycles at around 100 Hz. That is 10^16 to 10^17 net switches per second.
A current desktop GPU has > 10^9 circuit elements and a speed over 10^9 cycles per second. That is > 10^18 net switches/second.
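The arithmetic here is easy to sanity-check. A quick sketch using the rough figures above (the element counts and clock rates are order-of-magnitude estimates from the discussion, not measurements):

```python
# Back-of-envelope comparison of raw switching throughput,
# using the rough, contested figures quoted in the text.

brain_elements = (1e14, 1e15)   # circuit elements (synapses), order-of-magnitude
brain_hz = 100                  # typical neuron firing-rate ceiling, ~100 Hz

gpu_elements = 1e9              # circuit elements on a desktop GPU of the era
gpu_hz = 1e9                    # clock rate, ~1 GHz

brain_switches = tuple(n * brain_hz for n in brain_elements)
gpu_switches = gpu_elements * gpu_hz

print(f"brain: {brain_switches[0]:.0e} to {brain_switches[1]:.0e} switches/s")
print(f"gpu:   {gpu_switches:.0e} switches/s")
```

So by this crude measure the GPU's raw throughput is an order of magnitude or two above the brain's, which is exactly what makes the capability gap described next so striking.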
And yet we have no algorithm, running even on a supercomputer, that can beat the best humans at Go, let alone read a book, pilot a robotic body at human level, write a novel, come up with a funny joke, patent an idea, or even manage a McDonald’s.
For one particular example, take the game of Go, and compare the brain to potential parallel algorithms that run on a 100 Hz computer, start with zero innate knowledge of Go, and can beat human players simply by learning the game.
Go is one example, but if you go from checkers to chess to Go and keep going in that direction, you get into the large exponential search spaces where the brain’s learning algorithms appear to be especially efficient.
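The growth of those search spaces can be made concrete using commonly cited approximate branching factors and typical game lengths (illustrative order-of-magnitude figures, not exact counts):

```python
import math

# Rough game-tree size b**d for each game, where b is an approximate
# branching factor and d a typical game length in moves.
games = {
    "checkers": (8, 70),
    "chess":    (35, 80),
    "go":       (250, 150),
}

for name, (b, d) in games.items():
    digits = d * math.log10(b)  # number of decimal digits in b**d
    print(f"{name:9s} ~10^{digits:.0f} positions in the game tree")
```

Each step along that axis multiplies the exponent, which is why brute-force search stops being an option long before Go.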
Human technological civilisation exploded roughly speaking in an evolutionary heartbeat from the time it became capable of doing so
Your assumption seems to be that civilization and intelligence are somehow coded in our brains.
According to the best current theory I have found, our brains are basically just upsized ape brains with one extremely important new trick: we became singing apes (a few other species sing), and then got a lucky break when the vocal control circuit for singing connected to a general simulation/thought circuit (the task-negative and task-positive paths), allowing us to associate song patterns with visual and auditory objects.
It’s also important to point out that some songbirds appear to be just on the cusp of this capability, with much smaller brains. It’s not really a size issue.
Technology and all the rest is a result of language, memetics, culture. It’s not some miracle of our brains, which appear to be just large ape brains with perhaps one new critical trick.
Some whale species have much larger brains and in some sense probably have a higher intrinsic genetic IQ. But this doesn’t really matter, because intelligence depends on memetic knowledge.
If Einstein had been a feral child raised by wolves, he would have had the exact same brain but would have been profoundly impaired on our scale of intelligence.
Genetics can limit intelligence, but it doesn’t provide it.
The chance that this capability opened up at just the moment when human intelligence was at even the maximum that DNA-encoded, ape-descended brains could reach is negligible.
In three separate lineages (whales, elephants, and humans), mammalian brains grew to about the same upper capacity (100 to 200 billion neurons) and then petered out. The likely hypothesis is that we are near some asymptotic limit in neural-net brain space, a sweet spot: increasing size further would have too many negative drawbacks, such as the speed hit due to the slow maximum signal-transmission speed.
I’d bet it’s perhaps 5 years away. But that only illustrates my point: by some measures computers are already more powerful than the brain, which makes the brain’s wiring all the more impressive.
Come back when you have an algorithm that runs on a 100 Hz computer, has zero starting knowledge of Go, and can beat human players simply by learning the game.
I think this kind of reverence for the efficacy of the human brain is comical
Which is equivalent to saying “I think this kind of reverence for the efficacy of Google is comical”, and saying or implying you can obviously do better.
So yes, when there is a clear reigning champion, to say or imply it is ‘inefficient’ is nonsensical, and making that claim stick requires something of substance, not just congratulatory back-patting and cryptic references to unrelated posts.
Uh, wedrifid wasn’t saying that he could do better—just that it is possible to do much better. That is about as true for Google as it is for the human brain.
It is only possible to do better than the brain’s learning algorithm in proportion to the distance between that algorithm and the optimally efficient learning algorithm in computational-complexity space. There are mounting, convergent, independent lines of evidence suggesting (but not yet proving) that the brain’s learning algorithm is in the optimal complexity class, and thus that further improvements will just be small constant-factor improvements.
At that point we also have to consider that, at the circuit level, the brain is highly optimized for its particular algorithm (direct analog computation, for one).
This just sounds like nonsense to me. We have lots of evidence of how sub-optimal and screwed-up the brain is—what a terrible kluge it is. It is dreadful at learning. It needs to be told everything three times. It can’t even remember simple things like names and telephone numbers properly. It takes decades before it can solve simple physics problems—despite mountains of sense data, plus the education system. It is simply awful.
A simple computer database has perfect memorization but zero learning ability. Learning is not the memorization of details, but rather the memory of complex abstract structural patterns.
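The distinction can be sketched in a few lines. A hypothetical toy example (the hidden rule y = 2x and both "systems" are mine, purely for illustration): a lookup table recalls seen pairs perfectly but fails on anything new, while even a trivial learner that extracts the underlying pattern generalizes.

```python
# Training pairs generated by the hidden rule y = 2*x.
train = {1: 2, 2: 4, 3: 6}

def database_recall(x):
    # Perfect memorization: exact on seen keys, useless otherwise.
    return train.get(x)

def pattern_learner(x):
    # "Learn" the slope via least squares through the origin,
    # then apply the abstracted pattern to any input.
    slope = sum(k * v for k, v in train.items()) / sum(k * k for k in train)
    return slope * x

print(database_recall(2))   # 4  (memorized)
print(database_recall(10))  # None (never seen it)
print(pattern_learner(10))  # 20.0 (generalizes the pattern)
```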
I also find it extremely difficult to take your telephone-number example seriously when we have the oral tradition of the Torah as evidence of vastly higher memory capacity.
But that’s a side issue. We also have the example of savant memory. Evolution has genetic tweaks that can vastly increase our storage potential for accurate memory, but they clearly come at a cost of lowered effective IQ.
It’s not that evolution couldn’t easily increase our memory; it’s that accurate memory for details is simply of minor importance compared to pattern abstraction and IQ.
That something is not efficient doesn’t mean that there is currently something more efficient. And you demand precisely the particular proof that we all know doesn’t exist, which is rude and pointless whatever the case.
Of course not, but if you read through the related points, there are several parallel lines of evidence suggesting efficiency, and even near-optimality, of some of the brain’s algorithms, and that is what I spent most of the post discussing.
But yes, my tone was somewhat rude with the rhetorical demand for proof; I should have been more polite. But the demand for proof was not the substance of my argument.
Systematic elimination of obvious technical errors renders arguments much healthier, in particular because it allows to diagnose hypocritical arguments not grounded in actual knowledge (even if the conclusion is—it’s possible to rationalize correct statements as easily as incorrect ones).
(English usage: “allows” doesn’t take an infinitive, but a description of the action that is allowed, or the person that is allowed, or phrase combining both. The description of the action is generally a noun, usually a gerund. e.g. ”… in particular because it allows diagnosing hypocritical arguments …”)
You are “allowed to diagnose” and I may “allow you to diagnose” but I would “allow diagnosis” in general, rather than “allow to diagnose”. It is an odd language we have.
In 3 separate lineages—whales, elephants, and humans, the mammalian brain all grew to about the same upper size and then petered out. The likely hypothesis is that we are near some asymptotic limit in neural-net brain space. Increasing size further would have too much of a speed hit.
Could you expand on this, and provide a link, if you have one?
Tim fetched some size data below, but you also need to compare cortical surface area—and the most accurate comparison should use neuron and synapse counts in the cortex. The human brain had a much stronger size constraint that would tend to make neurons smaller (to the extent possible), and shrink-optimize everything—due to our smaller body size.
The larger a brain, the more time it takes to coordinate circuit trips around it. Humans (and I presume other mammals) can make some decisions in roughly 100-200 ms, which is just a dozen or so serial neuron firings. That severely limits the circuit path length. Neuron signals do not move anywhere near the speed of light.
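A rough latency budget makes the constraint concrete. The hop time, window, axon speed, and brain span below are assumed round figures consistent with the ~100 Hz and 100-200 ms estimates above:

```python
# How many serial neuron-to-neuron hops fit in a fast decision?
# Assume ~10 ms per spike-plus-synapse hop (a ~100 Hz cycle).
hop_ms = 10
for window_ms in (100, 200):
    print(f"{window_ms} ms window -> ~{window_ms // hop_ms} serial firings")

# Conduction speed vs distance: even a fast myelinated axon (~100 m/s)
# is millions of times slower than light.
axon_speed = 100.0        # m/s, rough upper bound for myelinated axons
brain_span = 0.15         # m, rough human brain diameter
crossing_ms = brain_span / axon_speed * 1000
print(f"one brain crossing: {crossing_ms:.1f} ms")
```

Ten to twenty serial firings per decision is why a much larger brain, with longer wires and more hops, would pay a direct speed penalty.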
It estimates whales and elephants at 200 billion neurons and humans at around 100 billion. There is a large range of variability in human brain sizes, and the upper end of the human scale may approach 200 billion.
Also keep in mind that the core cortical circuit that seems to do all the magic was invented in rats or their precursors and has been preserved in all these lineages with only minor variations.
Thanks. But I still don’t see why you said “asymptotic limit” and “grew … then petered out”. There is no reason why H. sap. could not grow to the size of a gorilla over the next few million years, nor any reason why the bottlenose dolphin could not grow to the size of an orca, with corresponding brain-size increases in both cases. I don’t see that our brain-size growth has petered out.
The fact that mammal brains reached similar upper neuron counts (100-200 billion neurons) in three separate, unrelated lineages with widely varying body sizes is, to me, a strong hint of an asymptotic limit.
Also, Neanderthals had significantly larger brains and perhaps twice as many neurons (just a guess based on size), and yet they were out-competed by smaller-brained Homo sapiens.
The bottlenose could grow to the size of the orca, but it’s not at all clear that its brain would grow beyond a few hundred billion neurons.
The biggest whale brains are several times heavier than human or elephant brains, but the extra mass is glial cells, not neurons.
And if you look at how the brain actually works, a size limit makes perfect sense due to wiring constraints and signal propagation delays mentioned earlier.
Surely there is every reason to expect technological civilization to collapse before any of those things come to fruition.
Projections of the future always disappoint and always surprise. During my childhood in the 1950s, I fully expected to see rocket belts and interplanetary travel within my lifetime. I didn’t even imagine personal computers and laser surgery as an alternative to eyeglasses.
Fifty years before that, they imagined that folks today would have light-weight muscle-powered aircraft in their garages. Jules Verne predicted atomic submarines and time machines.
So, based on how miserable our faculties of prediction really are, the reasonable thing to do would be to assign finite probabilities to both cyborg humans and gorilla-sized humans. The future could go either way.
Surely there is every reason to expect technological civilization to collapse before any of those things come to fruition.
Hmm. We came pretty close with nuclear weapons and two super-powers, and yet we are still here. The dangerous toys are going to get even more dangerous this century, but I don’t see the rationale for assigning > 50% to Doom.
As regards your expectations: you are still alive, and we do have jetpacks today, our probes have traveled to numerous planets, we have muscle-powered gliders at least, and atomic submarines.
The only mistaken predictions were that humans were useful to send to other planets (they are not), and that time travel is tractable.
And ultimately just because some people make inaccurate predictions does not somehow invalidate prediction itself.
Well, of course I didn’t mean to suggest p=0. I don’t think the collapse of technological civilization is very likely, though—and would assign permanent setbacks a < 1% chance of happening.
In whale brains at least, it appears the larger size is more related to extra glial cells and other factors:
My pet theory on this is that glial cells are known to stimulate synapse growth and to support synapse function (e.g. by cleaning up after firing), and so the enormous quantity of glial cells in whale brains (nine times as many glial cells in the sperm whale as in the human) and their huge neurons both point to an astronomical number of synapses.
The larger a brain, the more time it takes to coordinate circuit trips around the brain.
There are problems like this that arise with large synchronous systems which lack reliable clocks—but one of the good things about machine intelligences of significant size will be that reliable clocks will be available—and they probably won’t require global synchrony to operate in the first place.
I do remember reading that the brain does appear to have some highly regular, pulse-like synchronizations in the PFC circuit, at around 33 Hz and 3 Hz if I remember correctly.
But that is really beside the point entirely.
The larger a system, the longer it takes information to move across it. A planet-wide intelligence would not be able to think as fast as a small, laptop-sized intelligence; this is just a consequence of the speed of light.
And it’s actually much, much worse than that when you factor in bandwidth considerations.
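The light-speed floor is easy to quantify. A sketch with assumed rough sizes for the two thinkers (a laptop span of ~0.3 m and Earth's diameter for the planet):

```python
# Light-speed lower bound on a single round trip across the thinker.
C = 3.0e8                                  # m/s, speed of light
sizes_m = {"laptop": 0.3, "planet": 1.3e7} # rough assumed spans

rtt_ms = {name: 2 * span / C * 1e3 for name, span in sizes_m.items()}
for name, ms in rtt_ms.items():
    print(f"{name}: round trip >= {ms:.6f} ms")
```

Even with perfect hardware, a planet-sized mind pays tens of milliseconds per internal round trip, while a laptop-sized one pays nanoseconds, and that is before any bandwidth limits are counted.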
You think it couldn’t sort things as fast? Search through a specified data set as quickly? Factor numbers as fast? If you think any of those things, I think you need to explain further. If you agree that such tasks need not take a hit, which tasks are we talking about?
Actually, many of the examples you list would have huge problems scaling to a planet wide intelligence, but that’s a side issue.
The space in algorithm land in which practical universal intelligence lies requires high connectivity. It is not unique in this—many algorithms require this. Ultimately it can probably be derived back from the 3D structure of the universe itself.
Going from on-chip CPU access, to off-chip memory access, to disk access, to remote internet access is a series of massive exponential drops in bandwidth and corresponding increases in latency, which severely limit the scalability of all big, interesting distributed algorithms.
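That ladder can be sketched with the commonly quoted order-of-magnitude latency figures (assumed approximate values; real numbers vary widely by hardware and network):

```python
# Approximate access latencies down the storage hierarchy,
# in nanoseconds. Order-of-magnitude figures only.
latency_ns = {
    "L1 cache":            1,
    "main memory (RAM)":   100,
    "SSD read":            100_000,       # ~0.1 ms
    "HDD seek":            10_000_000,    # ~10 ms
    "internet round trip": 100_000_000,   # ~100 ms, cross-country
}

base = latency_ns["L1 cache"]
for tier, ns in latency_ns.items():
    print(f"{tier:20s} ~{ns:>12,} ns  ({ns // base:,}x L1)")
```

Eight orders of magnitude separate a cache hit from a network round trip, which is the quantitative core of the scalability argument above.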
EDIT: Improved politeness.
That seems optimistic to me. A few recent computer strength graphs:
http://www.gokgs.com/graphPage.jsp?user=Zen19
http://www.gokgs.com/graphPage.jsp?user=HcBot
http://www.gokgs.com/graphPage.jsp?user=Manyfaces1
http://www.gokgs.com/graphPage.jsp?user=Zen
http://www.gokgs.com/graphPage.jsp?user=CzechBot
http://www.gokgs.com/graphPage.jsp?user=AyaMC
Demand what? A proof that the brain runs at ~100 Hz? This is well known; see Wikipedia on neurons.
Vladimir_Nesov is referring to this article.
I see. Unrelated argument from erroneous authority.
Wikipedia has a page comparing brain neuron counts
This page has some random facts of interest:
Average number of neurons in the brain (human) = 100 billion; cerebral cortex = 10 billion
Total surface area of the cerebral cortex (human) = 2,500 cm^2 (2.5 ft^2; A. Peters and E.G. Jones, Cerebral Cortex, 1984)
Total surface area of the cerebral cortex (cat) = 83 cm^2
Total surface area of the cerebral cortex (African elephant) = 6,300 cm^2
Total surface area of the cerebral cortex (bottlenose dolphin) = 3,745 cm^2 (S.H. Ridgway, The Cetacean Central Nervous System, p. 221)
Total surface area of the cerebral cortex (pilot whale) = 5,800 cm^2
Total surface area of the cerebral cortex (false killer whale) = 7,400 cm^2
http://www.scientificamerican.com/blog/60-second-science/post.cfm?id=are-whales-smarter-than-we-are
“Glia Cells Help Neurons Build Synapses”
http://www.scientificamerican.com/article.cfm?id=glia-cells-help-neurons-b
Evidence from actual synapse counts in dolphin brains bears on this issue too.
Sperm whale brain: about 8 kg
Elephant brain: about 5 kg
Human brain: about 1.4 kg
Brain size across all animals is pretty variable.