This is a potentially important question for AI timelines. How much processing power should we expect to need to replicate human intelligence? “Approximately a brain’s worth” is the default answer, but if this post is correct, the answer should be a lot less (particularly for text-based AI like the GPTs).
Something seems a bit off about this. Are blind people more intelligent?
From a quick Google search, it looks like blind people have many enhanced abilities (increased working memory, better frequency discrimination), but somehow this does not translate to increased IQ? More research needed.
But I’d guess the effect size should be huge if intelligence is really a simple function of total data inputs, right?
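To make that “effect size should be huge” intuition concrete, here is a toy calculation. The bandwidth figures are rough order-of-magnitude assumptions (the ~10 Mbit/s retina figure is commonly cited; the rest are guesses for illustration), not measured values:

```python
# Toy model: if intelligence scaled with total sensory input, losing vision
# should remove most of the input and thus produce a huge effect.
# All bandwidths are assumed order-of-magnitude figures, in bits/s.
BANDWIDTH_BITS_PER_SEC = {
    "vision": 1e7,   # retinal output, commonly cited order of magnitude
    "touch": 1e6,    # assumed
    "hearing": 1e5,  # assumed
    "other": 1e5,    # smell, taste, proprioception combined (assumed)
}

total = sum(BANDWIDTH_BITS_PER_SEC.values())
without_vision = total - BANDWIDTH_BITS_PER_SEC["vision"]

print(f"Fraction of sensory input lost with blindness: {1 - without_vision / total:.0%}")
```

Under these assumptions blindness removes the large majority of total input, so a simple “intelligence ∝ data input” model predicts an effect far too big to miss in IQ studies.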
OK, I guess a confounding factor is that the brain might not be plastic enough for “blind humans” to be equivalent to “hominids who evolved a similar encephalization quotient but without the sense of sight”, which is what your theory specifically predicts would be quite intelligent.
Another thing that seems a bit off about this is that, in ML, we’ve seen that info from other sensory modalities can be very useful. So taking away one sense-organ and allocating the resources to another shouldn’t necessarily boost “overall intelligence”. But maybe this is like pointing out that a blind person won’t be able to describe a painting, so in that respect their verbal performance will be worse.
Anyway, obviously, if not sense-input, then what??
Huh, I am boggled if there exist people with measurably better working memory that doesn’t translate to higher IQ. I had believed working memory was a key intelligence bottleneck.
I think the disappointing failure of the working-memory/dual-n-back (WM/DNB) training paradigm, roughly 2000–2015, to show any meaningful transfer to fluid intelligence (Gf), while showing transfer to plenty of WM tasks, proved that the high WM/Gf correlation was ultimately not due to WM bottlenecking intelligence.
The nature of human intelligence remains something of a puzzle. The most striking recent paper I’ve seen on what human intelligence is is “Testing the structure of human cognitive ability using evidence obtained from the impact of brain lesions over abilities”, Protzko & Colom 2021. Interpreting brain lesions as surgically precise interventions that (non-fatally) disable specific regions of the brain, there… is no bottleneck anywhere? Yet we can predict intelligence quite well from neuroimaging (up to a ceiling of >90% of variance), and brain volume has causal influences on intelligence, as indicated by latent causal variable (LCV) analyses and the like. So overall I’ve been moving towards a bifactor model with general body integrity as the other (causal but temporally prior) factor, and trying to reconcile it with deep-learning scaling. Highly speculative, but it seems reasonably satisfactory so far...
It’s a good point! I failed to notice my confusion there.
Blind people have enhanced hearing. I would not be surprised if they have better touch and smell too. I have never heard of blind people dominating a field except where there is an obvious hearing component (like echolocation and hacking analog phones).
I think the vision centers of blind people get repurposed into non-vision sensory centers, but that higher-level cognitive ability remains unchanged. We have to be careful when testing the working memory of blind people because available brain matter isn’t the only variable getting modified. Blind people have to memorize more of their environment than sighted people do.
If hominids had evolved without a sense of sight, they would have improved other senses to compensate. To get an intelligence boost from blindness, we would need to keep the encephalization quotient constant while reducing the cortical matter dedicated to processing sensory data.
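For reference, the encephalization quotient is observed brain mass relative to the brain mass expected for an animal of that body mass. A minimal sketch using Jerison’s classic allometric formula (the constant 0.12 and exponent 2/3 are Jerison’s; other values appear in the literature):

```python
# Encephalization quotient (EQ) per Jerison's formula:
# expected brain mass = 0.12 * body_mass**(2/3), with masses in grams.
def encephalization_quotient(brain_g: float, body_g: float) -> float:
    expected_brain_g = 0.12 * body_g ** (2 / 3)
    return brain_g / expected_brain_g

# Approximate human values: ~1350 g brain, ~65 kg body.
human_eq = encephalization_quotient(1350, 65_000)
print(f"Human EQ: {human_eq:.1f}")
```

This recovers the commonly cited human EQ of roughly 7, i.e. a brain about seven times larger than expected for a mammal of our size.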
If this post is correct then the human brain has way more compute than is required for high-level cognition.
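A back-of-envelope version of that claim, with all numbers as loose assumptions: whole-brain compute estimates commonly range from ~1e13 to ~1e17 FLOP/s (e.g., Carlsmith’s 2020 report), and the fraction devoted to sensory processing below is purely illustrative.

```python
# If a large share of cortical compute serves sensory processing, the compute
# needed for high-level cognition alone is much smaller than whole-brain
# estimates. Both numbers below are assumptions, not measurements.
BRAIN_FLOPS = 1e15       # assumed point estimate within the ~1e13-1e17 range
SENSORY_FRACTION = 0.5   # assumed share of compute spent on sensory processing

cognition_flops = BRAIN_FLOPS * (1 - SENSORY_FRACTION)
print(f"Compute for cognition alone: {cognition_flops:.0e} FLOP/s")
```

The point is only the structure of the argument: whatever the true numbers, any sizable sensory fraction implies cognition needs correspondingly less than “a brain’s worth” of compute.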
Yep. All sounds right.
I have an alternative hypothesis, though: we could say “big brains are for big problems”. As you stated, a blind person still has a similar computational problem to solve, namely navigating a complex 3D environment. In some cases, tons of sense data will be very easily processed, due to the simplicity of what’s being looked for in that sense data. (Are there cases of animals with very large retinas but comparatively small brains?)
The sad part of this hypothesis is that it’s difficult to test, as it doesn’t make specific predictions. You’d need to somehow know the computational complexity of surviving in a given environment. (Or, more precisely, the complexity threshold beyond which a bigger brain costs more than it’s worth...)