What about facts from the environment: is it good to gloss over the applicability of something you observed in one context to another context? Compartmentalization may look like a good idea when you are spending over a decade instilling an effective belief system into children. It doesn't look so great when you have to process data from the environment. We even see correlations where there aren't any.
Information compartmentalization may look great if the ship's crew is going to engage in pointless idle debates over the intercom. Not so much when they need to coordinate actions.
I’m not sure I’m understanding you here.
I agree that if “the crew” (that is, the various parts of my brain) are sufficiently competent, and the communications channels between them sufficiently efficient, then making all available information available to everyone is a valuable thing to do. OTOH, if parts of my brain aren’t competent enough to handle all the available information in a useful way, having those parts discard information rather than process it becomes more reasonable. And if the channels between those parts are sufficiently inefficient, the costs of making information available to everyone (especially if sizable chunks of it are ultimately discarded on receipt) might overcome the benefits.
In other words, glossing over the applicability of something I observed in one context to another context is bad if I could have done something useful by not glossing over it, and not otherwise. Whether that was reliably the case for our evolutionary predecessors in their environment, I don't know.
Well, one can conjecture counterproductive effects of intelligence in general, and of any of its aspects in particular, and surely there were a few, but the fact stands that we did evolve intelligence. Keep in mind that without a highly developed faculty of verbal 'reasoning' you may not be able to have the ship flooded with abstract nonsense in the first place. The stuff you feel tracks the probabilities.
Can you clarify the relationship between my comment and counterproductive effects of intelligence in general? I’m either not quite following your reasoning, or wasn’t quite clear about mine.
A general-purpose intelligence will, all things being equal, get better results with more data.
But we evolved our cognitive architecture not in the context of a general-purpose intelligence, but rather in the context of a set of cognitive modules that operated adaptively on particular sets of data to perform particular functions. Providing those modules with a superset of that data might well have produced counterproductive results, not because intelligence is counterproductive, but because they didn't evolve to handle that superset.
In that kind of environment, sharing all data among all cognitive modules might well have counterproductive effects… again, not because intelligence is counterproductive, but because more data can be counterproductive to an insufficiently general intelligence.
The existence of evolved 'modules' within the frontal cortex is not settled science and is in fact controversial. It is indeed hard to tell how much data we share, though. Maybe without the habit of abstract thought, not much. On the other hand, data about human behaviour seems important.