Compartmentalization is, in part, an architectural necessity—making sure beliefs are all consistent with each other is an intractable computational problem (I recall reading somewhere that the entire computational capacity of the universe is only sufficient to determine the consistency of, at most, 138 propositions).
Working with OWL ontologies and other semantic web technologies eventually makes this very clear. Deductive reasoning is not scalable. But there are probably different levels/types of consistency that could be handled by brains like ours. A simple example: heuristics that tend to bring to one's attention the most difficult-to-reconcile beliefs.
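To make the intractability concrete, here is a minimal sketch (the representation of "beliefs" as Boolean predicates is my own illustrative assumption, not anything from OWL): brute-force consistency checking must, in the worst case, enumerate all 2^n truth assignments over n atomic propositions, which is why checking even ~138 propositions this way exceeds any physically available compute.

```python
from itertools import product

def consistent(beliefs, n_props):
    """Return True if some truth assignment satisfies every belief.

    Each belief is a predicate over a tuple of n_props booleans.
    Cost: O(2**n_props * len(beliefs)) -- exponential in propositions.
    """
    for assignment in product([False, True], repeat=n_props):
        if all(b(assignment) for b in beliefs):
            return True
    return False

# Three beliefs that cannot all hold:
# p0 implies p1, p1 implies p2, yet p0 holds and p2 does not.
beliefs = [
    lambda a: (not a[0]) or a[1],   # p0 -> p1
    lambda a: (not a[1]) or a[2],   # p1 -> p2
    lambda a: a[0] and not a[2],    # p0 and not p2 (contradicts the above)
]
print(consistent(beliefs, 3))  # -> False
```

A heuristic of the kind suggested above would amount to not running this check globally, but only surfacing small belief subsets that look likely to clash.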
In the worst-case scenario, with very pathological propositions.
Even though the various important satisfiability problems are known to be NP-complete, there are known algorithms for those problems that run in polynomial time on almost all "interesting" inputs.
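One reason typical instances are easy is unit propagation, the core inference step in DPLL/CDCL SAT solvers: whenever a clause is reduced to a single literal, that literal's value is forced, often cascading through most of the formula with no search at all. A minimal sketch (the clause encoding here is the standard DIMACS-style convention, chosen for illustration):

```python
def unit_propagate(clauses):
    """Repeatedly assign forced literals from unit clauses.

    Clauses are lists of nonzero ints: positive literal = variable true,
    negative = variable false. Returns (assignment, remaining clauses),
    or (None, None) if a contradiction (empty clause) is derived.
    """
    assignment = {}
    clauses = [list(c) for c in clauses]
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            return assignment, clauses
        assignment[abs(unit)] = unit > 0
        new_clauses = []
        for c in clauses:
            if unit in c:
                continue                       # clause satisfied; drop it
            reduced = [l for l in c if l != -unit]
            if not reduced:
                return None, None              # empty clause: contradiction
            new_clauses.append(reduced)
        clauses = new_clauses

# (p) and (p -> q) and (q -> r): propagation alone fixes p, q, r = True.
print(unit_propagate([[1], [-1, 2], [-2, 3]]))
```

On structured formulas like this one, propagation settles everything in linear time; exponential behavior only appears when the solver is forced to branch, which pathological inputs (per the comment above) are constructed to maximize.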