If comprehensible things grow too large, in a way that cannot be factorized, they become incomprehensible. But at the boundary, increasing the complexity by +1 can mean that a more intelligent (and more experienced) human could still understand it while a less intelligent one could not. So there is no exact line; it just requires more intellect the further you go.
Maybe an average nerd could visualize a 3x3 matrix multiplication, a specialized scientist could visualize 5x5 (I am just making up numbers here), and… a superintelligence could visualize 100x100 or maybe even 1000000x1000000.
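To make the scaling concrete, here is a minimal sketch (in Python, using the made-up sizes from the paragraph above) of how the raw arithmetic in a naive n x n matrix multiplication grows with n:

```python
# Rough illustration: a naive n x n matrix multiplication takes n**3
# multiply-adds. The sizes below are just the essay's made-up examples.
for n in [3, 5, 100, 1_000_000]:
    print(f"{n}x{n}: about {n**3:,} multiply-adds to hold in mind")

# 3x3: about 27 multiply-adds to hold in mind
# 5x5: about 125 multiply-adds to hold in mind
# 100x100: about 1,000,000 multiply-adds to hold in mind
# 1000000x1000000: about 1,000,000,000,000,000,000 multiply-adds to hold in mind
```

The point is only that the cost grows cubically, so the gap between "a nerd can hold it" and "nothing biological can hold it" is crossed very quickly.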
And similarly, a stupid person could make a plan of the form “first this, then this”, a smart person could make a plan with a few alternatives (“...if it rains, we will go to this café; and if it’s closed, we will go to this gallery instead...”), and a superintelligence could make a plan with a vast network of alternatives.
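For a sense of what a “vast network of alternatives” means, here is a toy sketch of a contingency plan as a branching tree. The café/gallery branch mirrors the example above; the “no rain” branch and the counting helper are my own illustrative additions:

```python
# A toy contingency plan as a nested dict; each key is a condition and
# each leaf string is an action.
plan = {
    "it rains": {
        "café is open": "go to the café",
        "café is closed": "go to the gallery instead",
    },
    "it does not rain": "go for a walk",  # hypothetical extra branch
}

def count_outcomes(node):
    """Count leaf actions, i.e. distinct contingencies the plan covers."""
    if isinstance(node, str):
        return 1
    return sum(count_outcomes(child) for child in node.values())

print(count_outcomes(plan))  # 3 -- a human-sized plan

# With b alternatives considered at each of d decision points, a fully
# worked-out plan has on the order of b**d leaves; for example b=10 and
# d=20 already gives 10**20 contingencies, far beyond explicit human planning.
```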
The same goes for biology: a human can maybe understand one simple protein (again, I am just guessing here; the point is that there is some level of complexity a human can handle), and a superintelligence could similarly understand the entire organism.
In each case, there is no clear line between comprehensibility and incomprehensibility; it just becomes intractable once it gets too large.
Yet if we extend the “+1 complexity” argument, we eventually reach a boundary where no human, however smart, could understand it. In principle, nature could produce a human with the specific mutations needed to apprehend it, which would push the human cognitive horizon out by some amount without actually eliminating it.
To the extent that AI can be scaled in ways the human brain cannot, it might form conceptual primitives so far outside the human cognitive horizon that biology is unlikely to produce a human intelligent enough to apprehend them on any reasonable timescale.