AI is not enough

What I write here may be quite simple (and I am certainly not the first to write about it), but I still think it is worth considering:


Say we have an arbitrary problem that we assume has an algorithmic solution, and we search for that solution.


How can the algorithm be determined?
Either:
a) Through another algorithm that exists prior to it.
b) Through something non-algorithmic.


In the case of AI, the only option is a), since there is nothing but algorithms at its disposal. But then we face the problem of determining the algorithm the AI uses to find the solution, then the algorithm to determine that algorithm, and so on...
Obviously, at some point we have to actually find an algorithm to start with, so in any case we eventually need something fundamentally non-algorithmic to determine a solution to a problem that is solvable by an algorithm.
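To make the regress concrete, here is a toy Python sketch (all names, like find_algorithm, are hypothetical and purely illustrative): whatever procedure is supposed to produce the solving algorithm is itself an algorithm, so asking where *it* came from restarts the question one level up.

```python
# Toy illustration of the regress; every name here is hypothetical.
# An "algorithm" is just a function, and a procedure for finding one
# is itself a function -- i.e. itself an algorithm.

def solve(problem):
    """The algorithm we are searching for."""
    ...

def find_algorithm(problem):
    """An algorithm that is supposed to produce `solve`.
    But it is itself an algorithm: where did it come from?"""
    return solve

def find_algorithm_finder(problem):
    """An algorithm that produces `find_algorithm`; the same question again."""
    return find_algorithm

# ...and so on: every level of the tower is still an algorithm, so the
# tower never bottoms out algorithmically. Something outside the tower
# has to supply the first level.
```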


This reveals something fundamental we have to face with regard to AI:

Even assuming that all relevant problems are solvable by an algorithm, AI is not enough. Since there is no way to algorithmically determine the appropriate algorithm for an AI (this would result in an infinite regress), we will always have to rely on some non-algorithmic intelligence to find more intelligent solutions. Even if we found a very powerful seed AI algorithm, there will always be more powerful seed AI algorithms that can’t be determined by any known algorithm, and since we were able to find the first one, we have no reason to suppose we can’t find another, more powerful one. If an AI recursively improves itself 100,000 times until it is 100^^^100 times more powerful, it will still be overtaken once a better seed AI is found, which ultimately can’t be done by an algorithm, so further increases of the most general intelligence always rely on something non-algorithmic.

But even worse: apart from the problem of finding the right algorithm, it seems obvious to me that there are important practical problems that have no algorithmic solution at all (as opposed to theoretical problems like the halting problem, which are still tractable in practice).
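As a reminder of why the halting problem has no algorithmic solution in theory, here is the classic diagonalization argument, sketched in Python (halts is the hypothetical decider; it cannot actually exist):

```python
# Diagonalization sketch: assume a halting decider existed,
# then build a program it must get wrong.

def halts(program, argument) -> bool:
    """Hypothetical: returns True iff program(argument) halts.
    No such function can exist."""
    raise NotImplementedError

def paradox(program):
    # paradox(paradox) halts exactly when halts(paradox, paradox)
    # says it doesn't -- a contradiction either way.
    if halts(program, program):
        while True:
            pass  # loop forever
    return "halted"
```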
In a sense, it seems all algorithms are too complicated to solve the simple (though not necessarily easy) problem of giving rise to further general intelligence.
For example: No algorithm can determine the simple axioms of the natural numbers from anything weaker. We have to postulate them by virtue of simply seeing that they make sense. Thinking that AI could give rise to ever-improving *general* intelligence is like thinking that an algorithm can yield “there is a natural number 0 and every number has a successor that, too, is a natural number”. There is simply no way to derive the axioms from anything that doesn’t already include them. The axioms of the natural numbers are just obvious, yet they can’t be derived: the problem of finding them is too simple to be solved algorithmically. And yet it is obvious how important the notion of natural numbers is.
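To see how little is “derived” here, this is what the postulate looks like when written down formally, e.g. in Lean 4 (wrapped in a namespace so it doesn’t clash with Lean’s built-in Nat): the natural numbers are simply declared, not deduced from anything weaker.

```lean
namespace Hidden

inductive Nat where
  | zero : Nat            -- there is a natural number 0
  | succ (n : Nat) : Nat  -- every number has a successor that is also a number

end Hidden
```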
Even the best AI will always be fundamentally incapable of finding some very simple yet fundamental principles.
AI will always rely on the axioms it already knows; it can’t go beyond them (unless reprogrammed by something external). Every new thing it learns can only be learned in terms of already known axioms. This is simply a consequence of the fact that computers/programs function according to fixed rules. But general intelligence necessarily has to transcend rules (since, at the very least, the rules can’t be determined by rules).
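As a toy illustration of what “fixed rules” means here (the rule system below is made up): a forward-chaining engine can only ever derive the closure of its axioms under its rules; no run of it produces a genuinely new axiom.

```python
# Entirely illustrative: a fixed-rule system derives the closure of its
# axioms under its rules, and nothing outside that closure.

axioms = {"A"}
rules = [("A", "B"), ("B", "C")]  # (premise, conclusion) pairs

def closure(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(closure(axioms, rules))  # {'A', 'B', 'C'} -- never, say, 'D'.
# A new axiom has to come from outside the system.
```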


I don’t think this is an argument against a singularity of ever-improving intelligence. It just can’t happen driven (solely or predominantly) by AI, whether through a recursively self-improving seed AI or through cognitive augmentation. Instead, we should expect a singularity that happens due to emergent intelligence. I think it is the interaction of different kinds of intelligence (like human/animal intuitive intelligence, machine precision and the inherent order of the non-living universe, if you want to call that intelligence) that leads to increases in general intelligence, not just one particular kind of intelligence like the formal reasoning used by computers.