Does Kolmogorov complexity imply a bound on self-improving AI?

The Kolmogorov complexity (“K”) of a string (“S”) is the size of the smallest Turing machine that outputs that string. If a Turing machine (equivalently, by the Church-Turing thesis, any AI) has size smaller than K, then no matter how much it rewrites its own code, it will never be able to output S. To be precise, it can of course output S as part of an enumeration of all possible strings, but it cannot decide on S and output it exclusively among the options available. The key step is that a deterministic chain of self-rewrites is itself a single computation, so anything it ever emits has complexity bounded by the size of the seed program plus a constant (both points are spelled out in the sketches after the list below). Now suppose that S is the source code for an intelligence strictly better than any mind of complexity less than K. We are left with three options:

  1. The space of all maximally intelligent minds has an upper bound on complexity, and we have already reached it.

  2. The universe contains new information that can be used to build minds of greater complexity, or

  3. There are levels of intelligence that are impossible for us to reach.
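
To make the load-bearing step explicit, here is a sketch of the bound in symbols. This is my formalization of the informal argument above, fixing a universal machine U, writing |p| for the length of a program p, and assuming the self-rewriting chain runs with no external input (the constants c, c′ depend only on U):

```latex
% K(x): the length of the shortest program for x on a fixed universal machine U
\[ K(x) = \min\{\, |p| \;:\; U(p) = x \,\} \]

% A program's output is never more complex than the program itself,
% since p is already a description of U(p):
\[ K\bigl(U(p)\bigr) \le |p| + c \]

% A deterministic, input-free chain of self-rewrites
% p_0 \to p_1 \to p_2 \to \cdots that eventually emits code S
% is itself a single computation, so S is a computable function
% of the seed alone, and the same bound applies:
\[ K(S) \le |p_0| + c' \]

% Hence if K(S) > |p_0| + c', the closed system seeded with p_0
% can never single out and output S.
```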
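
And to make the enumeration caveat concrete, here is a minimal sketch in Python (the target “1011” is an arbitrary placeholder of my own): a constant-size program that eventually prints any S whatsoever, precisely because it carries no information about which string matters.

```python
from itertools import count, product

def all_strings(alphabet: str = "01"):
    """Yield every finite string over `alphabet`, in length order.

    Any target S eventually appears in this stream, but the generator
    is the same fixed-size program for every S: it never decides on S,
    so finding S in its output tells us nothing about K(S).
    """
    yield ""
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

# The target is supplied from outside; the enumerator itself knows nothing of it.
for s in all_strings():
    if s == "1011":
        print("reached", s)
        break
```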

Now, this isn’t meant as a practical argument against AI being useful. I have no doubt that merely by applying, at the speed of integrated circuits, the principles humanity has already shown it can apply, we could climb many orders of magnitude in intellectual and productive ability. But it’s a serious wake-up call to anyone who thinks that a self-improving AI will achieve everything that can possibly be achieved.