I disagree with the author: I believe the universe is ‘compact’ in the sense that what we know is a close approximation of what is knowable by the method.
For example, I believe our ancestors knew everything they could possibly know without extra background knowledge or specialized modes of thinking.
Similarly, I believe we now know nearly everything about the universe (the rules, not specific tools), given the use of mathematics and sophisticated logic. And we do a much, much better job than our ancestors at predicting the world. So I am skeptical that superintelligence can exist.
You may say: how can you assume that what we have today is the final stage of logic? Why can’t this be the very beginning? First, as I said, we have by now nearly explained the universe. Second, we have so many interconnected people that we exceed any community of earlier times by orders of magnitude. So even though logic is harder, I think it is mostly explored.
But this wouldn’t rule out AGI, for AGI differs from superintelligence in that we know it is possible (humans, sort of), and it is also much less powerful than superintelligence.
But one thing we must remember is the hollow nature of economic size. An economy is basically about extracting resources to satisfy our needs. Currently, a large economy is meaningful because it means more resources for each human. And even if a flood of AGI agents increases resource extraction, sums of money in the accounts of mere sophisticated algorithms wouldn’t mean anything. At least, not to me.
I think many assign a much higher probability to the existence and usefulness of superintelligence than the evidence warrants. My intuition is that such claims require the universe to have much more structure than we can currently detect. This is because our observations are highly accurate these days (at least in fundamental sciences like physics), and our scientific theories give very powerful explanations for them.
The point is that superintelligence depends as much on the properties of the world as on the algorithms themselves. The same argument applies to usefulness: even if a superintelligence exists, it cannot do impossible tasks.
This is the main reason for my skepticism toward what I term AI magicalism, in which a good superintelligence is expected to magically solve death, while an evil superintelligence can magically doom humanity. And if my skepticism is warranted, the idea of AI becomes unappealing to me, because it is likely to create AGI that can automate all good jobs and destroy human motivation, but would not bring forth any miracles to balance out the sacrifice.