This argument is, however, nonsense. The human capacity for abstract reasoning over mathematical models is, in principle, a fully general intelligent behaviour.
While it’s true that humans are Turing complete, computability is not the only barrier to understanding. Brains are, compared to some computers, quite slow and imperfect at storage.

Suppose that understanding the output of a super-intelligence would require, in human terms, the effort of a thousand-year-long computation written out with the aid of a billion sheets of paper. That output would not be unintelligible in principle, but it doesn’t matter, because nobody will ever actually understand it.

If you want a more principled approach, you can combine Chaitin’s incompleteness theorem and Blum’s speed-up theorem to show that, whatever level of complexity is intelligible to a human being, there is always a better machine whose output is, for that human, indistinguishable from randomness.
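To see why "intelligible in principle" is irrelevant here, a back-of-envelope calculation is enough. The reading rate below is an illustrative assumption (one dense page per minute, with no breaks and no re-reading), not a measurement; even so, merely reading a billion-page derivation once already exceeds many human lifetimes:

```python
# Back-of-envelope: how long just to READ a billion-page derivation?
# The reading rate is an illustrative assumption, not a measurement.
PAGES = 1_000_000_000           # a billion sheets of paper
PAGES_PER_MINUTE = 1            # generous pace for dense mathematics
MINUTES_PER_YEAR = 60 * 24 * 365

years_to_read = PAGES / (PAGES_PER_MINUTE * MINUTES_PER_YEAR)
print(f"{years_to_read:.0f} years just to read it once")  # ~1903 years
```

And that is only reading, before any checking or understanding takes place.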
My suspicion is that you are making the same mistake you accuse LWers of making: reasoning by analogy, you see errors in what are really missing steps in your own understanding. What an irony.