This comic by Tim Urban is interesting, but I remember that when I first read it, it struck me as wrong.
In his framework, I think ASI can only be quantitatively more powerful than human intelligence, not qualitatively.
The reason is simple: humans are already Turing complete. Anything a machine can compute, a human could in principle compute too, by carrying out the same steps with pencil and paper; the machine's advantage is only speed.
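To make the Turing-completeness point concrete, here is a minimal sketch in Python (my own illustration, not anything from Urban's post): a tiny one-tape Turing machine simulator. Every step is a trivial table lookup that a human could carry out by hand; the machine's only edge is how many such steps it performs per second.

```python
# A minimal one-tape Turing machine simulator. Each step is a simple
# table lookup and a single-cell write: read the symbol under the head,
# look up (state, symbol) in the rule table, write a symbol, move the
# head, and change state. A human with pencil and paper could do exactly
# this, just far more slowly.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Run `rules`: a dict mapping (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))  # sparse tape; blank cells read as "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine (a made-up toy): a unary incrementer that scans right
# past the 1s, appends one more 1, and halts.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine(rules, "111"))  # -> "1111"
```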
That said, I don’t think this has much bearing on the wider discussion of AI and AI risk; I haven’t heard anyone else argue that the quantitative/qualitative distinction matters for AI risk.