This comic by Tim Urban is interesting, but I remember that when I first read it, it seemed wrong.
In his framework, I think ASI can only be quantitatively more powerful than human intelligence, not qualitatively.
The reason is simple: humans are already Turing complete. Anything a machine can compute, a human could in principle compute too, given enough time and paper; the machine only executes the same steps faster.
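To make that concrete, here is a toy sketch of my own (nothing from the comic): every step of a Turing machine is a mechanical table lookup plus a tape write and a head move, which a sufficiently patient human could carry out with pencil and paper; a computer just does the same lookups enormously faster. The particular machine and rule table below are made up for illustration.

```python
# Minimal Turing-machine simulator. Each step is a table lookup, a write,
# and a head move - all steps a human could execute by hand, just slower.

def run(rules, tape, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))              # sparse tape, blanks are "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: scan right over 1s, write one more 1, then halt
# (i.e., it increments a unary number).
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run(rules, "111"))  # -> "1111"
```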
I don’t think it has much bearing on the wider discussion of AI and AI risk, though; I haven’t heard anyone else argue that the quantitative/qualitative distinction matters for AI risk.
I don’t think it matters much for practical purposes. Some problems may be theoretically solvable by human intelligence but take longer than the age of the universe for us to actually solve, and others may simply be beyond us; either way, an ASI that solves them in a day leaves us in the dust, and the reason why becomes secondary at that point.
I feel like one problem with solving problems intelligently is that it’s rarely as simple as grinding through a tedious task in small pieces: you need an intuition that sees the whole path in a coarse light, and then you refine each individual step. So there’s a fast algorithm that says “I know I can do this, I don’t know how yet,” and then we slowly unpack the relevant bits. And I think there could be a qualitative effect to, e.g., being able to hold more steps in memory simultaneously.