I’m not sure what the difference is between what you’re saying here and what I said about QNIs. Is it that you expect to be able to see the emergent technology before the singular (crossover) point? Actually, the fact that you describe DL as “currently useless” makes me think we should be talking about progress as a function of two variables: time and “maturity”, where maturity inhabits, roughly speaking, a scale from “theoretical idea” to “proof of concept” to “beats SOTA in lab conditions” to “commercial product”. In this sense, the “lab progress” curve is already past the DL singularity, but the “commercial progress” curve maybe isn’t.
On this model, if post-DL AI technology X appears tomorrow, it will take it some time to span the distance from “theoretical idea” to “commercial product”, in which time we would notice it and update our predictions accordingly. But, two things to note here:
First, it’s not clear which level of maturity is the relevant reference point for AI risk. In particular, I don’t think you need commercial levels of maturity for AI technology to become risky, for the reasons I discussed in my previous comment (and, we can also add regulatory barriers to that list, although I am not convinced they are as important as Yudkowsky seems to believe).
Second, all this doesn’t sound to me like “AI systems will grow relatively continuously and predictably”, although maybe I just interpreted this statement differently from its intent. For instance, I agree that it’s unlikely technology X will emerge specifically in the next year, so progress over the next year should be fairly predictable. On the other hand, I don’t think it would be very surprising if technology X emerges in the next decade.
IIUC, part of what you’re saying can be rephrased as: TAI is unlikely to be created by a small team, since once a small team shows something promising, tons of resources will be thrown at them (and at other teams that might be able to copy the technology), and they won’t be a small team anymore. Which sounds plausible, I suppose, but it doesn’t make TAI predictable that long in advance.