If our technological dominance stems from our ability to process culture, however, then the case for a discontinuous jump in capabilities is weaker. This is because our AI systems can already process culture somewhat efficiently right now (see GPT-2), and there doesn't seem to be a hard separation between "being able to process culture inefficiently" and "being able to process culture efficiently", other than the initial jump from not being able to do it at all, which we have already passed.
I think you’re giving GPT-2 too much credit here. I mean, on any dimension of intelligence, you can say there’s a continuum of capabilities along that scale with no hard separations. The more relevant question is, might there be a situation where all the algorithms are like GPT-2, which can only pick up superficial knowledge, and then someone has an algorithmic insight, and now we can make algorithms that, as they read more and more, develop ever deeper and richer conceptual understandings? And if so, how fast could things move after that insight? I don’t think it’s obvious.
I do agree that pretty much everything that might make an AGI suddenly powerful and dangerous is in the category of "taking advantage of the products of human culture", for example: coding (recursive self-improvement, writing new modules, interfacing with preexisting software and code), taking in human knowledge (reading and deeply understanding books, videos, Wikipedia, etc., a.k.a. "content overhang"), computing hardware (self-reproduction / seizing more computing power, a.k.a. "hardware overhang"), the ability of humans to coordinate and cooperate (social manipulation, earning money, etc.), and so on. In all these cases and more, I would agree that one could in principle define a continuum of capabilities from 0 to superhuman with no hard separations, but still think that it's possible for a new algorithm to jump from "2019-like" (which is more than strictly 0) to "really able to take advantage of this tool like humans can or beyond" in one leap.
Sorry if I’m misunderstanding your point.