We get AI whose world-model is, fully generally, vastly more precise and comprehensive than that of a human. We go from having AI which is seated in human data and human knowledge, whose performance is largely described in human terms (e.g. “it can do tasks which would take skilled human programmers 60 hours, and it can do them for $100 in just a couple of hours!”), to having AI which is impossible to describe in such terms… e.g. “it can do tasks whose methods, and whose purpose, we simply cannot comprehend, even with the AI there to explain them to us, because our brains are biological systems, subject to the same kinds of constraints that all such systems are subject to, and therefore we simply cannot conceptualise most of the logical leaps one must follow to understand the tasks the AI is now carrying out”.
It looks like vast swathes of philosophical progress, most of which we cannot follow. It looks like branches of mathematics humans cannot participate in. And similarly for all areas of research. It looks like commonly-accepted truths being overturned. It looks like these things coming immediately to the AI. The AI does not have to reflect over the course of billions of tokens to overturn philosophy; it comes naturally, as a result of having a larger, better-designed brain. Humanity evolved its higher-reasoning faculties in an evolutionary blink of an eye, with a small population, in an environment which hardly rewarded higher reasoning. AI can design AI which is not constrained by human data: in other words, intelligence which is created sensibly rather than by happenstance.
Whether we survive this stage comes down to luck. With x-risk perspectives on AI safety having fallen by the wayside, we will have to hope that the primitive AI which initiates the recursive self-improvement is able and motivated to ensure that the AI it creates has humanity’s best interests at heart.