Once non-superintelligent AGI can build domain-specific narrow superintelligences, it can generate synthetic data that seamlessly integrates their capabilities into general intelligence (possibly as modalities) without requiring general superintelligence to generate it, circumventing projections from LLM-only growth. In particular, related to what ChristianKl talks about in the other reply, formal proof seems like an important case of this construction: it could suddenly allow LLMs to understand textbooks and papers that their general intelligence alone wouldn’t be sufficient to figure out, opening the way to build on that understanding while staying anchored to the capabilities of the narrow formal proof superintelligence (built by humans in this case).
I’m also assuming that, at any finite intelligence level, neither the “just run more of them” approach nor the “just run them faster” approach can be scaled indefinitely: the first hits resource limits, and the second hits limits set by the speed of light and the distance between atoms.
The point of “just run them faster” is that it circumvents projections based on any particular AGI architecture, because it allows discovering alternative architectures from the distant future within months. At that point it’s no longer “just run them faster”, but something much closer to whatever is possible in principle. And because of the contribution of the “just run them faster” phase, this doesn’t take decades or centuries: singularity-grade change comes from both the “just run them faster” phase and the subsequent phase that exploits its discoveries, each taking very little time on a human scale.