With regard to the super-scientist AI (the equivalent of global human R&D), wouldn’t we see it coming based on the amount of resources it would need to acquire? Are you claiming that it could reach the required AGI capability in its “brain in a box in a basement” state, and only scale up its resource use afterward? The part I’m most skeptical about remains the idea that the resources needed to reach human-level performance are minimal if you just find the right algorithm, because, at least in my view, this neglects the evaluation step in learning, which can be resource-intensive from the start and perhaps can’t be done “covertly”.
---
That said, I want to stress that I agree with the conclusion:
> So we need to be working frantically on technical alignment, sandbox test protocols, and more generally having a plan, right now, long before the future scary paradigm seems obviously on the path to AGI.
> (And no, inventing that next AI paradigm is not part of the solution, but rather part of the problem, despite the safety-vibed rhetoric of the researchers who are doing exactly that as we speak—see §1.6.1.)
But then, if AI researchers believe a likely scenario is:
> the development of strong superintelligence from a small group working on a new AI paradigm, with essentially no warning and little resources,
does that imply that the people who work on technical alignment, or at least their allies, also need to put effort into “winning the race” for AGI? It seems that the idea that “any small group could create this with no warning” could motivate acceleration in that race, even among people who are well-meaning about alignment.