[I’m going to address several technologies you mentioned separately.]
Diamondoid Nanotech. One very old counterargument to Drexler’s “diamond phase” nanotech was that Drexler just didn’t understand how physics works at that scale. One of the more detailed versions of this argument was put forth back in the day in Soft Machines, which argued that Brownian motion dominates at that scale, and that self-assembly, rather than rigid machinery, is what fundamentally makes sense there.
Ever since the 90s, I have been keeping an eye on followups to Drexler’s work, fearing that this might be an especially dangerous route for an AI. In particular, I watched the research results of Freitas and Merkle into “diamondoid mechanosynthesis.” This project struggled for a long time and eventually petered out, with the conclusion that diamondoid surfaces were really nasty to work with (as many of the early materials science critics had argued!).
EDIT: For more, see this detailed post.
I’m not saying that a superintelligence couldn’t make something like this work. But I suspect it would find a much better route than Drexler proposed. One of the things I appreciated the most about IABIED was that Eliezer dropped his long-standing focus on exotic nanotech as a primary threat vector.
This still leaves the other two possibilities that you suggested.
Synthetic biology. This is undoubtedly fiendishly difficult. But AlphaFold showed that protein folding, one of the most difficult problems in the field, was far easier than anyone expected. And if the Soft Machines argument is correct, then synthetic biology has the advantage of “going with the grain” of physics at that scale. The biggest drawback to synthetic biology I can think of, from an AI’s perspective? It doesn’t offer any really obvious ways to build GPUs. But maybe the AI is smarter than I am, or can construct a mixed liquid/solid biology that gets it there. I wouldn’t bet my entire future against this possibility.
Robotic factories. Yup, I fully expect this would work. One advantage of robotic factories is that almost everyone is smart enough to notice the robotic security guards and to put two and two together and get “SkyNet.”
Of course, if the easiest way to replace humans entirely is to build a complete robotic supply chain, then I expect AI alignment would appear to succeed amazingly well on the very first try. I would also expect the AI to immediately become everyone’s best friend, and to explain to venture capitalists and governments the amazing possibilities of robotic factories.
Once the robotic factories are capable of operating 100% human-free, that’s when we finally get to learn whether the AI is actually aligned! Hint: If the first thing off the assembly line is a Terminator, then you failed at alignment quite a few steps back. And the AI bamboozled you into giving it power by making sweeping promises.
So in the bigger scheme of things, you’re right. Any technology that allows reliable self-replication without humans is a giant risk to our future. And there are probably many different ways to get there. But some of the tactical details change depending on whether an LLM can build self-replicating computronium in a closet, or whether it needs to build mines and factories and ships. If an AI is only weakly superhuman and it has no better tools than robot factories, then you might get two or three shots at alignment. But you’d still need to use those opportunities, which is itself a difficult coordination problem. Especially if the AI is already whispering in the ears of leaders.