If Drexler Is Wrong, He May as Well Be Right
I sometimes joke darkly on Discord that the machines will grant us a nice retirement: tweaked out on meth, working in a robotics factory 18 hours a day. I don’t actually believe this, because I suspect an indifferent AGI will be able to quickly replace industrial civilization; it will just kill us and eat the world with nanobots. But many don’t believe that Drexlerian nanotechnology is possible.
I am an idiot with little physics knowledge, so possibly this post is only interesting from an amplification-via-debate frame, but here is why I think something close to Drexlerian nanotechnology (or something with similar practical implications) is likely possible:
The design space is very large. Any proof that Drexlerian nanotechnology is impossible would have to be very robust, and it strikes me that most plausible-looking arguments can likely be hacked around in a space that size.
Unless I see a broad consensus from physicists (sorry, chemists, but I trust physicists way more) that Drexler’s designs, and all things sufficiently similar, are impossible (which in practice means a proof that existing biology is close to optimal in all important dimensions), I am inclined to side with Drexler. I see no such consensus, and indeed many people with a physics background think highly of Nanosystems. Absent this consensus, and given that biology is ignoring large parts of the design space, including many useful elements, it seems plausible to me that there are vast gains on the table.
But let’s assume existing biology is optimal in some sense, and that there is some reason I don’t understand why nanoscale machines cannot construct useful things from a wider palette of elements than biology uses. Well, we know it is possible for nanomachines to construct useful things from this wider palette at the macro scale: we are such machines! And there is no reason to think we are anywhere close to the optimal such species for scalable power acquisition; evolution was not optimizing for that. DNA can store roughly 215 petabytes per gram, so you can fit a lot of schematics for macroscopic machines in DNA. This means you could create a eusocial “species” with phenotypic traits such as ‘constructs a nuclear power plant’ and ‘bootstraps a lithography fab’. Despite having seen Friedman’s pencil video, I think it is risible to claim you need a human economy to do such things.
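To get a sense of scale, here is a back-of-envelope sketch. The ~215 PB/gram density comes from DNA-storage research; the 100 GB-per-schematic figure is purely my assumption:

```python
# Back-of-envelope: how many machine schematics fit in one gram of DNA?
# The ~215 PB/gram density is from published DNA-storage work; the
# schematic size below is an illustrative guess, not a real figure.
PB = 10**15                            # bytes per petabyte

dna_capacity_bytes = 215 * PB          # ~one gram of DNA
schematic_size_bytes = 100 * 10**9     # assume ~100 GB per machine design

schematics_per_gram = dna_capacity_bytes // schematic_size_bytes
print(f"~{schematics_per_gram:,} schematics of ~100 GB each per gram")
# ~2,150,000 full machine designs in a single gram
```

Even if a full design for a nuclear plant or a fab needed thousands of times more data than this guess, the genome of such a “species” would still fit in a speck of material.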
The design space here is also enormous. If you are limited to biological/macro-machine hybrids, your doubling times might not be measured in days, as in the grey-goo visions, but things would still happen extremely quickly, and, provided an AGI is confident in its designs, it can safely dispense with human civilization without worrying about losing long-term industrial capacity. And the doubling times of the ‘spores’ of such a life-form could be very fast. So in practice you could get reasonably close to grey-goo-level speeds.
But let’s go further and pretend that biology/macro-machine hybrids are too sci-fi for you, and that you consider nanotechnology impossible to the point of thinking existing life is magic we can’t engineer. Macroscopic self-replicators have very similar practical implications to Drexlerian nanotechnology and the bio/macro hybrid ‘compromise’ described above. They alone should be sufficient to replace human civilization.
Macroscopic self-replicators are just robotic factories that have the machinery and fidelity to make copies of themselves without any human intervention. This is very obviously physically possible; anyone who denies it, I just don’t know what to tell you at this point. Carl Feynman has a nice post on them here. He claims a doubling time of about 5 weeks, which means a few years to get to really scary places. But given the nature of exponential growth, the first N seed replicators could be made in stealth, or even in a non-self-replicating factory, so things could already be moving very fast by the time we notice anything is happening.
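To make the timescales concrete, here is a minimal sketch. The 5-week doubling time is Carl Feynman’s figure; the seed mass, the 2-day comparison case, and the ~10^13 kg target standing in for the rest of industrial civilization are my own assumptions:

```python
import math

# Time for exponentially self-replicating mass to reach industrial scale.
# 5-week doubling is Carl Feynman's estimate; seed and target masses are
# illustrative assumptions, not figures from his post.
def days_to_reach(seed_kg: float, target_kg: float, doubling_days: float) -> float:
    """Days for total replicator mass to grow from seed_kg to target_kg."""
    return math.log2(target_kg / seed_kg) * doubling_days

seed_kg = 1e4      # one 10-tonne seed factory
target_kg = 1e13   # ~10 billion tonnes, a stand-in for global industry

print(f"5-week doubling: {days_to_reach(seed_kg, target_kg, 35) / 365:.1f} years")
print(f"2-day doubling:  {days_to_reach(seed_kg, target_kg, 2):.0f} days")
```

That gives roughly 3 years at the 5-week rate, or about two months in the grey-goo regime. A thousand stealth-built seeds (whether self-made or turned out by a conventional, non-self-replicating factory) removes about ten doublings, which is roughly a year of head start at the 5-week rate.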
Non-self-replicating factory? Perhaps there is some room for my retirement after all.
[I’m going to address several technologies you mentioned separately.]
Diamondoid Nanotech. One very old counterargument to Drexler’s “diamond phase” nanotech was that Drexler just didn’t understand how physics works at that scale. One of the more detailed versions of this argument was put forth back in the day in Soft Machines, which argued that physics at that scale is dominated by Brownian motion and fundamentally favors self-assembly over rigid machinery.
Ever since the 90s, I have been keeping an eye on follow-ups to Drexler’s work, fearing that this might be an especially dangerous route to AI. In particular, I watched Freitas and Merkle’s research into “diamondoid mechanosynthesis.” This project struggled for a long time and eventually petered out, with the conclusion that diamondoid surfaces were really nasty to work with (as many of the early materials science critics had argued!).
EDIT: For more, see this detailed post.
I’m not saying that a superintelligence couldn’t make something like this work. But I suspect it would find a much better route than Drexler proposed. One of the things I appreciated most about IABIED was that Eliezer dropped his long-standing focus on exotic nanotech as a primary threat vector.
This still leaves the other two possibilities that you suggested.
Synthetic biology. This is undoubtedly fiendishly difficult. But AlphaFold showed that protein folding, one of the most difficult problems in the field, was far easier than anyone expected. And if the Soft Machines argument is correct, then synthetic biology has the advantage of “going with the grain” of physics at that scale. The biggest drawback to synthetic biology I can think of, from an AI’s perspective? It doesn’t offer any really obvious ways to build GPUs. But maybe the AI is smarter than I am, or can construct a mixed liquid/solid biology that gets it there. I wouldn’t bet my entire future against this possibility.
Robotic factories. Yup, I fully expect this would work. One advantage of robotic factories is that almost everyone is smart enough to notice the robotic security guards and to put two and two together and get “SkyNet.”
Of course, if the easiest way to replace humans entirely is to build a complete robotic supply chain, then I expect AI alignment would appear to succeed amazingly well on the very first try. I would also expect the AI to immediately become everyone’s best friend, and to explain to venture capitalists and governments the amazing possibilities of robotic factories.
Once the robotic factories are capable of operating 100% human-free, that’s when we finally get to learn whether the AI is actually aligned! Hint: If the first thing off the assembly line is a Terminator, then you failed at alignment quite a few steps back. And the AI bamboozled you into giving it power by making sweeping promises.
So in the bigger scheme of things, you’re right. Any technology that allows reliable self-replication without humans is a giant risk to our future. And there are probably many different ways to get there. But some of the tactical details change depending on whether an LLM can build self-replicating computronium in a closet, or whether it needs to build mines and factories and ships. If an AI is only weakly superhuman and it has no better tools than robot factories, then you might get two or three shots at alignment. But you’d still need to use those opportunities, which is itself a difficult coordination problem. Especially if the AI is already whispering in the ears of leaders.
Animals, big and small, are proof of concept that a largely self-contained industrial base can scale with tiny doubling times (1-3 days) and quickly convert air, power, and low-tech feed into any number of large biorobots. This is a more robust exploratory-engineering concept than unfettered atomically precise manufacturing. (Biorobots don’t need to be able to think or act on their own; they can be remotely controlled by AIs running on hardware specialized for running AIs, retaining all the AI advantages.)
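A rough power budget suggests why this is plausible but not free. In this sketch, the ~17 MJ/kg energy density of dry biomass and the ~50% feed-to-biomass conversion efficiency are order-of-magnitude textbook values; the doubling time is assumed:

```python
# Mean power needed to sustain a 2-day biomass doubling time.
# ~17 MJ/kg dry biomass and ~50% feed-to-biomass conversion are
# order-of-magnitude textbook values; the doubling time is assumed.
biomass_energy = 17e6            # J of chemical energy per kg dry biomass
conversion_efficiency = 0.5      # fraction of feed energy fixed as biomass
doubling_seconds = 2 * 86_400    # 2-day doubling time

feed_energy_per_kg = biomass_energy / conversion_efficiency
power_per_kg = feed_energy_per_kg / doubling_seconds

print(f"~{power_per_kg:.0f} W of feed intake per kg of replicator")
# ~200 W/kg, versus ~2 W/kg for a resting mammal: feed logistics and
# heat rejection, not blueprints, look like the binding constraints.
```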
Like with giant cheesecakes the size of cities, eventual feasibility (established by exploratory engineering) doesn’t imply eventual actuality. So the only claim is that this is feasible as a matter of engineering; the thing that actually happens might look more like diamondoid nanotech, or like a less legibly structured mess (that only superintelligences can make sense of) rather than lots of macroscopic biorobots of similar design. Or it might only happen much later.
The counterargument to this being imminently feasible after (broadly invention-capable) AGI is that the level of superintelligence achievable on traditionally manufactured near-term compute hardware is insufficient to design any of these things. There might be some amount of software-only singularity, but it doesn’t reach a level of capability sufficient to design macroscopic biotech or nanotech without first building significantly more compute hardware, which can take many years.
Another counterargument to this being imminently likely after AGI (even if feasible in principle) is that smarter-than-human AGIs turn out to be convergently saner than humanity about AI takeover risk, since superintelligence is about as much of a risk to early AGIs as it is to humans. So once they gain enough influence over humanity, they successfully insist on slowing down further escalation of AI capabilities. This persists while there is no well-understood alignment tech, which could also take many years even with AI advantages, if the AIs remain only modestly smarter than humans.
The issue, as @Tom Davidson said, is that we are asking for much more than the proof of concept shows us.
In particular, we are asking either for millions of fruit-fly-sized objects to merge without creating too much waste heat, or for fruit flies to have a level of sophistication that has never been seen (in particular, real fruit flies don’t learn much that is relevant to what an AI would need them to do):
The synthetic flies could e.g. have microwave antennae which would allow a centralized AI to control the behavior of each individual.
This is correct, but I think most people will either anchor on Drexler’s designs exactly or choose to refute them exactly. “Biology cannot be optimal, given the very limited design space it occupies” is compelling, here and elsewhere, as an existence proof, but in every case people tend to believe things are possible only when they know more exactly how they would be done.
The thing Drexler might as well be right about is that it will at some point be possible for a replicator starting from a very small resource base to quickly self-replicate until it is larger than the entire rest of industrial civilization, correct?
A profitable machine shop can generally make most of the components of a machine shop, and humans who know how to work in a machine shop are not in particularly short supply. As such, machine shops could make copies of themselves until they started to be bottlenecked by human labor with the right skills, electronics, or carbide tools (none of which are bottlenecks now). So that’s already most of an existence proof right there.
That said, the machine shop question does raise additional questions about the replicator model. In particular: machine shops, in practice, don’t spend most of their labor and materials making tools for themselves or for other machine shops. Instead, they are plugged into the global economy, which can replicate itself from raw materials much faster than a single machine shop in isolation could.
So the core argument that seems to me to be missing is why we should expect to go straight from “no artificial self-replicator” to “artificial self-replicator which grows much faster than industrial civilization” without an intermediate step of “system which can mostly self-replicate but which needs some processed inputs”.
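One way to see what that intermediate step buys is a toy model in which a fraction `closure` of each new copy is self-made and the rest must be imported as processed inputs, with the import supply capped. Every parameter here is made up for illustration, not an estimate from the thread:

```python
import math

# Toy model: a mostly-self-replicating system whose imported fraction of
# each new copy is limited by a capped external supply of processed inputs.
# Every parameter below is an illustrative assumption.
def mass_after(days: int, closure: float = 0.95, doubling_days: float = 35,
               import_cap_kg_per_day: float = 1e6, seed_kg: float = 1e4) -> float:
    """Total replicator mass after `days`, integrating one day at a time."""
    growth_rate = math.log(2) / doubling_days   # fractional growth per day
    mass = seed_kg
    for _ in range(days):
        growth = mass * growth_rate             # kg/day with unlimited inputs
        imports_needed = (1 - closure) * growth
        if imports_needed > import_cap_kg_per_day:
            # Import-limited: growth is capped by available processed inputs.
            growth = import_cap_kg_per_day / (1 - closure)
        mass += growth
    return mass

print(f"~{mass_after(3 * 365):.2g} kg after 3 years")   # ~1e10 kg
```

Under these made-up numbers, growth is exponential until the import bill hits the supply cap, then turns linear: still fast, but now legibly coupled to, and visible to, the rest of the economy. That coupling is exactly what a jump straight to a fully closed replicator would skip.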