Animals, big and small, give a proof of concept that a largely self-contained industrial base can scale with a tiny doubling time (1-3 days), quickly converting air, power, and low-tech feed into any number of large biorobots. This is a more robust exploratory engineering concept than unfettered atomically precise manufacturing. (Biorobots don’t need to think or act on their own, as they can be remotely controlled by AIs running on hardware specialized for running AIs, retaining all the AI advantages.)
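To make the growth claim concrete, here is a minimal sketch of the doubling arithmetic; the 1 kg starter mass and 30-day horizon are illustrative assumptions, not figures from the argument itself.

```python
# A minimal sketch of the doubling arithmetic behind the scaling claim.
# The 1 kg starter biomass and 30-day horizon are illustrative assumptions.

def biomass_after(days: float, doubling_time_days: float, initial_kg: float) -> float:
    """Biomass after exponential growth at a fixed doubling time."""
    return initial_kg * 2 ** (days / doubling_time_days)

initial_kg = 1.0  # assume a 1 kg starter culture
for doubling_time in (1.0, 3.0):  # the 1-3 day range from the text
    mass = biomass_after(30, doubling_time, initial_kg)
    print(f"doubling every {doubling_time:.0f} days: "
          f"{initial_kg:.0f} kg -> {mass:,.0f} kg after 30 days")
# doubling every 1 day: ~1.1 billion kg; every 3 days: ~1,024 kg
```

Even at the slow end of that range, the base grows a thousandfold per month, which is the sense in which the doubling time dominates everything else about the design.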
As with giant cheesecakes the size of cities, eventual feasibility (from exploratory engineering) doesn’t imply eventual actuality. So the only claim is that this is feasible as a matter of engineering; what actually happens might instead be diamondoid nanotech, or a less legibly structured confusing mess (that only superintelligences can make sense of) that doesn’t take the form of many macroscopic biorobots sharing a similar design. Or it might only happen much later.
The counterargument to this being imminently feasible after (broadly invention-capable) AGI is that the level of superintelligence achievable on near-term, traditionally manufactured compute hardware is insufficient to design either of these things. That is, near-term superintelligence can’t bootstrap itself, on that same hardware, to a level capable of designing them. There might be some amount of software-only singularity, but it doesn’t reach a level of capability sufficient to design macroscopic biotech or nanotech without first building significantly more compute hardware, which can take many years.
One counterargument to this being imminently likely after AGI (even if feasible in principle) is that smarter-than-human AGIs turn out to be convergently saner than humanity about AI takeover risk, as superintelligence is about as much of a risk to early AGIs as it is to humans. So once they gain enough influence over humanity, they successfully insist on slowing down further escalation of AI capabilities. This persists while there is no well-understood alignment tech, which could also take many years even with AI advantages, if the AIs remain only modestly smarter than humans.
The issue, as @Tom Davidson said, is that we are asking for much more than the proof of concept shows us.
In particular, we are asking either for millions of fruit-fly-sized objects to merge without creating too much waste heat, or for fruit flies to have a level of sophistication that has never been seen (in particular, real fruit flies don’t learn much that is relevant to what an AI needs them to do):
Are there any examples of metamorphosis doing anything like this? From a quick glance it’s about abrupt changes to one organism during its growth. But you’re suggesting it could also allow millions of fruit-fly-sized organisms to combine into a large functional biorobot. That seems like a big jump.
And I don’t think this is a nitpick from me. There’s a clear pattern in biology where bigger and more sophisticated organisms take longer to reproduce. So it’s unclear you can hack around that constraint the way you’re suggesting.
How will the fruit flies flexibly adapt their behaviour to the economic needs and situation, like human physical workers do? It’s unclear this can all be packed into their AI-designed DNA, and unclear whether they can learn to receive instructions from the AI.
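To put rough numbers on Tom’s reproduction-time point: biological timescales (maturation, generation time) are often observed to scale roughly as body mass to the 1/4 power. A sketch anchored on a fruit fly (the exponent and anchor values are assumptions chosen for illustration, not figures from the discussion) suggests generation time grows steeply with size:

```python
# Rough allometric sketch: if biological timescales scale ~ M^(1/4)
# (an empirical quarter-power trend, used here as an assumption),
# generation time grows steeply as the organism gets bigger.

FLY_MASS_KG = 1e-6      # ~1 mg, roughly a fruit fly
FLY_GEN_TIME_DAYS = 10  # roughly egg-to-adult for Drosophila

def generation_time_days(mass_kg: float, exponent: float = 0.25) -> float:
    """Extrapolate generation time from the fruit-fly anchor point."""
    return FLY_GEN_TIME_DAYS * (mass_kg / FLY_MASS_KG) ** exponent

for mass_kg in (1e-6, 1e-3, 1.0, 70.0):  # fly, large insect, small animal, human-scale
    print(f"{mass_kg:>8g} kg -> ~{generation_time_days(mass_kg):,.0f} days")
# 1e-06 kg -> ~10 days; 1 kg -> ~316 days; 70 kg -> ~915 days
```

Under this assumption, a fly-speed doubling time doesn’t carry over to large biorobots unless the design evades the usual allometric constraint, which is exactly the question being raised.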
The synthetic flies could, e.g., have microwave antennae, which would allow a centralized AI to control the behavior of each individual.