That is, you’d need very strongly ASI-level understanding of biology to accomplish this
That’s in some sense close to the premise, though I think fast high-fidelity chemistry/biology simulators (or specialized narrow AIs) should be sufficient to get this done even at near-human level, with enough subjective time and simulation compute. My point is that “fruit flies”/biorobots should be an entry on a list that contains both traditional robots and nanotech as relevant for post-AGI industry scaling. There are some perceived difficulties with proper nanotech that don’t apply to this biorobot concept.
In the other direction, a sufficiently effective software-only singularity would directly produce strong ASIs on existing hardware, without needing more compute manufactured first, and so won’t need to bother with human labor or traditional robots, which again doesn’t fit the list in this post. So the premise from the post is more that software-only singularity somewhat fizzles, and then AGI-supercharged industry “slowly” scales to build more compute, until enough time has passed and enough compute has been manufactured that nanotech-level things can be developed. In this setting, the question is whether macroscopic biotech could be unlocked even earlier.
(So I’m not making a general/unconditional prediction in this thread. Outside the above premises I’m expecting a software-only singularity that produces strong ASI on existing hardware without having much use for scaling traditional industry first, though it might also start scaling initially for some months to 1-2 years, perhaps mostly to keep the humans distracted, or because AGIs were directly prompted by humans to make this happen.)
Given the premises, I guess I’m willing to grant that this isn’t a silly extrapolation, and absent them it seems like you basically agree with the post?
However, I have a few notes on why I’d reject your premises.
On your first idea, I think high-fidelity biology simulators require so much understanding of biology that they are subsequent to ASI, rather than a replacement for it. And even then, you’re still trying to find something by searching an exponential design space—which is nontrivial even for AGI with feasible amounts of “unlimited” compute. Not only that, but the thing you’re looking for needs to do a bunch of stuff that probably isn’t feasible due to fundamental barriers (not identical to the ones listed there, but closely related to them).
On your second idea, a software-only singularity assumes that there is a giant compute overhang for some specific buildable general AI that doesn’t even require specialized hardware. Maybe so, but I’m skeptical; the brain can’t be simulated directly via deep NNs, which is what current hardware is optimized for. And if some other hardware architecture using currently feasible levels of compute is devised, there still needs to be a massive build-out of these new chips before “enough compute has been manufactured that nanotech-level things can be developed.” But that means you again assume that arbitrary nanotech is feasible, which could be true, but as the other link notes, certainly isn’t anything like obvious.
(It’s useful to clearly distinguish exploration of what follows from some premises, and views on whether the premises are important/likely/feasible. Issues with the latter are no reason at all to hesitate or hedge with the former.)
But that means you again assume that arbitrary nanotech is feasible, which could be true, but as the other link notes, certainly isn’t anything like obvious.
I mentioned arbitrary nanotech, but it’s not doing any work there as an assumption. So it being infeasible doesn’t change the point about macroscopic biotech possibly being first, which is technically still the case if nanotech doesn’t follow at all.
Various claims that nanotech isn’t feasible are indeed the major reason I thought about this macroscopic biotech thing, since existing biology is a proof of concept, so some of the arguments against feasibility of nanotech clearly don’t transfer. It still needs to be designed, and the difficulty of that is unclear, but there seem to be fewer reasons to suspect it’s not feasible (at a given level of capabilities).
The macroscopic biotech that accomplishes what you’re positing is addressed in the first part, and in the earlier comment where I note that you’re assuming ASI-level understanding of bio for exploring an exponential design space for something that isn’t guaranteed to be possible. The difficulty isn’t unclear, it’s understood not to be feasible.
Fwiw, I’m happy to grant some chance that we skip the “robot” phase and go straight to nanotech or advanced small-scale biotech. The three stages of the post weren’t meant to preclude skipping a stage, and I agree with you that we should broaden our ‘nanotech’ category to include small-scale biotech.