The “fruit flies” are the source of growth, so the relevant anchor is how long it takes to manufacture a lot of them. Let’s say there are 1000 “flies” of 1 mg each to start, doubling in number every 2 days, and we want to produce 10 billion 100 kg robots (approximately the total mass of all humans and cattle). That’s 1e15x more mass, or about 50 doublings (since 2^50 ≈ 1e15), so it will take 100 days to produce. Anchoring to the animal kingdom, metamorphosis takes a few days to weeks, which doesn’t significantly affect the total time.
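As a sanity check, here is the arithmetic behind those numbers in a minimal sketch (all figures are the ones given above):

```python
import math

initial_mass_kg = 1_000 * 1e-6    # 1000 "flies" at 1 mg each -> 1 g total
target_mass_kg = 10e9 * 100       # 10 billion robots at 100 kg -> 1e12 kg

mass_ratio = target_mass_kg / initial_mass_kg   # 1e15
doublings = math.log2(mass_ratio)               # ~49.8
days = doublings * 2                            # 2-day doubling time
print(f"{mass_ratio:.0e}x mass, {doublings:.1f} doublings, ~{days:.0f} days")
# -> 1e+15x mass, 49.8 doublings, ~100 days
```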
things will try to eat and parasitize the things you build
I’m assuming the baseline of existing animals such as cattle that are doing OK, not completely novel design.
When do you expect the first attempt?
I’m not assuming a software-only singularity (substantially increasing compute efficiency and intelligence of AIs), only the ~100x faster-than-human automated R&D that’s necessary for it (but possibly not sufficient). The AIs instead develop better biology modeling software (and all the theory that requires), to the point where only months’ worth of compute and actual experiments would be necessary to fix important discrepancies with ground truth, making engineering of a wide variety of functional biological robots feasible.
So the overall prediction is that 1-2 years from hitting automated R&D, even without a software-only singularity, there could be a ~humanity-sized workforce that is instantly customizable for any physical manipulation purpose and could subsequently double every 2 days, constrained only by raw materials and ability to convert them into power plants and factories to sustain themselves.
What does it mean that the fruit flies are a source of growth? Is the idea to use them as raw biomass?
Because if the goal is “get a billion metric tons of dry biomass”, I expect it is already straightforward to use agricultural waste. Global rice straw (the stuff left in the fields after the rice is harvested) production alone is already about a billion metric tons per year. At $165/ton, it would be a bit pricey—about $165B, over one and a half times the $100B OpenAI is immediately deploying for their Stargate data center—but very much manageable for a big company if the expected payoff was there.
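A quick check on that price comparison, using just the figures quoted above:

```python
straw_tons = 1e9            # ~annual global rice straw production, metric tons
price_per_ton = 165         # USD/ton, the price quoted above
stargate_budget = 100e9     # USD, OpenAI's immediate Stargate deployment

cost = straw_tons * price_per_ton
print(f"${cost / 1e9:.0f}B total, {cost / stargate_budget:.2f}x the Stargate outlay")
# -> $165B total, 1.65x the Stargate outlay
```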
I don’t think raw biomass is a meaningful bottleneck. If your timeline had a couple-year period for “time it takes for the AI to establish control over a billion tons of biomass”, I think you should remove that period from your timeline.
The AIs instead develop better biology modeling software (and all the theory that requires), to the point where only months’ worth of compute and actual experiments would be necessary to fix important discrepancies with ground truth, making engineering of a wide variety of functional biological robots feasible.
I think this is where the crux is. In software, you can take a system with a bug, determine where the bug is, fix it, deploy the fix, and have the fixed version running minutes to hours after you identified the bug.
With engineered biological systems, some “bugs” don’t manifest until the system has been running for weeks or months. Your cycle time, then, is not the generational time of your “fruit flies”, but the time it takes between when you start assembling a biorobot and when that biorobot starts doing useful work.
Maybe the crux is that you expect that it is feasible to construct biological simulations which perform well for long-term modeling, even when modeling something where that simulation has not been tuned to match observational data from that domain, and I expect that not to be a thing that is available at near-future levels of compute.
there could be a ~humanity-sized workforce that is instantly customizable for any physical manipulation purpose and could subsequently double every 2 days, constrained only by raw materials and ability to convert them into power plants and factories to sustain themselves
I mean if you have a billion tons of biomass sitting around and the ability to “program” that biomass into biorobots, I don’t think it particularly makes sense to talk about the “doubling time” of biorobots—biorobots aren’t a meaningful bottleneck to the production of more biorobots, so once you have one you can go straight to having a billion. I think the difficult part is the bit where you get one biorobot that functions how you want it to.
The manufacturing process starts with AI-designed DNA, which is used to produce a few “flies” using scarce biotech equipment, and then those flies are used to manufacture a billion tons of cells with AI-designed DNA within ~100 days, using only low-tech inputs like cheap feed. The cells form the bodies of the “flies”, and the “flies” can assemble into large robots using something like metamorphosis, repurposing the cells. So by the flies being the source of growth, I mean growth in the amount of high-tech capital: cells with AI-designed DNA capable of assembling into functional large robots.
I don’t think raw biomass is a meaningful bottleneck.
The whole point of going with biological robots and then “fruit flies” is that this removes bottlenecks on volume production, once the cells have been designed and the first few “flies” can be manufactured. And once there is a billion tons of robots, they can rapidly set up more infrastructure than the human industry would be able to, and continue the process.
biorobots aren’t a meaningful bottleneck to the production of more biorobots
That’s why you need the “flies”. Since megafauna can’t double their numbers every 2 days, without the “flies” the number of biorobots would be a bottleneck to production.
With engineered biological systems, some “bugs” don’t manifest until the system has been running for weeks or months.
The point of developing simulation software/models is to become able to run accurate high-speed simulations of biological organisms, in order to fix bugs in cell/fly/robot designs faster than the experiments on them could be performed in the physical world. Feedback from the physical experiments that will still be performed is probably more useful for fixing the errors in the simulation software/models, rather than for directly fixing the errors in cell/fly/robot designs.
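A minimal toy sketch of that division of labor (every name and number here is made up for illustration): the scarce physical experiments are spent calibrating the simulator, and design debugging then happens against the calibrated simulator at simulation speed.

```python
import random

TRUE_RATE = 0.37          # hypothetical ground-truth biological parameter

def physical_experiment(dose):
    """Slow, scarce real-world measurement (noisy)."""
    return TRUE_RATE * dose + random.gauss(0, 0.01)

sim_rate = 0.50           # simulator's initial, wrong estimate
for trial in range(5):    # tiny physical-experiment budget
    observed = physical_experiment(dose=1.0)
    predicted = sim_rate * 1.0
    # Use the discrepancy to fix the *simulator*, not the design:
    sim_rate += 0.5 * (observed - predicted)
    print(f"experiment {trial}: simulator rate now {sim_rate:.3f}")
# Design bugs are then hunted in fast runs against the calibrated
# simulator, far faster than real-world generations would allow.
```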
Maybe the crux is that you expect that it is feasible to construct biological simulations which perform well for long-term modeling, even when modeling something where that simulation has not been tuned to match observational data from that domain, and I expect that not to be a thing that is available at near-future levels of compute.
Sure, this seems similar to intuitions about the impossibility of a software-only singularity. The point of the macroscopic biotech example is that it doesn’t depend on either superintelligence or much higher compute efficiency in AI. But it does depend on high-efficiency, high-accuracy, long-horizon simulations of large biological organisms based on engineered DNA being possible to develop in 100-200 years of human-equivalent theory/software progress (the ~100x R&D speedup sustained over 1-2 calendar years), with limited opportunity to run physical experiments.
Feedback from the physical experiments that will still be performed is probably more useful for fixing the errors in the simulation software/models, rather than for directly fixing the errors in cell/fly/robot designs.
This is something I want to poke at a bit, because it seems like a pretty core disagreement.
In a completely different domain, do you expect something like DGMR (DeepMind’s precipitation nowcasting ML thingy, basically a GAN over weather radar maps) would work better than non-ML weather models at predicting US weather after being trained only on UK weather? I expect not, and I don’t expect the reason it wouldn’t work is anything like “the ML engineers weren’t clever enough”.
The bio simulations need to be more clever than ML on bio data; they need to incorporate feedback from simulations of more basic/fundamental/general principles of chemistry and physics. Making this possible is what the 100-200 subjective years of R&D are for.
I’m not confident 100-200 subjective years of R&D help enough, for the same reason I don’t think 100-200 years spent studying and modeling UK weather data would be enough to predict US weather well enough to make money in crop futures markets. Training on UK data would definitely help more than zero at predicting US weather, but “more than zero” is not the bar.
Similarly, 200 years of improvements to biological simulations would help more than zero with predicting the behavior of engineered biosystems, but that’s not the bar. The bar is “build a functional general purpose biorobot more quickly and cheaply than the boring robotics/integration with world economy path”. I don’t think human civilization minus AI is on track to be able to do that in the next 200 years.
Similarly, 200 years of improvements to biological simulations would help more than zero with predicting the behavior of engineered biosystems, but that’s not the bar. The bar is “build a functional general purpose biorobot more quickly and cheaply than the boring robotics/integration with world economy path”. I don’t think human civilization minus AI is on track to be able to do that in the next 200 years.
I don’t think it’s on track to do so, but this is mostly because the coming population decline makes technological regression very likely.
If I instead assumed that the human population would expand in a similar manner to the AI population, and was willing to rewrite/ignore regulations, I’d put a >90% chance that we could build bio-robots more quickly and cheaply than the boring robotics path in 100-200 years.