What 3d printing aesthetic? As I understand it, the core step of Drexlerian nanoassembly is that a target molecule is physically held in what is basically a mechanical jig. Feedstock gas is introduced to the target mechanically, obviously filtered enough that it's pure element-wise (though nanotechnology, like all chemistry, only operates on electron-cloud identity). The feedstock molecules are chosen so that bonding is energetically favorable. A chemical bond happens, and the new molecule is sent somewhere else in the factory.
The key note is that the proposal is to use machinery to completely enclose the chemistry and limit it to the one reaction you wanted. And the machinery doing this is specialized: it immediately starts working on the exact same bonding step again. It's similar to how nature does it, except that biological enzymes are floppy, which lets errors happen, and they rely on the properties of water to "cage" molecules and otherwise act as part of the chemistry, whereas the Drexler way would require an actual physical tool robotically pressed into place, forcing there to be exactly one possible bond.
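To make the one-station, one-bond idea concrete, here's a toy Python sketch of such a station. Everything in it is my own naming, nothing out of Drexler, and there's no real chemistry, just the control logic:

```python
# Toy model of one specialized assembly station: it performs exactly one
# bonding step, over and over. Hypothetical names, no real chemistry; this
# only captures the "enclosed, single-reaction" idea.

from dataclasses import dataclass

@dataclass(frozen=True)
class Molecule:
    name: str

class AssemblyStation:
    """A jig shaped so that only one input fits and one bond is possible."""

    def __init__(self, accepts: str, adds: str, produces: str):
        self.accepts = accepts      # the one target shape the jig fits
        self.adds = adds            # the one feedstock species introduced
        self.produces = produces    # the one energetically favorable product

    def cycle(self, target: Molecule, feedstock: Molecule) -> Molecule:
        # Anything other than the designed inputs is a fault, not a side
        # reaction: the enclosure makes other bonds geometrically impossible.
        if (target.name, feedstock.name) != (self.accepts, self.adds):
            raise ValueError("wrong molecule at station")
        return Molecule(self.produces)  # product is routed downstream

station = AssemblyStation(accepts="A", adds="B", produces="A-B")
print(station.cycle(Molecule("A"), Molecule("B")))  # Molecule(name='A-B')
```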
Did you read his books? I skimmed them and recall no discussion of direct printing; chemistry can't do that.
So a nanoforge, at a higher level, is all these assembly lines, each producing exactly one product. The larger molecules being worked on can be sent down variant paths, and at the larger subsystem and robotic-machinery assembly levels there are choices. By the point there are choices, these are big subassemblies of hundreds of daltons, just like how nature strings peptides out of fairly bulky amino acids.
Primarily, though, you should realize that while a nanoforge would be a colossal machine made of robotics, it can only make a limited "menu" of molecules and robotic parts, and in turn almost all of those parts are used in itself. When it isn't copying itself it can make products, but those products are all just remixes from this limited menu.
It's not an entirely blind process: robotic assembly stations can sense whether a large molecule is there, and they are shaped to fit only one molecule, so factory logic, including knowing when a line is "dead," is possible. (Dead lines can't be repaired, so you have to be able to route copies of whatever they were producing from other lines, and this slows the whole nanoforge down as it "ages"; it has to construct another complete nanoforge before something critical fails and it ceases to function.)
Similarly other forms of errors may be reportable.
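Here's a rough sketch of what I mean by factory logic, with made-up data structures; the point is just the routing and the throughput decay:

```python
# Minimal sketch: redundant lines per product, dead lines detected and
# dropped, demand rerouted to the survivors. Throughput decays as the
# forge "ages".

from collections import defaultdict

class Nanoforge:
    def __init__(self):
        self.lines = defaultdict(list)   # product name -> live line ids

    def add_line(self, product: str, line_id: int):
        self.lines[product].append(line_id)

    def mark_dead(self, product: str, line_id: int):
        # Dead lines can't be repaired; just stop routing work to them.
        self.lines[product].remove(line_id)

    def throughput(self, product: str, per_line_rate: float) -> float:
        # Aggregate rate falls as lines die; hit zero on anything critical
        # and the whole forge stops.
        return per_line_rate * len(self.lines[product])

forge = Nanoforge()
forge.add_line("bearing_race", 0)
forge.add_line("bearing_race", 1)
forge.mark_dead("bearing_race", 0)
print(forge.throughput("bearing_race", per_line_rate=100.0))  # 100.0, was 200.0
```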
What I like about the nanoforge hypothesis is that we can actually construct fairly simple programmatic goals for a superintelligent narrow AI to follow to describe what this machine is, as well as a whole tree of subgoals. For every working nanoforge there is an immense combinatorial space of designs that won't work, and this is recursively true down to the smallest discrete parts; as an optimization problem there is a lot of coupling. For instance, the small-molecule robotic assembly stations need to reuse as many parts as possible between them, because this shrinks the size and complexity of the overall machine.
This doesn’t subdivide well between design teams of humans.
Another coupling example: suppose that, after years of work, you discover a way to construct an electric motor at the nanoscale, and it scores best on a goal heuristic.
You then find it can’t be integrated into the housing another team was working on.
For an AI this is not a major problem: you simply remember the 1 million other motor and housing candidates you designed in simulation and begin combinatorially checking how they match up. In fact, you never really commit to one possibility; you just maintain lists of possibilities as you work toward the end goal.
I have seen human teams at Intel do this, but they would have a list length of 2: "if this doesn't work, here's the backup."
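Here's roughly what that candidate-list bookkeeping looks like, with toy stand-ins for the compatibility and scoring checks a real simulation would do:

```python
# Sketch of the keep-every-candidate approach: instead of committing to one
# motor and one housing, keep the ranked lists and search the pairings for
# the best *compatible* combination. compatible() and score() are toy
# stand-ins for real simulation checks.

from itertools import product

def best_compatible_pair(motors, housings, compatible, score):
    pairs = ((m, h) for m, h in product(motors, housings) if compatible(m, h))
    return max(pairs, key=lambda p: score(*p), default=None)

# Toy criterion: a motor fits a housing if its diameter is under the bore,
# and snugger fits score higher.
motors = [{"id": i, "diameter": d} for i, d in enumerate([4.1, 3.8, 5.0])]
housings = [{"id": j, "bore": b} for j, b in enumerate([4.0, 5.5])]
compatible = lambda m, h: m["diameter"] < h["bore"]
score = lambda m, h: -(h["bore"] - m["diameter"])

print(best_compatible_pair(motors, housings, compatible, score))
# -> motor 1 (3.8) in housing 0 (4.0), the snuggest compatible fit
```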
Right, by 3d printing I mean the individual steps of adding atoms at precise locations.
Like in the video you linked elsewhere—acetylene is going to leak through the seal, or it’s going to dissociate from where it’s supposed to sit, and then it’s going to at best get adsorbed onto your machinery before getting very slowly pumped out. But even adsorbed gas changes the local electron density, which changes how atoms bond.
The machinery may sense when it’s totally gummed up, but it can’t sense if unluckily adsorbed gas has changed the location of the carbon atoms it’s holding by 10 pm, introducing a small but unacceptable probability of failing to bond, or bonding to the wrong site. And downstream, atoms in the wrong place means higher chance of the machinery bonding to the product, then ripping atoms off of both when the machinery keeps moving.
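To put rough numbers on why even tiny per-step error rates bite at scale (the rates here are illustrative, not measured):

```python
# Back-of-envelope for "small but unacceptable": if each atom placement
# independently fails with probability p, a part built from n placements is
# defect-free with probability (1 - p)**n ~= exp(-p*n).

from math import exp

p = 1e-6        # assumed per-placement failure probability
n = 1_000_000   # placements in one sizable part

print((1 - p) ** n)   # ~0.368: a "one in a million" error rate still means
print(exp(-p * n))    # nearly two thirds of output is scrap at this scale
```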
Adding atoms in individual steps is what you do in organic synthesis with catalysts all the time. This is just trying to make side reactions very, very rare, and one step toward that is not using solvents, because they are chaotic; use bigger, enclosing catalysts instead.
Countless rare-probability events will cause failures. The machinery has to do sufficient work during its lifetime to contribute enough new parts to compensate for those failures. It does not need to be error-free, just low-error enough to be usable.
The current hypothesis for life is that very poor quality replicators—basically naked RNA—evolved in a suitable host environment and were able to do exactly this, copying themselves slightly faster than they degrade. This is laboratory verified.
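The threshold condition itself is simple enough to write down; all the rates here are invented for illustration:

```python
# A replicator (or nanoforge) persists only if it turns out more than one
# functional copy of itself during its own expected lifetime.

def persists(copy_rate: float, failure_rate: float, copy_yield: float) -> bool:
    """copy_rate: copies started per unit time; failure_rate: 1/lifetime;
    copy_yield: fraction of copies that actually work."""
    expected_good_copies = (copy_rate / failure_rate) * copy_yield
    return expected_good_copies > 1.0

print(persists(copy_rate=0.10, failure_rate=0.08, copy_yield=0.9))  # True
print(persists(copy_rate=0.10, failure_rate=0.08, copy_yield=0.7))  # False
```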
So far I haven't really heard any objections other than that we are really far from the infrastructure needed to build something like this. Tentatively I assume the order of dependent technology nodes is:
Human level narrow AI → general purpose robotics → general purpose robotic assembly at macroscale → self replicating macroscale robotics → narrow AI research systems → very large scale research complexes operated by narrow AI.
The fundamental research algorithm is this:
The AI needs a simulation to determine whether a candidate design is likely to work or not. So the pipeline is:

(sim frame) → engine stage 1 → neural network engine → predicted frames, uncertainty
This is recursive of course, you predict n frames in advance by using the prior predicted frames.
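A minimal sketch of that rollout, assuming the engine (stage 1 plus the neural network) can be treated as one callable returning a predicted frame and a per-step uncertainty:

```python
# Autoregressive rollout: each predicted frame feeds the next prediction,
# and uncertainty compounds over the horizon.

def rollout(frame, engine, n_steps):
    frames, total_sigma = [], 0.0
    for _ in range(n_steps):
        frame, sigma = engine(frame)
        total_sigma += sigma        # uncertainty compounds across the rollout
        frames.append((frame, total_sigma))
    return frames

toy_engine = lambda x: (0.9 * x, 0.05)   # stand-in, not a real simulator
print(rollout(1.0, toy_engine, 3))
# approx. [(0.9, 0.05), (0.81, 0.10), (0.729, 0.15)]
```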
The way an AI can do science is the following:
(1) identify simulation-environment frames that are relevant to the task of its end goal and have high uncertainty
(2) propose experiments to reduce uncertainty
(3) sort the experiments by a heuristic combining cost and information gain
(4) perform the top 1000 or so experiments in parallel, update the model, and go back to the beginning.
All experiments are obviously robotic, ideally with heterogeneous equipment (different brand of robot, different apparatus, different facility, different funding source, different software stack).
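Put together, the loop is something like this runnable toy; the model, proposal step, and "lab" are all hypothetical stand-ins, and only the control flow is the point:

```python
from dataclasses import dataclass
import random

@dataclass
class Experiment:
    target: str
    cost: float
    info_gain: float    # the model's own estimate, hence only a heuristic

class ToyModel:
    def __init__(self):
        self.uncertainty = {"frameA": 0.9, "frameB": 0.2, "frameC": 0.7}

    def most_uncertain_frames(self, k=2):
        return sorted(self.uncertainty, key=self.uncertainty.get,
                      reverse=True)[:k]

    def update(self, results):
        for target in results:
            self.uncertainty[target] *= 0.5   # experiments shrink uncertainty

def propose(target):
    # (2) candidate experiments that would reduce uncertainty on this frame
    return [Experiment(target, cost=random.uniform(1, 5),
                       info_gain=random.uniform(0, 1)) for _ in range(3)]

def research_step(model, k=2):
    targets = model.most_uncertain_frames()                  # (1)
    candidates = [e for t in targets for e in propose(t)]    # (2)
    candidates.sort(key=lambda e: e.info_gain / e.cost, reverse=True)  # (3)
    ran = [e.target for e in candidates[:k]]   # (4) top k run in parallel
    model.update(ran)                          # update model, loop to (1)

model = ToyModel()
research_step(model)
print(model.uncertainty)   # frameA and/or frameC shrink, frameB untouched
```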
Anyways that’s how you unlock nanoforges—build thousands or millions of STMs and investigate this in parallel. Likely not achievable without the dependent tech nodes above.
The current model is that individual research groups have what, 1-10 STMs? A small team of a few grad students? And they encrypt their results in a "research paper" deliberately designed to be difficult for humans to read even if they are well educated? So even if there were a million labs investigating nanotechnology, nearly all the papers they write are read by only a few of the others. Negative results and raw data are seldom published, so each lab repeats the same mistakes others have already made thousands of times.
This doesn't work. It only worked for lower-hanging fruit. It's the model you discover radioactivity or the transistor with, not the model you use to build an industrial complex that crams most of the complexity of Earth's industrial chain into a small box.