Adding atoms in individual steps is what you do in organic synthesis with catalysts all the time. The trick here is just to make side reactions very, very rare, and one step toward that is not using solvents, because they are chaotic; use bigger enclosing catalysts instead.
Countless individually rare events will each cause a failure. The machinery has to do enough work during its lifetime to contribute enough new parts to compensate for those failures. It does not need to be error free, just low-error enough to be usable.
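To make that break-even condition concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it (per-operation error rate, operations per part, parts per machine, lifetime operations) is invented purely for illustration; only the shape of the calculation matters.

```python
# Rough break-even sketch. Every number here is made up for illustration.
p = 1e-9                     # assumed error probability per placement operation
N = 1_000_000                # assumed operations needed per finished part
parts_per_machine = 10_000   # assumed number of parts a machine is built from
ops_lifetime = 1e12          # assumed operations a machine performs before wearing out

yield_per_part = (1 - p) ** N          # chance a part comes out defect-free
parts_attempted = ops_lifetime / N
good_parts = parts_attempted * yield_per_part

# Replication is viable when each machine contributes more good parts,
# on average, than are consumed to build its own replacement.
print(f"part yield      : {yield_per_part:.4f}")
print(f"good parts made : {good_parts:,.0f}")
print(f"self-sustaining : {good_parts > parts_per_machine}")
```

With these invented numbers the machine turns out roughly 999,000 good parts against the 10,000 it is built from, so even a fairly leaky process can stay well ahead of its own failure rate.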
The leading hypothesis for the origin of life is that very poor-quality replicators (basically naked RNA) evolved in a suitable host environment and were able to do exactly this: copy themselves slightly faster than they degraded. Replicators of this kind have been demonstrated in the laboratory.
So far I haven’t really heard any objections other than that we are really far from the infrastructure needed to build something like this. Tentatively, I assume the order of dependent technology nodes is:
Human-level narrow AI → general-purpose robotics → general-purpose robotic assembly at macroscale → self-replicating macroscale robotics → narrow AI research systems → very large scale research complexes operated by narrow AI.
The fundamental research algorithm is this:
The AI needs a simulation to determine whether a candidate design is likely to work or not. So the pipeline is:
(sim frame) → engine stage 1 → neural network engine → predicted frames, uncertainty
This is recursive, of course: you predict n frames in advance by feeding the prior predicted frames back in.
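A minimal sketch of that pipeline and the recursive rollout, assuming placeholder callables (engine_stage_1 for the cheap first-stage pass, nn_engine for the learned correction); none of this is a real API, it only shows how predictions get fed back in and how uncertainty accumulates.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    frame: object        # predicted simulation state
    uncertainty: float   # the model's confidence estimate for this frame

def predict_next(frame, engine_stage_1, nn_engine):
    """One pipeline step: (sim frame) -> engine stage 1 -> neural network
    engine -> (predicted frame, uncertainty). Both engines are stand-ins."""
    coarse = engine_stage_1(frame)         # cheap physics / heuristic pass
    next_frame, sigma = nn_engine(coarse)  # learned correction plus uncertainty
    return Prediction(next_frame, sigma)

def rollout(frame, n, engine_stage_1, nn_engine):
    """Predict n frames ahead by feeding each prediction back in.
    Uncertainty compounds, which is what flags frames worth experimenting on."""
    preds, total_sigma = [], 0.0
    for _ in range(n):
        p = predict_next(frame, engine_stage_1, nn_engine)
        total_sigma += p.uncertainty       # crude accumulation; a real system
        preds.append(p)                    # would propagate full distributions
        frame = p.frame
    return preds, total_sigma
```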
The way an AI can do science is the following:
(1) identify simulation environment frames that are relevant to its end goal and have high uncertainty
(2) propose experiments to reduce uncertainty
(3) sort the experiments by a heuristic combining cost and expected information gain
(4) perform the top 1,000 or so experiments in parallel, update the model, and go back to step (1). A rough sketch of this loop follows below.
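Here is that loop as a sketch, with every object in it (the model, the experiment proposals, the lab interface) being a hypothetical placeholder rather than any real system:

```python
import heapq

def research_cycle(model, candidate_frames, propose_experiments, run_in_lab,
                   budget=1000):
    """One pass of the loop: find high-uncertainty frames, propose experiments,
    rank them by information gain per unit cost, run the top `budget` of them,
    and update the model. All callables here are hypothetical stand-ins."""
    # (1) frames relevant to the end goal where the model is least certain
    uncertain = [f for f in candidate_frames
                 if model.uncertainty(f) > model.threshold]

    # (2) experiments that would shrink that uncertainty
    proposals = [e for f in uncertain for e in propose_experiments(f)]

    # (3) heuristic ranking: expected information gain per unit cost
    batch = heapq.nlargest(budget, proposals,
                           key=lambda e: e.expected_info_gain / e.cost)

    # (4) run the batch robotically (ideally on heterogeneous equipment),
    #     fold the results back into the model, and start over
    results = run_in_lab(batch)
    model.update(results)
    return model
```

The ranking in step (3) is deliberately simple, expected information gain divided by cost; any heuristic monotonic in value per unit of time and money would slot in the same way.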
All experiments are obviously robotic, ideally with heterogeneous equipment (different brands of robot, different apparatus, different facilities, different funding sources, different software stacks).
Anyway, that’s how you unlock nanoforges: build thousands or millions of STMs (scanning tunneling microscopes) and investigate this in parallel. Likely not achievable without the dependent tech nodes above.
The current model is that individual research groups have what, 1-10 STMs? A small team of a few grad students? And they encrypt their results in a “research paper” deliberately written to be difficult for humans to read even when they are well educated? So even if there were a million labs investigating nanotechnology, nearly every paper they write is read by only a few of the others. Negative results and raw data are seldom published, so each lab repeats the same mistakes that others have already made thousands of times.
This doesn’t work. It only worked for the lower-hanging fruit. It’s the model you discover radioactivity or the transistor with, not the model you use to build an industrial complex that crams most of the complexity of Earth’s industrial chain into a small box.