I really like this post, I hope to see more like it on LessWrong, and I strong-upvoted it.
Thanks, glad you liked it. You made quite the comment here, but I’ll try to respond to most of it.
Metal surfaces: Why not just build up the metal object in an oxygen-free environment, then add on an external passivation layer at the end?
To build up metal, you need to carry metal atoms somehow. That requires moving ions, because otherwise there’s no motive force for the transfer, plus your carrier would probably be stuck to the metal.
Without proteins carrying ions in water, this is difficult. The best version of what you’re proposing is probably directed electrochemical deposition in some solvent that has a wide electrochemical window and can dissolve some metal ions. Such solvents would denature proteins.
Inputs and outputs need to be transferred between compartments. Cells do use “airlock” type structures for transferring material, but some leakage would be inevitable.
The passivation layer could be engineered to be more stable than the naturally occurring oxidation layer the metal would normally have. There would still be a minimum size for metal objects, of course. (Or more precisely, a minimal curvature.) Corrosion could definitely be a problem, but cathodic protection might help in some cases.
It’s true that proteins can be designed to bind strongly to metal oxide surfaces and inhibit corrosion fairly well. That’s actually an interesting research topic that might be useful for steel. But even that isn’t good enough on such a small scale, and you’d need to fully cover all exposed surfaces.
The only other options for “engineering” more-stable surfaces are metal nitrides or carbides, but those require high temperatures; that’s not something enzymes can do.
Cathodic protection doesn’t help here. It doesn’t maintain a perfect equilibrium, and objects would still undergo Ostwald ripening and tend to become more spherical.
Agree that electrostatic motors are the way to go here. I’m not sure the power supply necessarily has to be an ion gradient, nor that the motor would otherwise need to be made from metal. Metal might be actively bad, because it allows the electrons to slosh around a lot.
I’m not sure what you mean by electrons “sloshing around”.
What about this general scheme for a motor? Wheel-shaped molecules, with sites that can hold electrons. A high voltage coming from a power supply deposits electrons on one wheel and produces holes on the other wheel. The corresponding sites are attracted to each other, and once they get close enough, the electrons jump into the holes, filling them. Switching is determined by the proximity of various sites on the wheels as they move relative to each other. Considering how electrons are able to jump around between sites in the electron transport chain, this doesn’t seem impossible.
It’s certainly possible to make electromechanical computers with relays. And it’s possible to use MEMS electrostatic actuators for relays. They’re just not as good as semiconductors for computers. The MEMS relay approach is actually used in some devices for handling high-frequency radio signals.
Consider the analogous versions of ionic and electrostatic motors, and think about which is better:

- Ionic motors use tubes filled with water instead of conductive wires with insulation; those transmit signals more slowly but are easier to make.
- Ionic motors can dump ions into solution instead of needing a conductor at a lower voltage.
- Ionic motors don’t have to deal with possible unintentional electrolysis.
- Ion gates are much easier to make with proteins than electrical switches.
Electrostatic motors are generally switched for each rotational step, but consider: if you want to compete with the energy usage of ionic motors, you can only use a few electron-volts per rotation. Semiconductor switches and relays are not well suited to metering out individual electrons.
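To put rough numbers on “a few electron-volts per rotation”: the ΔG per ATP and the ATP-per-rotation figure below are approximate literature values for cellular conditions and F1-ATPase, assumed here for illustration rather than taken from the post.

```python
# Back-of-envelope energy budget for a rotary molecular motor, in eV.
# Assumes ~50 kJ/mol free energy per ATP in vivo and ~3 ATP per full
# rotation (as in F1-ATPase) -- approximate, assumed values.
AVOGADRO = 6.022e23   # 1/mol
J_PER_EV = 1.602e-19  # joules per electron-volt

dG_atp_J_per_mol = 50e3                             # ~50 kJ/mol (assumed)
ev_per_atp = dG_atp_J_per_mol / AVOGADRO / J_PER_EV  # ~0.5 eV
ev_per_rotation = 3 * ev_per_atp                     # ~1.6 eV

print(f"{ev_per_atp:.2f} eV per ATP")
print(f"{ev_per_rotation:.2f} eV per rotation")
# Moving even one electron through a 1 V switch already costs 1 eV,
# so the budget allows only a handful of electrons per rotation.
```

Under these assumptions the whole rotation budget is only one or two electrons’ worth of energy at ordinary switching voltages, which is the constraint on per-step electronic switching.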
Instead of sites that hold electrons, why not use sites that hold ions?
An intermediate design that I can imagine is a block with a series of tubes and chambers embedded in it. (The bulk of the block can hold electronics.) Most of the tubes are filled with water, so nanobot components can happily bounce around in them. But lots of components are also mounted to the walls of the tubes. You can’t clump together if you’re bonded to the wall of your tube. A small minority of tubes can be filled with gas, or even under vacuum, for any weird processes that may require those conditions. Pumping energy is volume times pressure, so the energy requirements could be reasonable as long as the volume is small.
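The volume-times-pressure estimate can be made concrete. The chamber size and pressure below are illustrative assumptions, not figures from the comment:

```python
# Energy to evacuate or pressurize a nanoscale gas chamber: E ~ P * V.
# A 100 nm cube at 1 atm is assumed purely for illustration.
side_m = 100e-9                     # 100 nm cube edge (assumed)
volume_m3 = side_m ** 3             # 1e-21 m^3
pressure_Pa = 101325                # ~1 atm

energy_J = pressure_Pa * volume_m3  # ~1e-16 J

atp_J = 8.3e-20                     # ~50 kJ/mol per ATP / Avogadro
print(f"{energy_J:.1e} J ~ {energy_J / atp_J:.0f} ATP equivalents")
# => on the order of a thousand ATP per pump-down of this chamber
```

So whether the cost is “reasonable” depends heavily on how small the gas-filled minority of tubes can be kept.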
That’s basically the same proposal as having gas-filled compartments in large cells, so this applies:
Any self-replicating cell must move material between outside and multiple compartments. Gas leakage by the transporters would be inevitable. Cellular vacuum pumps would require too much energy and may be impossible. Also, strongly binding the compounds used (e.g. CO2) to carriers at every step would require too much energy. (“Too much energy” means too much to be competitive with normal biological processes.)
If the objects bound to the walls of gas-filled compartments have movable arms, on a small enough scale, those arms would also get stuck to (or away from) the walls by electrostatic and dispersion forces.
The reason for doing things at high temperatures is to do reactions with a high activation energy. If we’re designing custom catalysts (artificial enzymes) for our nanobots, we can probably finesse it so that the enzyme coaxes the reactants into the high-energy intermediate state, even if the ambient temperature is low (via coupling to a more favorable reaction, for example).
Yes, enzymes can catalyze some difficult reactions. The main tools they use for that are hydrogen bonding patterns that stabilize specific conformations, and electrostatic fields around the active site. There are also P450 oxidases that put a reactive site in a hydrophobic pocket and use it to oxidize hydrocarbons in a semi-controlled way, so that they can be processed further.
But enzymes aren’t magic. They have limitations. For example, methane can be metabolized by some bacteria, but it’s always oxidized to methanol in a reaction that consumes NAD(P)H. Half the energy of the methane is wasted to get it to a state that can be metabolized, and there’s just no way around that.
Another notable limitation of enzymes is the difficulty of making aliphatic hydrocarbons. That’s why hydrophobic stuff is almost always fatty acids or terpenes from DMAPP.
I’m fairly familiar with protein mechanisms and their limitations. Is there some other type of mechanism you’re proposing for low-temperature catalysis, something that enzymes don’t already use?
Also, covalent single-bonds can rotate, so there’s nothing preventing the existence of a covalently bonded structure that can also exhibit conformational changes.
Yes, proteins are covalently bonded. They’re also non-covalently bonded. If all their structure were covalent, they wouldn’t be able to undergo conformational changes. And because some of it is non-covalent, they denature at high temperature.
Also, I’d guess that the stupidity of evolution has left a lot of low hanging fruit for humans. For example, rather than trying to do a reaction with proteins, we can do it with a group of complicated catalysts synthesized by proteins.
I think the word for that is “cofactor”.
I’d bet on diamond synthesis still being possible somehow, but it does seem like a genuinely complicated question, so I’ll have to look into it further.
OK. Maybe you’ll learn something from the attempt.
Doesn’t have to be rigid; it can still be connection-based. For example, there could be simple protein-based building blocks that act like Legos. An assembly head can assemble these, and move around on the surface of the part it’s building by accepting signals that correspond to “move 1 block left”, etc. Position is always exactly known, not because there’s a rigid beam anywhere in the system, but because we know the exact integer number of steps the assembly head has moved since the start.
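The move-counting idea can be sketched as a toy model. Everything here (the 2D grid, the command set, the `run` helper) is invented purely for illustration:

```python
# Toy model of a tape-driven assembly head: position is known exactly
# from the integer count of moves executed, with no rigid frame anywhere.
MOVES = {"L": (-1, 0), "R": (1, 0), "U": (0, 1), "D": (0, -1)}

def run(tape):
    """Execute a command string; 'P' places a block at the current
    position, any other symbol steps the head one grid unit."""
    x, y = 0, 0
    placed = set()
    for cmd in tape:
        if cmd == "P":
            placed.add((x, y))
        else:
            dx, dy = MOVES[cmd]
            x, y = x + dx, y + dy
    return (x, y), placed

# Place three blocks in a row, then one block above the last.
pos, blocks = run("PRPRPUP")
print(pos, sorted(blocks))
```

The head’s coordinates are derived entirely from the tape, which is the commenter’s point about not needing a rigid positioning beam.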
OK, suppose you have a linear motor (like myosin) which is controlled by a signal (like a DNA sequence) that indicates a series of movements. (Something more computer-like would be less efficient than that). Also remember that on a molecular scale, energy-efficient = reversible. ATPase spins in both directions.
Compared to coding for a protein sequence, you’re using more information and more energy to do this. It’s also rather difficult to get single-protein-spacing-level control.
So, you’re imagining something like a protein with regularly spaced sites that can be attached to, and something that travels along it, with an enzyme-like tooltip that can bind to those sites to do a reaction that connects something. And that is...actually similar to how cytoskeletons work, but obviously they’re not directly controlled by DNA or RNA.
My general picture is that whenever a cell decides to make proteins somewhere and then transport them somewhere else, that costs genome space, which is very limited, so the cell can’t do that very often. Nanobots genuinely have different constraints from life here; in particular, they have cheaper genome space, so they can have custom-designed pipes for every type of protein they use, with each pipe leading right to the chamber where that protein is used. Huge information cost, but if it makes things work much better, it’s probably worthwhile for nanobots. I totally believe that life is using those techniques exactly as much as is optimal for it, though.
Specifying positions with positioners would require more bits of information than coding for proteins. DNA has high information density, and something much more compact than that wouldn’t have strong enough binding to be read accurately.
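A rough way to compare the two information costs; every number below (structure size, moves per block, command-set size) is an illustrative assumption:

```python
import math

# Compare bits needed to specify a structure two ways (illustrative only):
# (a) as a protein sequence, (b) as explicit positioner move commands.
n_units = 100                  # residues / building blocks (assumed)

# (a) Protein sequence: each residue is one of 20 amino acids.
bits_protein = n_units * math.log2(20)          # ~4.3 bits per residue

# (b) Positioner tape: assume ~10 moves per placed block, drawn from a
#     6-command set (L/R/U/D/place/release) -- all assumed figures.
moves_per_block = 10
bits_positioner = n_units * moves_per_block * math.log2(6)

print(f"protein: {bits_protein:.0f} bits, positioner: {bits_positioner:.0f} bits")
```

Under these assumptions the positioner tape costs several times more bits for the same number of placed units, which is the asymmetry being claimed.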
In what sense would nanobots have “cheaper genome space” than current cells? What mechanism do you envision being used for information storage?
I also don’t know why you’re scoffing at the potential for building computers here. The supposed “embodied” computation of existing cells is currently computing things that are keeping us alive, which is great, but you can’t exactly solve any other important problems on it. It’s not a flexible universal computer, in the sense of a Turing machine that can run any program.
If the point of your nanobots is to be “like current life, but worse, except it also produces a computer” then I think the usual word for that is “neurons”. The resulting computer would need to be better than current systems.
You want to grow brains that work more like CPUs. The computational paradigm of CPUs is popular because it’s conceptually easy to program, but it has some problems. Error tolerance is very poor; CPUs can be crashed by a single bit-flip from cosmic rays. CPUs also have less computational capacity than GPUs. Brains work more like...neural networks; perhaps that’s where the name came from.
Drexler envisioned such mechanical computers being used to control internal processes as well; that’s why I made the comparison. According to some people, this would be an advantage over how cells work for controlling internal operations, but I disagree.
That framing is unnatural to me. I see “solving a problem” as being more like solving several mazes simultaneously. Finding or seeing dead ends in a maze is both a type of progress towards solving the maze and a type of progress towards knowing if the maze is solvable.