Getting to the point where mechanical engineering is “easy to verify” seems extremely challenging to me. I used to work in manufacturing. Basically everyone I know in the field has completely valid complaints about mechanical engineers who are mostly familiar with CAD, simulations, and textbook formulas, because they design parts that ignore real world manufacturing constraints. AI that designs with simulations seems likely to produce the same result.
Additionally, I would guess that today’s humanoid robots are already good enough on the mechanical side; they could become self-replicating if they were just more intelligent and dexterous.
One example of the sort of problem that could be difficult to simulate: I was working on a process where a robot automatically loaded parts into a CNC machine. The CNC machine produced metal chips as it removed material from the part. The chips would typically be cleared away by a stream of coolant from a mounted hose. Under certain angles of the hose, chips would accumulate in the wrong locations over the course of multiple hours, interfering with the robot’s placement of the part. Even if the hoses were initially positioned correctly, they could move over time, whether because someone bumped them while inspecting something or simply due to vibration.
Simulating how chips come off the part, how coolant flow moves them around the machine, and so on would require an incredible level of fidelity and could well be intractable. And this is a very constrained manufacturing task that barely has to interact with the real world at all.
In general, prototyping something that works is just pretty easy. The challenge is more:
How to manufacture something that will be reliable over the course of many years, even when falling, being exposed to dust and water, etc?
How to manufacture something efficiently at a good price and quality?
etc
I had some discussion on AI and the physical world here: https://www.lesswrong.com/posts/r3NeiHAEWyToers4F/frontier-ai-models-still-fail-at-basic-physical-tasks-a
Thank you for the real world feedback!
It does feel like current models are much better at software than hardware (especially after reading your post).
Do you think the difficulty (of simulating machines in the real world) is due to a lack of compute, or a lack of data?
E.g. if someone wanted to make a simulation of the CNC machine that includes material accumulating due to bad hose angles, would the main difficulty be a lack of computing power, or the tediousness of getting the entire machine into the simulation and adding random forces/damage to account for people bumping into hoses?
Everything about high-fidelity simulation would be a pain. For the chips problem, you would have to simulate how chips get thrown as the cutting tool removes material. I wouldn’t be surprised if accurately modeling this required going down to the level of atoms, especially since there are many materials, cutting tools, and tool geometries. This would be insanely expensive and annoying. The simulation also would basically never exactly match the real world: the cutting edge of the tool slowly wears, so even if the simulation were perfect at the beginning, it would become inaccurate as the tool wears.
You could probably develop heuristics that don’t require as accurate a simulation, but that would still be a lot of work and still wouldn’t exactly match the real world. Many important effects, like friction and elasticity, are really difficult to simulate. And making CAD models of everything is super tedious, so we mostly make models that are good enough, never exact.
Have you heard of the idea of training the model on a range of constants, in case your constants are off from the physical world (sometimes called domain randomization)? If the coefficient of friction changed a bit in the real world, I doubt humans would suddenly forget how to move; they would adjust pretty quickly. Making a model tolerant to the plausible range of sim2real errors might be possible without an accurate simulation or hand-crafted heuristics.
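To make the idea concrete, here is a minimal sketch of that training setup. Everything here is illustrative, not a real robotics API: the constants, their ranges, and the `run_episode` callback are assumptions standing in for a real simulator and policy update.

```python
import random

def sample_physics():
    """Sample one plausible set of physical constants for an episode.

    The ranges are made-up placeholders for the expected sim2real error.
    """
    return {
        "friction": random.uniform(0.3, 0.9),    # assumed plausible range
        "motor_gain": random.uniform(0.9, 1.1),  # +/-10% actuator error
        "mass_scale": random.uniform(0.95, 1.05),
    }

def train(num_episodes, run_episode):
    """Run training where every episode uses freshly sampled physics.

    Because the constants change each episode, the policy being trained
    cannot overfit to one exact value; it has to work across the range.
    `run_episode(physics)` is a caller-supplied function that simulates
    one episode under the given constants and returns its reward.
    """
    rewards = []
    for _ in range(num_episodes):
        physics = sample_physics()
        rewards.append(run_episode(physics))
    return sum(rewards) / len(rewards)
```

The key design choice is that randomization happens per episode, so a single trained model sees the whole range of constants rather than one fixed world.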
Yeah, this seems like a reasonable way to train a model that controls a robot. I was addressing the verifier for mechanical designs, and I’m not sure if it’s possible to verify mechanical designs to the same level as the output of computer programs.
:) yes, we shouldn’t be sure what is possible. All we know is that currently computer programs can be verified very easily, and currently mechanical designs are verified so poorly that good designs in simulations may be useless in real life. But things are changing rapidly.
How exact a simulation do you think we need, in order to avoid most conceptual problems (not robustness/accuracy problems)?
Let’s define a “conceptual problem” as one where the AI’s design ignores a real-world constraint as if it doesn’t exist, because the phenomenon occurs only in the real world, not in the simulation. This renders the AI’s design useless in real life.
Let’s define a “robustness/accuracy problem” as one where the AI’s design assumes the simulation is perfect and lacks the robustness to survive small errors in those assumptions.
Robustness may be improved by requiring the AI’s design to work in different versions of simulations, which vary from each other by as much as real life varies from them.
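That acceptance criterion can be sketched directly. This is a hedged toy, assuming some `simulate(design, constants)` scoring function exists; the perturbation sizes stand in for the expected gap between simulation and reality.

```python
def perturbed_variants(nominal, deviations):
    """Yield copies of the nominal constants, each scaled by one deviation.

    E.g. deviations of [-0.1, 0.0, 0.1] produce variants whose constants
    are 10% low, exactly nominal, and 10% high.
    """
    for dev in deviations:
        yield {key: value * (1 + dev) for key, value in nominal.items()}

def design_is_robust(design, simulate, nominal, deviations, threshold):
    """Accept a design only if it meets the score threshold in EVERY variant.

    A design tuned to one exact simulation will fail some perturbed copy,
    while a genuinely robust design passes them all.
    """
    return all(
        simulate(design, constants) >= threshold
        for constants in perturbed_variants(nominal, deviations)
    )
```

Requiring a pass in every variant (rather than on average) is what rules out designs that exploit one simulator’s exact constants.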
Accuracy may be improved when adapting the AI’s design to real life. If the AI invents a world of machines capable of self-replication (much faster than the human economy but slower than biological life) that works in simulation but not in the real world due to small inaccuracies (rather than conceptual problems), then adapting those machines to the real world may take a lot of work, but much less than inventing them from scratch.
What do you think the probability is that people can achieve simulations without debilitating conceptual problems (before it is too late to matter)?[1]
I’m currently at 50% but might change it a lot after thinking more
I’m not sure—I only worked in a pretty narrow range of the manufacturing / engineering space, and I know there’s a ton of domains out there that I’m not familiar with.
I also don’t think most of the problems are conceptual in the first place. As Elon Musk likes to say, making a working prototype is easy; manufacturing at scale is at least 10-100x harder. Conceptual work might be required for building self-replicating machines that take only raw material as input, though. I would typically imagine robots achieving self-replication by just building more robot factories. It seems pretty challenging for a self-replicating machine to produce microchips or actuators from raw material, but maybe there’s a way around this.
Oops, I think I’m using the wrong terminology because I’m not familiar with the industry.
When I say self-replicating machine, I’m referring to a robot factory. Maybe “self-replicating factory” would be a better description.
Biological cells (which self-reproduce) are less like machines and more like factories, and the incredible world of complex proteins inside a cell is like the sea of machines inside a factory.
I think a robot factory which doesn’t need human input can operate at a scale somewhere between human factories and biological cells, and potentially self-replicate far faster than the human economy (roughly 20 years) but slower than a biological cell (20 minutes, or about 0.00004 years).
Smaller machines operate faster. An object 1,000,000 times smaller is 1,000,000 times quicker to move a body length at the same speed (or energy density), 10,000 times quicker at the same power density, or 1,000 times quicker at the same acceleration. It can endure 1,000,000 times more acceleration with the same damage. (Bending/cutting stays at 1x speed at the same power density, but our economy would grow many times faster if that became the only bottleneck.)
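The three speed-up factors above follow from simple dimensional analysis, which can be checked numerically. At fixed speed, t = L/v so t ∝ L; at fixed power per mass p, v ∝ √(pt) gives distance ∝ t^(3/2) and so t ∝ L^(2/3); at fixed acceleration, L = at²/2 gives t ∝ √L. A small sketch of that arithmetic (the function name is mine, not standard):

```python
def traversal_time_ratio(shrink, held_constant):
    """How many times faster a (shrink)x smaller object crosses its own
    body length, given which quantity stays constant as it shrinks.

    Derived from t ~ L (fixed speed), t ~ L^(2/3) (fixed power density),
    and t ~ L^(1/2) (fixed acceleration).
    """
    if held_constant == "speed":          # t = L/v with v fixed
        return shrink
    if held_constant == "power_density":  # L^2 / t^3 ~ const
        return shrink ** (2 / 3)
    if held_constant == "acceleration":   # L = a t^2 / 2 with a fixed
        return shrink ** 0.5
    raise ValueError(f"unknown quantity: {held_constant}")
```

Plugging in a shrink factor of 1,000,000 reproduces the 1,000,000x / 10,000x / 1,000x figures in the paragraph above.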