It does feel like current models are much better at software than hardware (especially after reading your post).
Do you think the difficulty (of simulating machines in the real world) is due to a lack of compute, or a lack of data?
E.g. if someone wanted to make a simulation of the CNC machine which includes material accumulating due to bad hose angles, would the main difficulty be a lack of computing power, or the tedium of modeling the entire machine in the simulation and adding random forces/damage to account for people bumping into hoses?
Everything about high fidelity simulations would be a pain. For the chips thing, you would have to simulate how chips get thrown as the cutting tool removes material. I wouldn’t be surprised if accurately modeling this required going down to the level of atoms, especially as there are many types of material, cutting tools, cutting tool geometries, etc. This would be insanely expensive and annoying. The simulation also would basically never exactly match the real world: the cutting edge of the tool very slowly wears, so even if the simulation were perfect at the beginning, it would become inaccurate once the tool begins to wear.
You could probably develop some heuristics that don’t require as accurate a simulation, but it would still be a lot of work and wouldn’t exactly match the real world. Many important effects, like friction and elasticity, are really difficult to simulate. And making CAD models of everything is super tedious, so we mostly make models that are good enough, never exact.
Have you heard of the idea where you train the model on a range of constants, in case your constants are off from the physical world? If the coefficient of friction changed a bit in the real world, I doubt humans would suddenly forget how to move; they would adjust pretty quickly. Making a model tolerant to the plausible range of sim2real errors might be possible without an accurate simulation or hand-crafted heuristics.
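(This is essentially what robotics people call domain randomization. A minimal toy sketch, where the environment and all names are made-up placeholders rather than a real simulator:)

```python
import random

FRICTION_RANGE = (0.2, 0.6)  # assumed plausible sim2real spread (made up)

def run_episode(policy, friction):
    """Placeholder rollout: reward is higher the closer the policy's
    output is to the 'correct' response for this friction value."""
    correct_response = friction  # stand-in for real physics
    return -abs(policy(friction) - correct_response)

def train_score(policy, episodes=1000):
    """Average reward over episodes, each with freshly randomized friction,
    so a good policy must handle the whole range, not one constant."""
    total = 0.0
    for _ in range(episodes):
        friction = random.uniform(*FRICTION_RANGE)  # randomize each episode
        total += run_episode(policy, friction)
    return total / episodes
```

(A policy that adapts to the observed friction scores perfectly here, while any fixed response is penalized; the analogous real setup would re-randomize the simulator's physical constants every training episode.)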
Yeah, this seems like a reasonable way to train a model that controls a robot. I was addressing the verifier for mechanical designs, and I’m not sure if it’s possible to verify mechanical designs to the same level as the output of computer programs.
:) yes, we shouldn’t be sure what is possible. All we know is that currently computer programs can be verified very easily, and currently mechanical designs are verified so poorly that good designs in simulations may be useless in real life. But things are changing rapidly.
How exact a simulation do you think we need in order to avoid most conceptual problems (as opposed to robustness/accuracy problems)?
Let’s define a “conceptual problem” as a problem where the AI’s design ignores a real-world constraint as if it doesn’t exist, because the phenomenon only occurs in the real world, not in the simulation. This renders the AI’s design useless in real life.
Let’s define a “robustness/accuracy problem” as a problem where the AI’s design assumes the simulation is perfect and lacks the robustness to survive small mistaken assumptions.
Robustness may be improved by requiring the AI’s design to work in different versions of simulations, which vary from each other by as much as real life varies from them.
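(A toy sketch of that requirement: a design passes only if it survives many simulation variants whose parameters are jittered by roughly the expected sim-to-real error. All names and thresholds here are made up.)

```python
import random

def make_variant(base_params, rel_error=0.1):
    """Copy of the simulation parameters, each perturbed by up to ±10%."""
    return {name: value * (1 + random.uniform(-rel_error, rel_error))
            for name, value in base_params.items()}

def design_works(design_margin, params):
    """Toy check: the design survives if its margin exceeds the load."""
    return design_margin >= params["load"]

def robust(design_margin, base_params, n_variants=100):
    """Accept the design only if it works in every perturbed variant."""
    return all(design_works(design_margin, make_variant(base_params))
               for _ in range(n_variants))
```

(A design with enough margin passes every jittered variant; one tuned exactly to the nominal parameters fails as soon as a variant drifts against it.)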
Accuracy may be improved when adapting the AI’s design to real life. If the AI invents a world of machines capable of self-replication (much faster than the human economy but slower than biological life) that works in the simulation but not in the real world due to small inaccuracies (rather than conceptual problems), then adapting those machines to the real world may take a lot of work, but much less work than inventing such machines from scratch.
What do you think the probability is that people can achieve simulations without debilitating conceptual problems (before it is too late to matter)?[1]
I’m not sure—I only worked in a pretty narrow range of the manufacturing / engineering space, and I know there’s a ton of domains out there that I’m not familiar with.
I also don’t think most of the problems are conceptual in the first place. As Elon Musk likes to say, making a working prototype is easy, and manufacturing at scale is at least 10-100x harder. Although maybe conceptual work would be required for building self-replicating machines that only take raw material as input. I would typically think about robots achieving self-replication by just building more robot factories. It seems pretty challenging for a self-replicating machine to produce microchips or actuators from raw material, but maybe there’s a way to get around this.
Oops I think I’m using the wrong terminology because I’m not familiar with the industry.
When I say self-replicating machine, I am referring to a robot factory. Maybe “self-replicating factory” would be a better description.
Biological cells (which self reproduce) are less like machines and more like factories, and the incredible world of complex proteins inside a cell are like the sea of machines inside a factory.
I think a robot factory which doesn’t need human input can operate at a scale somewhere between human factories and biological cells, and could potentially self-replicate far faster than the human economy (doubling roughly every 20 years) but slower than a biological cell (every 20 minutes, or about 0.00004 years).
Smaller machines operate faster. An object 1,000,000 times smaller is 1,000,000 times quicker to move a body length at the same speed or energy density, 10,000 times quicker at the same power density, or 1,000 times quicker at the same acceleration. It can endure 1,000,000 times more acceleration with the same damage. (Bending/cutting is still only 1 times the speed at the same power density, but our economy would grow many times faster if that became the only bottleneck.)
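(These scalings can be sanity-checked with simple arithmetic, assuming the time to move one body length scales as L, L^(2/3), and L^(1/2) under the constant-speed, constant-power-density, and constant-acceleration assumptions respectively:)

```python
k = 1_000_000  # the object is one million times smaller

# Same speed (same energy density per unit mass implies the same speed):
# time ~ L, so k times quicker.
same_speed_speedup = k

# Same power density (power per unit mass): time ~ L**(2/3),
# so k**(2/3) times quicker.
same_power_density_speedup = round(k ** (2 / 3))

# Same acceleration: time ~ sqrt(L), so sqrt(k) times quicker.
same_acceleration_speedup = round(k ** 0.5)

# Same material stress ("damage"): stress ~ density * acceleration * L,
# so the tolerable acceleration is k times larger.
acceleration_tolerance_factor = k
```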
Thank you for the real world feedback!
[1] I’m currently at 50%, but I might change it a lot after thinking more.