Ahh, for MD I mostly used DFT with VASP or CP2K, though I wasn't working on the same problems. For thorny cases (biggish systems where plain DFT fails, but no MD needed) I had good results using hybrid functionals and tuning their parameters to match some result from higher-level methods. Did you try meta-GGAs like SCAN? Sometimes they are surprisingly decent where PBE fails catastrophically...
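For anyone following along, a minimal VASP INCAR sketch of the two approaches above. The tags are real, but the values are illustrative starting points, not tuned settings — the whole point is that you'd adjust them (especially AEXX) against your higher-level benchmark:

```
# Option A: SCAN meta-GGA (aspherical corrections recommended)
METAGGA = SCAN
LASPH   = .TRUE.

# Option B: screened hybrid (HSE06-style), with the exact-exchange
# fraction AEXX as the knob you tune to match higher-level results
LHFCALC  = .TRUE.
HFSCREEN = 0.2    # screening parameter (HSE06 default, 1/Angstrom)
AEXX     = 0.25   # fraction of exact exchange -- the tuning knob
```

Use one option or the other, not both at once; hybrids in particular get expensive fast for big systems.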
For the most part we’re avoiding/designing around compute constraints. Build up our experimental validation and characterization capabilities so that we can “see” what happened in the mechsyn lab ex post facto. Design reaction sequences with large thermodynamic gradients, program the probe to avoid side-reaction configurations as much as possible, and then characterize the result to see if it was what you were hoping for. Use the lab as a substitute for simulation.
It honestly feels like we’ve got a better chance just building the thing, repeating what works, and modifying what doesn’t than even a theoretically optimal machine-learning algorithm would have using simulation-first design. Our competition went the AI/ML route and it killed them. That’s part of why the whole AGI-will-design-nanotech thing bugs me so much. An immense amount of money and time has been wasted on that approach; better invested, it could have gotten us working nano-machinery by now. It really is an idea that eats smart people.
You could also try to fit an ML potential to some expensive method, but it’s very easy to produce something very wrong if you don’t know what you’re doing (I wouldn’t be able to, for one)
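For what it’s worth, here’s a deliberately minimal Python sketch of that failure mode. The descriptors and the Morse curve standing in for the “expensive method” are toy choices of mine, not how real MLIP packages (GAP, NequIP, MACE, DeePMD) actually work — those use many-body or learned descriptors plus force matching — but the shape of the workflow and the classic pitfall are the same:

```python
# Toy sketch: fit a ridge-regression "ML potential" to energies from an
# expensive reference method, then watch it extrapolate badly outside
# the training data -- confident nonsense at unseen geometries.
import numpy as np

rng = np.random.default_rng(0)

def descriptors(r):
    """Hand-picked features for a toy diatomic at separation r."""
    return np.stack([1.0 / r**12, 1.0 / r**6, np.exp(-r), r,
                     np.ones_like(r)], axis=-1)

def reference_energy(r):
    """Stand-in for the expensive method: a Morse curve (D=1, a=2, r0=1)."""
    return (1.0 - np.exp(-2.0 * (r - 1.0)))**2 - 1.0

# "Expensive" training data, sampled only near equilibrium.
r_train = rng.uniform(0.9, 1.6, size=200)
X, y = descriptors(r_train), reference_energy(r_train)

# Ridge regression: w = (X^T X + lam*I)^{-1} X^T y
lam = 1e-8
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def ml_energy(r):
    return descriptors(r) @ w

# In-distribution the fit looks great...
r_test = np.linspace(1.0, 1.5, 50)
in_err = float(np.max(np.abs(ml_energy(r_test) - reference_energy(r_test))))

# ...but at a compressed geometry (r = 0.5) the model extrapolates
# wildly -- the "very wrong things" failure mode.
out_err = float(abs(ml_energy(np.array([0.5]))[0] - reference_energy(0.5)))
print(f"in-distribution max error:  {in_err:.4f}")
print(f"extrapolation error at 0.5: {out_err:.4f}")
```

The point being: nothing in the fit warns you when a query geometry is outside the training distribution, which is exactly where an automated design loop will push it.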
It’s coming along quite fast. Here’s the latest on ML-trained molecular dynamics force fields that (supposedly) approach ab initio quality:
https://www.nature.com/articles/s41524-024-01205-w
These are potentially tremendously helpful. But in the context of AI x-risk it’s still not enough to be concerning. A force field that gives accurate results 90% of the time would massively accelerate experimental efforts, but it wouldn’t be reliable enough to one-shot nanotech as part of a deceptive turn.