Thanks! I will savor the warm feeling that I generated an idea in a field I didn’t study that the people who study the field also consider hopeful. :D
Okay, if someone understands the topic, could you please tell me what exactly is the problem; why this wasn’t already solved? -- Is the space of realistically simple equations still too large? Is it a mathematical problem to predict the chemical properties from the equations? Are we missing sufficiently precise data about the chemical properties of large atoms? Are the relativistic effects even for large atoms too small? Is there so much noise that you can actually generate too many different sets of equations fitting the data, with no quick way to filter out the more hopeful ones? All of the above? Something else?
Noise is certainly a problem, but the biggest problem for any sort of atomic modelling is that you quickly run into an n-body problem. Each one of the n electrons in an atom interacts with every other electron in that atom, so to describe the behavior of each electron you end up with a set of 70-something coupled differential equations. As a consequence, even if you just want a good approximation of the wavefunction, you have to search through a 3n-dimensional Hilbert space, and even with a preponderance of good experimental data there’s not really a good way to get around the curse of dimensionality.
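To make the scaling concrete, here is a toy back-of-the-envelope sketch (my own illustration, not from the comment above): if you tried to represent an n-electron wavefunction on a grid with a modest 10 points per spatial dimension, each electron adds three dimensions, so the grid needs 10^(3n) points.

```python
# Toy illustration of the curse of dimensionality for wavefunctions.
# Assumption: a naive grid representation with 10 points per spatial
# dimension; each electron contributes three dimensions (x, y, z).

def grid_points(n_electrons, points_per_dim=10):
    """Number of grid points needed for an n-electron wavefunction."""
    return points_per_dim ** (3 * n_electrons)

for n in (1, 2, 10, 26):  # 26 electrons ~ an iron atom
    print(n, grid_points(n))
```

Even at two electrons you are already at a million grid points; by iron you are past the number of atoms in the observable universe, which is why nobody attacks the problem this way directly.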
Am I understanding the relevance of the curse of dimensionality to this correctly: Generally, our goal is to find a simple pattern in some high-dimensional data. However, due to the high dimensionality there are exponentially many possible data points and, practically, we can only observe a very small fraction of them, so the curse is that we are often left with an immense list of candidates for the true pattern. All we can do is limit this list of candidates with certain heuristic priors, for example that the true pattern is a smooth, compact manifold (that worked well e.g. for relativity and machine learning, but quantum mechanics, for example, looks more like the true pattern is not smooth but consists of individual particles).
The true pattern (i.e. the many-particle wavefunction) is smooth. The issue is that the pattern depends on the positions of every electron in the atom. The variational principle gives us a measure of the goodness of the wavefunction, but it doesn’t give us a way to find consistent sets of positions. We have to rely on numerical methods to find self-consistent solutions for the set of differential equations, but it’s ludicrously expensive to try to sample the solution space given the dimensionality of that space.
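As a toy illustration of what the variational principle does give you, here is the one case where everything is easy: the hydrogen atom. With a trial wavefunction psi(r) = exp(-a*r), the energy expectation value in atomic units is the standard textbook result E(a) = a²/2 − a. Scanning over a and keeping the lowest energy yields an upper bound on the true ground-state energy (−0.5 hartree); the point is that the principle scores candidate wavefunctions but leaves you to search the space yourself, which is cheap here and hopeless in 3n dimensions.

```python
# Variational principle, hydrogen-atom toy case (my own sketch).
# Trial wavefunction psi_a(r) = exp(-a*r) gives the analytic energy
# E(a) = a**2/2 - a in hartree units.  The variational principle
# guarantees min_a E(a) >= true ground-state energy (-0.5 hartree).

def trial_energy(a):
    """<psi_a|H|psi_a>/<psi_a|psi_a> for psi_a(r)=exp(-a*r), in hartree."""
    return a * a / 2.0 - a

alphas = [0.1 * k for k in range(1, 31)]   # scan a = 0.1 .. 3.0
best_a = min(alphas, key=trial_energy)
print(best_a, trial_energy(best_a))        # minimum at a = 1, E = -0.5
```

For one electron a brute scan over a single parameter finds the exact answer; with n interacting electrons the trial wavefunction needs vastly more parameters and the scan becomes the expensive sampling problem described above.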
It’s really difficult to solve large systems of coupled differential equations. You run into different issues depending on how you attempt to solve them. For most machine-learning type approaches, those issues manifest themselves via the curse of dimensionality.
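For readers who haven’t done this numerically, here is a minimal sketch of what “solving coupled differential equations” looks like in practice, shrunk to two equations (a harmonic oscillator, x' = v, v' = −x) instead of the dozens that appear in an atomic self-consistency problem. This is a classic fixed-step RK4 integrator, my own illustration rather than anything from the thread:

```python
import math

# Minimal coupled-ODE integration sketch: x' = v, v' = -x (harmonic
# oscillator), solved with classic fixed-step 4th-order Runge-Kutta.
# Each derivative depends on the *other* variable -- that coupling is
# what multiplies the cost as the number of equations grows.

def rhs(state):
    x, v = state
    return (v, -x)

def rk4_step(state, dt):
    def shifted(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = rhs(state)
    k2 = rhs(shifted(state, k1, dt / 2))
    k3 = rhs(shifted(state, k2, dt / 2))
    k4 = rhs(shifted(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state, dt = (1.0, 0.0), 2 * math.pi / 10_000
for _ in range(10_000):
    state = rk4_step(state, dt)
print(state)  # after one full period, back near the start (1, 0)
```

Even this two-equation toy needs thousands of steps for good accuracy; the atomic problem couples far more equations, and on top of the integration you still have to iterate to self-consistency.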
I fail to see how this would be qualitatively different from how physics has always been done. We’ve always been using computers to generate new laws to fit observations, except in the past those computers have been our brains, and in the past half-century they’ve increasingly been our brains augmented with artificial computing machines.
Our current lack of progress in physics doesn’t stem from lack of ideas, or even lack of ability to come up with theoretical predictions. We have plenty of ideas. Our lack of progress stems from lack of experimental data. We have a large number of competing explanations and they all work the same in the infrared limit (physicist-speak for ‘everyday low-energy conditions’) but they have subtle differences in the high-energy limit. Our two main routes to physical evidence have been particle physics measurements and cosmological data. We are not yet able to probe to high enough energies in particle physics to sort out the various theories, and we have far too many uncertainties in cosmological data to substantially help us out.
Maybe better AI in the future will help us with this, but it would have to be incredibly powerful AI.
We have a large number of competing explanations and they all work the same in the infrared limit (physicist-speak for ‘everyday low-energy conditions’) but they have subtle differences in the high-energy limit.
What are you talking about? I don’t think that’s true at all.
Added: I suppose the parameters of the standard model are subtle differences in the high-energy domain, but I don’t think that’s what you mean.
People are already attempting that since 2009 or so:
https://scholar.google.com/scholar?cluster=11583184257062107912
https://scholar.google.com/scholar?cluster=4202198002835248331
(Click /Cited by \d+/ to go down the rabbit hole.)