An interesting analogy, closer to ML, would be to look at neuroscience. It’s an older field than ML, and the physics perspective there seems to have been fairly productive, even though it has not yet produced a grand unified theory of cognition. Some examples:
- Using methods from electric circuits to explain neurons (e.g., the Hodgkin-Huxley model, cable theory)
- Dynamical systems to explain phenomena like synchronization in neuronal oscillations (e.g., the Kuramoto model; see the sketch after this list)
- Ising models to capture some collective behaviour of neurons
- Information theory to analyze neural data and model the brain (e.g., the efficient coding hypothesis)
- Attempts at general theories of cognition, like predictive processing or the free energy principle, which also have a strong physics inspiration (drawing on statistical physics and the least action principle)
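To make the dynamical-systems example concrete, here is a minimal sketch of the Kuramoto model in Python/NumPy. Every value here (the number of oscillators, the coupling strength K, the step size) is an arbitrary illustrative choice, not a claim about any particular neural system.

```python
import numpy as np

# Minimal Kuramoto model sketch: N phase oscillators with natural
# frequencies omega_i, coupled all-to-all.
#   dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
rng = np.random.default_rng(0)
N, K, dt, steps = 100, 2.0, 0.01, 5000  # illustrative values only

omega = rng.normal(0.0, 1.0, N)          # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)   # initial phases

for _ in range(steps):
    # diffs[i, j] = theta_j - theta_i
    diffs = theta[None, :] - theta[:, None]
    theta += dt * (omega + (K / N) * np.sin(diffs).sum(axis=1))

# Order parameter r in [0, 1]: r near 1 means the population has
# synchronized, r near 0 means the phases are spread out.
r = np.abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.3f}")
```

Sweeping K shows the transition the model is known for: below a critical coupling the order parameter stays near zero, and above it a synchronized cluster emerges, which is exactly the kind of collective behaviour the neuroscience applications care about.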
I can recommend the book Models of the Mind by Grace Lindsay, which gives an overview of the many ways physics has contributed to neuroscience.
In principle, one might think it would be easier to make progress with a physics perspective on AI than in neuroscience, for example because experiments are easier in AI: in neuroscience we do not have access to the values of the weights, we do not always have access to all the neurons, and it is often impossible to intervene on the system.
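As a toy illustration of that asymmetry, here is a sketch (in NumPy, with made-up weights and names) of the read-and-intervene access we have in artificial networks: every weight is observable, and we can ablate a unit and re-run the exact same input to measure its causal effect.

```python
import numpy as np

# Toy two-layer network: in AI, unlike in neuroscience, every weight
# and activation is directly readable, and interventions are trivial.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 8))   # input -> hidden weights, fully observable
W2 = rng.normal(size=(8, 2))   # hidden -> output weights

def forward(x, W1, W2):
    h = np.maximum(0.0, x @ W1)  # ReLU hidden activations, all visible
    return h @ W2

x = rng.normal(size=(1, 4))
baseline = forward(x, W1, W2)

# Intervention: silence hidden unit 3, then re-run the identical input.
W1_ablated = W1.copy()
W1_ablated[:, 3] = 0.0
print("effect of ablating unit 3:", forward(x, W1_ablated, W2) - baseline)
```

Nothing like this clean counterfactual is available for a biological brain, which is the point of the comparison.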
Thanks for the recommendation! The pathways of scientific progress here seem very interesting (for example, physics → neuro → AI → … vs. physics → AI → neuro → …), particularly if we think about feeding back between experimental and theoretical support to build up a general understanding. Physics is really good at fitting theories together into a mosaic: at a large scale you have a nice picture of the universe, and the tiles (theories) fit together without being part of the same continuous picture, which allows some separation between different regimes of validity. It’s not a perfect analogy, but it says something about physics’ ability to split the difference between reductionism and emergence. It would be nice to have a similar picture in neuroscience (and AI), though this might be more difficult.