I think this post is referring to “high energy” not in terms of electrochemical neural activity but instead as a metaphor for optimization in machine learning.
Machine learning is, at its core, the process of minimizing an error function. We can conceptualize this error function as a potential energy landscape, like a gravity well or an electrostatic potential: minimizing the energy of a particle in that landscape is mathematically equivalent to minimizing the error function. The advantage of saying “energy” instead of “error” is that it lets you borrow related terms like kinetic energy (in both the classical and quantum sense), which makes search algorithms intuitively easy to understand. The post is referring to this landscape sense of energy.
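To make the analogy concrete, here is a minimal sketch in plain Python (the toy error function and all names are my own illustration, not from the post): treat the error function as a potential-energy landscape and descend it with momentum, where the velocity term plays the role of kinetic energy carrying the “particle” across small bumps.

```python
def error(x):
    """Toy 1-D 'potential': two basins separated by a small bump."""
    return 0.1 * x**4 - x**2 + 0.5 * x

def grad(x):
    """Analytic derivative of the toy error function."""
    return 0.4 * x**3 - 2.0 * x + 0.5

def descend(x0, lr=0.01, momentum=0.9, steps=1000):
    """Gradient descent with momentum on the toy landscape."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(x)  # accumulate "kinetic energy"
        x += v                           # coast through the landscape
    return x

x_min = descend(x0=3.0)  # settles in one of the two basins
print(f"settled at x = {x_min:.3f}, error = {error(x_min):.3f}")
```

Simulated annealing pushes the same analogy further: a temperature parameter lets the particle occasionally move uphill, so it can escape shallow basins before cooling into a deep one.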