That’s not unreasonable as a quick summary of the principle.
I would say there is more to what makes a living system alive than just following the free energy principle per se. For instance, the robot would also need to scavenge for material and energy resources to incorporate into itself for maintenance, repair, and/or reproduction. Just correcting its gait when thrown off balance allows it to minimize a sort of behavioral free energy, but that’s not enough to count as alive.
But if you want to put amoebas and humans in the same qualitative category of “agency”, then you need a framework that is general enough to capture the commonalities of interest. And yes, under such a broad umbrella, artificial control systems and dynamically balancing walking robots would be included.
The free energy principle applies to a lot of systems, not just living or agentic ones. I see it more as a way to systematize our approach to understanding a system or process rather than an explanation in and of itself. By focusing on how a system maintains set points (e.g., homeostasis) and minimizes prediction error (e.g., unsupervised learning), I think we would be better positioned to figure out what real agents are actually doing, in a way that could inform both the design and alignment of AGI.
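To make the "maintains set points" half concrete, here's a toy sketch (my own illustration, not drawn from any particular FEP paper): a proportional controller that treats the gap between its state and a set point as an error and corrects a fraction of it on each step, the way a thermostat holds a temperature.

```python
def regulate(state, set_point, gain=0.5, steps=20):
    """Minimal proportional controller: nudge the state toward the set point.

    Each step, the 'prediction error' is the gap between the set point and
    the current state; the controller corrects a fixed fraction of it.
    """
    for _ in range(steps):
        error = set_point - state
        state += gain * error
    return state

# Starting far from the set point, the state converges on it geometrically.
final = regulate(state=15.0, set_point=37.0)
```

Nothing here is specific to life or agency, which is the point: set-point maintenance alone is a very broad class of behavior.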
To be honest, when I talk about the “free energy principle”, I typically have in mind a certain class of algorithmic implementations of it, involving generative models and using maximum likelihood estimation through online gradient descent to minimize their prediction errors. Something like https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000211
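As a toy illustration of that class of implementations (my own sketch, loosely in the spirit of the linked paper, not its actual algorithm): a one-parameter Gaussian generative model whose mean is fit by online gradient descent on the negative log-likelihood, which here reduces to minimizing squared prediction error one observation at a time.

```python
import random

def fit_online(observations, lr=0.05):
    """Fit the mean of a Gaussian generative model by online gradient descent.

    For a fixed-variance Gaussian, the negative log-likelihood gradient with
    respect to the mean is proportional to the prediction error (x - mu), so
    each update is a small step that reduces squared prediction error.
    """
    mu = 0.0                 # the model's current prediction
    for x in observations:
        error = x - mu       # prediction error for this observation
        mu += lr * error     # gradient step on 0.5 * error**2
    return mu

# Observations drawn around a true mean of 3.0; the estimate converges there.
random.seed(0)
data = [3.0 + random.gauss(0, 0.1) for _ in range(2000)]
mu_hat = fit_online(data)
```

The real models in that literature are hierarchical and far richer, but the core loop, predict, measure the error, descend its gradient, is the same shape.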