
Free Energy Principle

Last edit: 29 Oct 2025 22:53 UTC by Ariel Cheng

The Free Energy Principle (FEP) states that self-organizing systems which maintain a separation from their environments via a Markov blanket—including the brain and other physical systems—minimize their variational free energy (VFE) and expected free energy (EFE) via perception and action, respectively[1]. Unlike other theories of agency, FEP unifies action and perception as inference problems under similar objectives. In some cases, variational free energy reduces to prediction error: the difference between the predictions made about the environment and the outcomes actually experienced. The mathematical content of FEP is based on Variational Bayesian methods.
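
In the standard formulation (notation varies across the literature), a system with generative model $p(o, s)$ over observations $o$ and external states $s$ encodes an approximate posterior (recognition density) $q(s)$ in its internal states, and its variational free energy is

$$F[q] = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] = D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] - \ln p(o).$$

Because the KL term is non-negative, $F$ upper-bounds the surprisal $-\ln p(o)$; minimizing $F$ over $q$ both tightens this bound and drives $q$ toward the true posterior $p(s \mid o)$, which is why perception under FEP amounts to approximate Bayesian inference. Under a Gaussian generative model, the likelihood term of $F$ becomes a precision-weighted squared prediction error, giving the special case mentioned above.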

Although FEP has an extremely broad scope, it makes a number of very specific assumptions[2] that may restrict its applicability to real-world systems. Ongoing theoretical work attempts to reformulate the theory to hold under more realistic assumptions. Some progress has been made: newer formulations of FEP, unlike their predecessors, do not assume a constant Markov blanket (allowing instead some Markov blanket trajectory)[3] and do not assume the existence of a non-equilibrium steady state[4].

FEP has been influential in neuroscience and neuropsychology and more recently has been used to describe systems on all spatiotemporal scales, from cells and biological species to AIs and societies.

Process theories

Since FEP is an unfalsifiable mathematical principle, it does not make sense to ask whether FEP is true (it is true mathematically, given its assumptions). Rather, it makes sense to ask whether its assumptions hold for a given system and, if so, how that system implements the minimization of VFE and EFE. Unlike the FEP itself, a proposal of how some particular system minimizes VFE and EFE—a process theory—is falsifiable.

Two FEP process theories are most relevant to neuroscience.[5] Predictive processing is a process theory of how VFE is minimized in brains during perception. Active Inference (AIF) is a process theory of the “action” part of FEP, which can also be seen as an agent architecture.
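
As a toy illustration of the perception side, the sketch below (a minimal, assumption-laden example, not taken from any of the cited works: the linear Gaussian model, parameter values, and learning rate are all illustrative) performs gradient descent on VFE for a one-level Gaussian model, where the VFE gradient is a combination of precision-weighted prediction errors:

```python
# Toy generative model (illustrative assumptions):
#   state prior:  mu ~ N(mu_prior, sigma_p)
#   likelihood:   o  ~ N(g(mu), sigma_o),  with g(mu) = 2 * mu
# Perception = gradient descent of the belief mu on the variational free energy
#   F(mu) = (o - g(mu))^2 / (2*sigma_o) + (mu - mu_prior)^2 / (2*sigma_p) + const.

def g(mu):
    return 2.0 * mu  # assumed linear mapping from hidden state to observation

mu_prior, sigma_p = 0.0, 1.0  # prior mean and variance of the hidden state
sigma_o = 0.5                 # observation noise variance
o = 3.0                       # the observed datum

mu = mu_prior                 # initialize the belief at the prior mean
for _ in range(200):
    eps_o = (o - g(mu)) / sigma_o      # precision-weighted sensory prediction error
    eps_p = (mu - mu_prior) / sigma_p  # precision-weighted prior prediction error
    # -dF/dmu = g'(mu) * eps_o - eps_p, with g'(mu) = 2 for the linear g above
    mu += 0.05 * (2.0 * eps_o - eps_p)

print(mu)  # converges to the exact posterior mean, 4/3 ~= 1.33
```

Settling dynamics of exactly this form (hierarchical, precision-weighted prediction-error minimization) are what predictive processing proposes that cortical circuits implement.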

It has been argued[6] that AIF as an agent architecture manages model complexity (i.e., the bias-variance tradeoff) and the exploration-exploitation tradeoff in a principled way (see the decomposition below); favours explicit, disentangled, and hence more interpretable belief representations; and is amenable to working within hierarchical systems of collective intelligence (which are seen as Active Inference agents themselves[7]). Building ecosystems of hierarchical collective intelligence can be seen as both a proposed solution to, and an alternative conceptualization of, the general problem of alignment.
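
The claim about the exploration-exploitation tradeoff can be made concrete with a standard decomposition of the expected free energy of a policy $\pi$ (notation again varies; $\tilde{p}(o)$ denotes prior preferences over observations):

$$G(\pi) = -\underbrace{\mathbb{E}_{q(o \mid \pi)}\, D_{\mathrm{KL}}\big[q(s \mid o, \pi) \,\|\, q(s \mid \pi)\big]}_{\text{epistemic value}} \;-\; \underbrace{\mathbb{E}_{q(o \mid \pi)}\big[\ln \tilde{p}(o)\big]}_{\text{pragmatic value}}.$$

Minimizing $G$ favours policies that are expected both to be informative about hidden states (exploration) and to yield preferred observations (exploitation), with the balance between the two falling out of a single objective rather than a hand-tuned bonus term.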

Connections to other theories

While some proponents of AIF believe that it is a more principled rival to Reinforcement Learning (RL), it has been shown that AIF is formally equivalent to the control-as-inference formulation of RL.[8] AIF also recovers Bayes-optimal reinforcement learning, optimal control theory, and Bayesian Decision Theory (aka EDT) under different simplifying assumptions[9][10].
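
Schematically, control-as-inference introduces binary “optimality” variables with likelihood $p(\mathcal{O}_t = 1 \mid s_t, a_t) \propto \exp r(s_t, a_t)$ and plans by inferring actions conditioned on $\mathcal{O}_{1:T} = 1$, whereas AIF encodes goals as a biased prior over observations, $\tilde{p}(o) \propto \exp r(o)$. Under the simplifying assumptions spelled out in [8], these two ways of building value into the generative model yield matching planning-as-inference objectives. (The reward notation here follows the control-as-inference literature and is used purely to exhibit the correspondence.)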

AIF is an energy-based model of intelligence. This places FEP/Active Inference in the same family as Bengio’s GFlowNets[11] and LeCun’s Joint Embedding Predictive Architecture (JEPA)[12], which are also energy-based.
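
Concretely, “energy-based” means that the model assigns a scalar energy to each configuration and treats inference as optimization over that energy, rather than as a single feed-forward pass. The sketch below is a minimal illustration under assumed toy choices (the quadratic energy, weights, and step sizes are invented for the example and correspond to none of the cited architectures):

```python
# Energy-based inference in miniature: score (input, candidate) pairs with a
# scalar energy and answer queries by descending it, instead of computing the
# answer in one forward pass.

def energy(x, y, w):
    """Assumed quadratic energy: low when candidate y is compatible with input x."""
    return 0.5 * (y - w * x) ** 2

def infer(x, w, lr=0.1, steps=100):
    """Inference as gradient descent on the energy with respect to y."""
    y = 0.0  # arbitrary starting candidate
    for _ in range(steps):
        grad = y - w * x  # dE/dy for the quadratic energy above
        y -= lr * grad    # move downhill in the energy landscape
    return y

print(infer(x=2.0, w=1.5))  # ~3.0, the minimum-energy candidate for this input
```

FEP/Active Inference (with free energy as the energy over beliefs), JEPA (with incompatibility scores in embedding space), and GFlowNets (with energies defining unnormalized target distributions) all instantiate this template in different ways.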

References

  1. ^

    EFE is closely related to, and can be derived from, VFE. Action does not always minimize EFE; in some cases, it minimizes generalized free energy (a closely related quantity). See this figure for a brief overview.

  2. ^

    E.g., (1) sensory, active, internal, and external states have independent random fluctuations; (2) there exists an injective map between the mode of internal states and the mode of external states; etc.

  3. ^

    Beck, Jeff, and Maxwell J. D. Ramstead. “Dynamic Markov Blanket Detection for Macroscopic Physics Discovery.” arXiv preprint arXiv:2502.21217 (2025).

  4. ^

    Friston, Karl, Lancelot Da Costa, Dalton A. R. Sakthivadivel, Conor Heins, Grigorios A. Pavliotis, Maxwell Ramstead, and Thomas Parr. “Path integrals, particular kinds, and strange things.” Physics of Life Reviews 47 (2023): 35-62.

  5. ^
  6. ^

    Friston, Karl J., Maxwell J. D. Ramstead, Alex B. Kiefer, Alexander Tschantz, Christopher L. Buckley, Mahault Albarracin, Riddhi J. Pitliya, et al. “Designing Ecosystems of Intelligence from First Principles.” arXiv preprint arXiv:2212.01354 (2022).

  7. ^

    Kaufmann, Rafael, Pranav Gupta, and Jacob Taylor. “An active inference model of collective intelligence.” Entropy 23, no. 7 (2021): 830.

  8. ^

    Millidge, Beren, Alexander Tschantz, Anil K. Seth, and Christopher L. Buckley. “On the relationship between active inference and control as inference.” In International workshop on active inference, pp. 3-11. Cham: Springer International Publishing, 2020.

  9. ^

    Parr, Thomas, Giovanni Pezzulo, and Karl J. Friston. Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. MIT Press, 2022.

  10. ^

    Friston, Karl, Lancelot Da Costa, Danijar Hafner, Casper Hesp, and Thomas Parr. “Sophisticated inference.” Neural Computation 33, no. 3 (2021): 713-763.

  11. ^

    Bengio, Yoshua. “GFlowNet Tutorial.” (2022).

  12. ^

    LeCun, Yann. “A Path Towards Autonomous Machine Intelligence.” Preprint posted on OpenReview (2022).

Active Inference as a formalisation of instrumental convergence
Roman Leventov · 26 Jul 2022 17:55 UTC · 12 points · 2 comments · 3 min read · LW link (direct.mit.edu)

God Help Us, Let’s Try To Understand Friston On Free Energy
Scott Alexander · 5 Mar 2018 6:00 UTC · 54 points · 44 comments · 14 min read · LW link (slatestarcodex.com)

The two conceptions of Active Inference: an intelligence architecture and a theory of agency
Roman Leventov · 16 Nov 2022 9:30 UTC · 18 points · 0 comments · 4 min read · LW link

Neural Annealing: Toward a Neural Theory of Everything (crosspost)
Michael Edward Johnson · 29 Nov 2019 17:31 UTC · 87 points · 29 comments · 40 min read · LW link · 3 reviews

Properties of current AIs and some predictions of the evolution of AI from the perspective of scale-free theories of agency and regulative development
Roman Leventov · 20 Dec 2022 17:13 UTC · 33 points · 3 comments · 36 min read · LW link

The Way You Go Depends A Good Deal On Where You Want To Get: FEP minimizes surprise about actions using preferences about the future as *evidence*
Christopher King · 27 Apr 2025 21:55 UTC · 10 points · 5 comments · 5 min read · LW link

Why I’m not into the Free Energy Principle
Steven Byrnes · 2 Mar 2023 19:27 UTC · 162 points · 55 comments · 9 min read · LW link · 1 review

How evolutionary lineages of LLMs can plan their own future and act on these plans
Roman Leventov · 25 Dec 2022 18:11 UTC · 39 points · 16 comments · 8 min read · LW link

Power-Seeking = Minimising free energy
Jonas Hallgren · 22 Feb 2023 4:28 UTC · 23 points · 10 comments · 7 min read · LW link

A Prince, a Pauper, Power, Panama
Alok Singh · 27 Sep 2022 7:10 UTC · 10 points · 0 comments · 1 min read · LW link (alok.github.io)

«Boundaries», Part 3a: Defining boundaries as directed Markov blankets
Andrew_Critch · 30 Oct 2022 6:31 UTC · 90 points · 20 comments · 15 min read · LW link

Refinement of Active Inference agency ontology
Roman Leventov · 15 Dec 2023 9:31 UTC · 16 points · 0 comments · 5 min read · LW link (arxiv.org)

All the posts I will never write
Alexander Gietelink Oldenziel · 14 Aug 2022 18:29 UTC · 54 points · 8 comments · 8 min read · LW link

Critique of some recent philosophy of LLMs’ minds
Roman Leventov · 20 Jan 2023 12:53 UTC · 52 points · 8 comments · 20 min read · LW link

Biological Holism: A New Paradigm?
Waddington · 9 May 2021 22:42 UTC · 6 points · 9 comments · 19 min read · LW link

LOVE in a simbox is all you need
jacob_cannell · 28 Sep 2022 18:25 UTC · 67 points · 73 comments · 44 min read · LW link · 1 review

AXRP Episode 32 - Understanding Agency with Jan Kulveit
DanielFilan · 30 May 2024 3:50 UTC · 20 points · 0 comments · 53 min read · LW link

Proposal for improving the global online discourse through personalised comment ordering on all websites
Roman Leventov · 6 Dec 2023 18:51 UTC · 35 points · 21 comments · 6 min read · LW link

Introduction to the Free-Energy Theory of Mind
IAFF-User-177 · 24 Dec 2016 1:15 UTC · 0 points · 0 comments · 1 min read · LW link (medium.com)

The circular problem of epistemic irresponsibility
Roman Leventov · 31 Oct 2022 17:23 UTC · 5 points · 2 comments · 8 min read · LW link

Multi-agent predictive minds and AI alignment
Jan_Kulveit · 12 Dec 2018 23:48 UTC · 63 points · 18 comments · 10 min read · LW link

Agent Boundaries Aren’t Markov Blankets. [Unless they’re non-causal; see comments.]
abramdemski · 20 Nov 2023 18:23 UTC · 82 points · 11 comments · 2 min read · LW link

Top Left Mood
Jacob Falkovich · 24 Jul 2018 14:35 UTC · 17 points · 2 comments · 1 min read · LW link (putanumonit.com)

Free-energy, reinforcement, and utility
IAFF-User-177 · 26 Dec 2016 23:02 UTC · 0 points · 0 comments · 1 min read · LW link (medium.com)

Mental health benefits and downsides of psychedelic use in ACX readers: survey results
RationalElf · 25 Oct 2021 22:55 UTC · 119 points · 18 comments · 10 min read · LW link

My computational framework for the brain
Steven Byrnes · 14 Sep 2020 14:19 UTC · 157 points · 26 comments · 13 min read · LW link · 1 review

Let There be Sound: A Fristonian Meditation on Creativity
jollybard · 4 Jul 2020 3:33 UTC · 3 points · 2 comments · 1 min read · LW link (jollybard.wordpress.com)

Gaia Network: a practical, incremental pathway to Open Agency Architecture
20 Dec 2023 17:11 UTC · 22 points · 8 comments · 16 min read · LW link

Worrisome misunderstanding of the core issues with AI transition
Roman Leventov · 18 Jan 2024 10:05 UTC · 5 points · 2 comments · 4 min read · LW link

A future for neuroscience
Mike Johnson · 19 Aug 2018 23:58 UTC · 22 points · 12 comments · 19 min read · LW link

Predictive Processing, Heterosexuality and Delusions of Grandeur
lsusr · 17 Dec 2022 7:37 UTC · 37 points · 13 comments · 5 min read · LW link

Grounded Ghosts in the Machine—Friston Blankets, Mirror Neurons, and the Quest for Cooperative AI
Davidmanheim · 10 Apr 2025 10:15 UTC · 9 points · 0 comments · 9 min read · LW link (davidmanheim.com)

A reply to Byrnes on the Free Energy Principle
Roman Leventov · 3 Mar 2023 13:03 UTC · 27 points · 16 comments · 14 min read · LW link

Resonant Cascade Mutation Architecture: A Quantum Concept for Synthesizing Elements from Helium ⸻ By David Patterson
DSPatterson 51 Tech · 13 Jun 2025 19:47 UTC · 1 point · 0 comments · 1 min read · LW link

Energy-Based Transformers are Scalable Learners and Thinkers
Matrice Jacobine · 8 Jul 2025 13:44 UTC · 7 points · 5 comments · 1 min read · LW link (energy-based-transformers.github.io)

Gaia Network: An Illustrated Primer
18 Jan 2024 18:23 UTC · 3 points · 2 comments · 15 min read · LW link

Aligning an H-JEPA agent via training on the outputs of an LLM-based “exemplary actor”
Roman Leventov · 29 May 2023 11:08 UTC · 12 points · 10 comments · 30 min read · LW link

A short ‘derivation’ of Watanabe’s Free Energy Formula
Wuschel Schulz · 29 Jan 2024 23:41 UTC · 13 points · 6 comments · 7 min read · LW link

Apply to the Conceptual Boundaries Workshop for AI Safety
Chris Lakin · 27 Nov 2023 21:04 UTC · 50 points · 0 comments · 3 min read · LW link

Agent membranes and causal distance
Chris Lakin · 2 Jan 2024 22:43 UTC · 20 points · 3 comments · 3 min read · LW link

Clarifying the free energy principle (with quotes)
Ryo · 29 Oct 2023 16:03 UTC · 8 points · 0 comments · 9 min read · LW link

AI 2027: What Superintelligence Looks Like
3 Apr 2025 16:23 UTC · 669 points · 222 comments · 41 min read · LW link (ai-2027.com)

The Hypercomplex Simulation Hypothesis: Universe as an Exploratory Engine of Life and Consciousness
Rodrigo Valero · 20 May 2025 11:52 UTC · 1 point · 0 comments · 1 min read · LW link

A potentially relevant exchange I had recently with ChatGPT 4o
roan · 22 Feb 2025 7:59 UTC · 1 point · 0 comments · 1 min read · LW link

A multi-disciplinary view on AI safety research
Roman Leventov · 8 Feb 2023 16:50 UTC · 46 points · 4 comments · 26 min read · LW link

FixDT
abramdemski · 30 Nov 2023 21:57 UTC · 65 points · 15 comments · 14 min read · LW link · 1 review

Formalizing «Boundaries» with Markov blankets
Chris Lakin · 19 Sep 2023 21:01 UTC · 23 points · 20 comments · 3 min read · LW link

Why Simulator AIs want to be Active Inference AIs
10 Apr 2023 18:23 UTC · 96 points · 9 comments · 8 min read · LW link · 1 review

Goal alignment without alignment on epistemology, ethics, and science is futile
Roman Leventov · 7 Apr 2023 8:22 UTC · 20 points · 2 comments · 2 min read · LW link

The Demon of Interrelation
Jack · 6 Jun 2025 8:19 UTC · −2 points · 0 comments · 8 min read · LW link

«Boundaries/Membranes» and AI safety compilation
Chris Lakin · 3 May 2023 21:41 UTC · 56 points · 17 comments · 8 min read · LW link

Vipassana Meditation and Active Inference: A Framework for Understanding Suffering and its Cessation
sturb · 21 Mar 2024 12:32 UTC · 50 points · 8 comments · 19 min read · LW link