Free Energy Principle

Last edit: 26 Dec 2022 6:19 UTC by Roman Leventov

The Free Energy Principle (FEP) suggests that dynamic systems, including the brain and other physical systems, are organized to minimize prediction error: the difference between their predictions about the environment and the outcomes they actually experience. According to the FEP, such systems encode information about their environment in a way that reduces the surprisal of their inputs, and minimising prediction error is what lets them maintain stability within that environment. The FEP has been influential in neuroscience and neuropsychology, and more recently has been used to describe systems at all spatiotemporal scales, from cells and biological species to AIs and societies.

FEP gives rise to Active Inference[1]: a process theory of agency that can be seen both as an explanatory theory and as an agent architecture. In the latter sense, Active Inference rivals Reinforcement Learning. It has been argued[2] that Active Inference as an agent architecture manages model complexity (i.e., the bias-variance tradeoff) and the exploration-exploitation tradeoff in a principled way, favours explicit, disentangled, and hence more interpretable belief representations, and is amenable to working within hierarchical systems of collective intelligence (which are seen as Active Inference agents themselves[3]). Building ecosystems of hierarchical collective intelligence can be seen both as a proposed solution to, and an alternative conceptualisation of, the general problem of alignment.

FEP/Active Inference is an energy-based model of intelligence: an FEP agent minimises an informational quantity called variational free energy (VFE), and Active Inference nuances this picture further by modelling agents as minimising an informational quantity called expected free energy (EFE), which is derived from VFE. This likens FEP/Active Inference to Bengio's GFlowNets[4] and LeCun's Joint Embedding Predictive Architecture (JEPA)[5], which are also energy-based. On the other hand, it distinguishes FEP/Active Inference from Reinforcement Learning, which is a reward-based model of agency, and more generally from utility-maximising decision theories.
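Concretely, these quantities have standard forms in the variational inference literature (stated here for orientation; not a derivation from this article). For beliefs $q(s)$ over hidden states $s$ and observations $o$:

```latex
F[q, o] = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
        = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\ge 0} - \ln p(o)
        \;\ge\; -\ln p(o),
```

so minimising VFE simultaneously drives $q(s)$ toward the Bayesian posterior and bounds the surprisal $-\ln p(o)$. The EFE scores candidate policies $\pi$ by a forward-looking analogue, commonly decomposed into risk plus ambiguity:

```latex
G(\pi) = \underbrace{D_{\mathrm{KL}}\big[q(o \mid \pi)\,\|\,p(o)\big]}_{\text{risk}}
       + \underbrace{\mathbb{E}_{q(s \mid \pi)}\big[H[p(o \mid s)]\big]}_{\text{ambiguity}},
```

where the prior $p(o)$ encodes the agent's preferences over observations.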

Active Inference is one of the most general theories of agency. It can be seen as a generalisation of the predictive coding theory of brain function (also known as the Bayesian Brain hypothesis). Specifically, while predictive coding explains the agent's perception as Bayesian inference, Active Inference models both perception and action as inference under a single unifying objective: minimisation of the agent's VFE or EFE. Active Inference also recovers Bayes-optimal reinforcement learning, optimal control theory, and Bayesian Decision Theory (aka EDT) under different simplifying assumptions[1][6].

The mathematical content of Active Inference is based on Variational Bayesian methods.
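As a minimal illustration of that variational machinery (a toy sketch with assumed numbers, not drawn from any of the referenced works): for a small discrete generative model, the free energy of a belief q over hidden states equals the surprisal −ln p(o) exactly when q is the Bayesian posterior, and exceeds it for any other belief.

```python
import numpy as np

# Toy generative model: hidden state s in {0, 1}, observation o in {0, 1}.
# All numbers are illustrative assumptions.
p_s = np.array([0.5, 0.5])              # prior p(s)
p_o_given_s = np.array([[0.9, 0.1],     # likelihood p(o|s); rows index s
                        [0.2, 0.8]])

def vfe(q, o):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)]."""
    joint = p_o_given_s[:, o] * p_s     # p(o, s) for the observed o
    return float(np.sum(q * (np.log(q) - np.log(joint))))

o = 1                                    # suppose we observe o = 1
joint = p_o_given_s[:, o] * p_s
posterior = joint / joint.sum()          # exact Bayesian posterior p(s|o)
surprisal = -np.log(joint.sum())         # -ln p(o)

# The posterior minimises F, making F equal to the surprisal exactly:
assert np.isclose(vfe(posterior, o), surprisal)
# Any other belief q gives strictly higher free energy:
assert vfe(np.array([0.5, 0.5]), o) > surprisal
```

In realistic models the posterior is intractable, which is precisely why q is optimised by gradient descent on F rather than computed exactly; the toy example just makes the bound visible.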


  1. Parr, Thomas, Giovanni Pezzulo, and Karl J. Friston. Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. MIT Press, 2022.

  2. Friston, Karl J., Maxwell J. D. Ramstead, Alex B. Kiefer, Alexander Tschantz, Christopher L. Buckley, Mahault Albarracin, Riddhi J. Pitliya, et al. "Designing Ecosystems of Intelligence from First Principles." arXiv preprint arXiv:2212.01354 (2022).

  3. Kaufmann, Rafael, Pranav Gupta, and Jacob Taylor. "An Active Inference Model of Collective Intelligence." Entropy 23, no. 7 (2021): 830.

  4. Bengio, Yoshua. "GFlowNet Tutorial." (2022).

  5. LeCun, Yann. "A Path Towards Autonomous Machine Intelligence." Preprint posted on OpenReview (2022).

  6. Friston, Karl, Lancelot Da Costa, Danijar Hafner, Casper Hesp, and Thomas Parr. "Sophisticated Inference." Neural Computation 33, no. 3 (2021): 713-763.

Neural Annealing: Toward a Neural Theory of Everything (crosspost)
Michael Edward Johnson, 29 Nov 2019 17:31 UTC (83 points, 28 comments, 40 min read, 3 reviews)

The two conceptions of Active Inference: an intelligence architecture and a theory of agency
Roman Leventov, 16 Nov 2022 9:30 UTC (15 points, 0 comments, 4 min read)

Active Inference as a formalisation of instrumental convergence
Roman Leventov, 26 Jul 2022 17:55 UTC (11 points, 2 comments, 3 min read)

Properties of current AIs and some predictions of the evolution of AI from the perspective of scale-free theories of agency and regulative development
Roman Leventov, 20 Dec 2022 17:13 UTC (26 points, 2 comments, 36 min read)

God Help Us, Let's Try To Understand Friston On Free Energy
Scott Alexander, 5 Mar 2018 6:00 UTC (47 points, 43 comments, 14 min read)

Why I'm not into the Free Energy Principle
Steven Byrnes, 2 Mar 2023 19:27 UTC (133 points, 38 comments, 9 min read)

How evolutionary lineages of LLMs can plan their own future and act on these plans
Roman Leventov, 25 Dec 2022 18:11 UTC (26 points, 15 comments, 8 min read)

Mental health benefits and downsides of psychedelic use in ACX readers: survey results
RationalElf, 25 Oct 2021 22:55 UTC (113 points, 18 comments, 10 min read)

«Boundaries», Part 3a: Defining boundaries as directed Markov blankets
Andrew_Critch, 30 Oct 2022 6:31 UTC (62 points, 13 comments, 15 min read)

LOVE in a simbox is all you need
jacob_cannell, 28 Sep 2022 18:25 UTC (64 points, 69 comments, 44 min read)

My computational framework for the brain
Steven Byrnes, 14 Sep 2020 14:19 UTC (150 points, 26 comments, 13 min read, 1 review)

Biological Holism: A New Paradigm?
Waddington, 9 May 2021 22:42 UTC (3 points, 9 comments, 19 min read)

Predictive Processing, Heterosexuality and Delusions of Grandeur
lsusr, 17 Dec 2022 7:37 UTC (36 points, 12 comments, 5 min read)

A Prince, a Pauper, Power, Panama
Alok Singh, 27 Sep 2022 7:10 UTC (10 points, 0 comments, 1 min read)

The circular problem of epistemic irresponsibility
Roman Leventov, 31 Oct 2022 17:23 UTC (5 points, 2 comments, 8 min read)

Multi-agent predictive minds and AI alignment
Jan_Kulveit, 12 Dec 2018 23:48 UTC (60 points, 18 comments, 10 min read)

All the posts I will never write
Alexander Gietelink Oldenziel, 14 Aug 2022 18:29 UTC (53 points, 8 comments, 8 min read)

A future for neuroscience
Mike Johnson, 19 Aug 2018 23:58 UTC (22 points, 12 comments, 19 min read)

Let There be Sound: A Fristonian Meditation on Creativity
jollybard, 4 Jul 2020 3:33 UTC (3 points, 2 comments, 1 min read)

Introduction to the Free-Energy Theory of Mind
IAFF-User-177, 24 Dec 2016 1:15 UTC (0 points, 0 comments, 1 min read)

Free-energy, reinforcement, and utility
IAFF-User-177, 26 Dec 2016 23:02 UTC (0 points, 0 comments, 1 min read)

Top Left Mood
Jacob Falkovich, 24 Jul 2018 14:35 UTC (17 points, 2 comments, 1 min read)

Critique of some recent philosophy of LLMs' minds
Roman Leventov, 20 Jan 2023 12:53 UTC (49 points, 8 comments, 20 min read)

A multi-disciplinary view on AI safety research
Roman Leventov, 8 Feb 2023 16:50 UTC (36 points, 4 comments, 26 min read)

Power-Seeking = Minimising free energy
Jonas Hallgren, 22 Feb 2023 4:28 UTC (19 points, 4 comments, 7 min read)

A reply to Byrnes on the Free Energy Principle
Roman Leventov, 3 Mar 2023 13:03 UTC (24 points, 16 comments, 14 min read)