CAI augments the ‘natural’ probabilistic graphical model with exogenous optimality variables. In contrast, AIF leaves the structure of the graphical model unaltered and instead encodes value into the generative model directly. These two approaches lead to significant differences between their respective functionals. AIF, by contaminating the veridical generative model with value-imbuing biases, loses a degree of freedom compared to CAI, which maintains a strict separation between the veridical generative model of the environment and its goals. First, in POMDPs, this results in CAI being sensitive to an ‘observation-ambiguity’ term which is absent in the AIF formulation. Second, the different methods for encoding the probability of goals – likelihoods in CAI and priors in AIF – lead to different exploratory terms in the objective functionals. Specifically, AIF is endowed with an expected information gain term that CAI lacks. AIF approaches thus lend themselves naturally to goal-directed exploration, whereas CAI mandates only random, entropy-maximizing exploration.
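To make the difference in terms concrete, here is a minimal NumPy sketch for an assumed toy one-step POMDP; the likelihood matrix A, predictive state distribution q_s, and preference vector log_pref are hypothetical stand-ins, and the exact functionals differ in detail from those derived in the paper. The sketch only isolates the three quantities discussed above: the extrinsic (goal-seeking) value common to both objectives, the expected information gain that appears in the AIF functional, and the observation-ambiguity term to which CAI is sensitive.

```python
import numpy as np

# Toy one-step POMDP: 3 hidden states, 3 observations (illustrative values only).
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])              # A[o, s] = p(o | s)

q_s = np.array([0.5, 0.3, 0.2])              # predictive state distribution Q(s | pi)
log_pref = np.log([0.7, 0.2, 0.1])           # goal encoding over observations:
                                             #   AIF: biased prior       log p~(o)
                                             #   CAI: optimality likelihood log p(O=1 | o)

q_o = A @ q_s                                # predicted observations Q(o | pi)

# Extrinsic (goal-seeking) value: same form in both, given the same preferences.
extrinsic = q_o @ log_pref                   # E_Q(o|pi)[ log p~(o) ]

# Expected information gain (present in the AIF functional, absent from CAI):
#   E_Q(o|pi)[ KL[ Q(s|o,pi) || Q(s|pi) ] ]
q_s_given_o = (A * q_s) / q_o[:, None]       # row o holds Q(s | o, pi)
info_gain = np.sum(q_o[:, None] * q_s_given_o *
                   (np.log(q_s_given_o) - np.log(q_s)))

# Observation ambiguity (present in the CAI bound, absent from this AIF decomposition):
#   E_Q(s|pi)[ H[ p(o|s) ] ]
ambiguity = -np.sum(q_s * np.sum(A * np.log(A), axis=0))

print(f"extrinsic={extrinsic:.3f}, info_gain={info_gain:.3f}, ambiguity={ambiguity:.3f}")
```

Which of the two auxiliary terms accompanies the extrinsic value is what separates the goal-directed exploration of AIF from the purely entropy-driven exploration of CAI described above.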
These different ways of encoding goals into probabilistic models also lend themselves to more philosophical interpretations. CAI, by viewing goals as an additional exogenous factor in an otherwise unbiased inference process, maintains a clean separation between veridical perception and control, thus preserving the modularity thesis of separate perception and action modules (Baltieri & Buckley, 2018). This makes CAI approaches consonant with mainstream views in machine learning that see the goal of perception as recovering veridical representations of the world, and control as using this world-model to plan actions. In contrast, AIF elides these clean boundaries between unbiased perception and action by instead positing that biased perception (Tschantz, Seth, & Buckley, 2020) is crucial to adaptive action. Rather than maintaining an unbiased world model that predicts likely consequences, AIF maintains a biased generative model which preferentially predicts its preferences being fulfilled. Active inference thus aligns closely with enactive and embodied approaches (Baltieri & Buckley, 2019; Clark, 2015) to cognition, which view the action-perception loop as a continual flow rather than a sequence of distinct stages.
Nice, CAI is another similar approach, kind of in between the three already mentioned. I think “losing a degree of freedom” is very much a good thing, both computationally and functionally.
See https://arxiv.org/abs/2006.12964.