[Question] What Decision Theory is Implied By Predictive Processing?

At a fairly abstract/stylized level, predictive processing models human cognition and behavior as always minimizing predictive error. Sometimes, the environment is “fixed” and our internal models are updated to match it—e.g. when I see my untied shoelace, my internal model updates to include an untied shoelace. Other times, our internal model is “fixed”, and we act on the environment to make it better match the model—e.g. “wanting food” is internally implemented as a strong expectation that I’m going to eat soon, which in turn makes me seek out food in order to make that expectation true. Rather than having a utility function that values food or anything like that, the decision theory implied by predictive processing just has a model in which we obtain food, and we try to make reality match that model.
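To make the stylized picture concrete, here is a toy sketch (in Python, with entirely made-up names and dynamics) of an agent that reduces prediction error through both channels: it updates its belief toward observations, and it acts on the world whenever its belief still mismatches a hardcoded expectation that its hunger is low. This is only an illustration of the framing above, not a claim about how predictive processing is actually implemented.

```python
import random

# Toy stand-in for the two error-reducing channels described above. All names
# and dynamics are hypothetical; this is not a real predictive-processing model.

class ToyWorld:
    """A one-dimensional world: hunger drifts upward unless the agent eats."""
    def __init__(self):
        self.hunger = 3.0

    def observe(self):
        return self.hunger + random.gauss(0, 0.1)  # noisy observation of hunger

    def eat(self):
        self.hunger = max(0.0, self.hunger - 1.0)

    def tick(self):
        self.hunger += 0.2                         # hunger creeps back up over time


class ToyPredictiveAgent:
    """Reduces prediction error both by updating its belief and by acting."""
    def __init__(self):
        self.belief = 0.0     # the "map": estimated hunger level
        self.expected = 0.0   # hardcoded expectation ("I'm going to eat soon"): hunger near 0

    def step(self, observation, world):
        # Channel 1 (perception): turn the "in the model" knob -
        # nudge the belief toward the observation.
        self.belief += 0.5 * (observation - self.belief)

        # Channel 2 (action): turn the "in reality" knob -
        # if the belief still mismatches the hardcoded expectation, act to make it true.
        if self.belief - self.expected > 0.5:
            world.eat()


world, agent = ToyWorld(), ToyPredictiveAgent()
for t in range(20):
    agent.step(world.observe(), world)
    world.tick()
    print(f"t={t:2d}  hunger={world.hunger:.2f}  belief={agent.belief:.2f}")
```

The point of the toy is just that there is no separate “utility of food” anywhere: the food-seeking behavior falls out of the mismatch between a hardcoded expectation and the estimated state of the world.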

Abstracting out the key idea: we pack all of the complicated stuff into our world-model, hardcode into that world-model some things which we want to be true, then generally try to make the model match reality.

While making the model match reality, we’ll have knobs we can turn both “in the model” (i.e. belief updates) and “in reality” (i.e. actions); there’s no hard separation between the two. There will be things in both map and reality which we can change, and there will be things in both map and reality which we can’t change. It’s all treated the same. At first glance, that looks potentially quite useful for embedded agency.
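One hypothetical way to picture that uniform treatment: put map-knobs and reality-knobs into a single parameter set, mark the hardcoded or unchangeable pieces as fixed, and run one error-minimization over everything that is allowed to move. The sketch below (assumed names and a made-up error function, not anything from the predictive processing literature) just shows updates and actions falling out of the same optimization step.

```python
import numpy as np

def prediction_error(knobs, fixed):
    # Hypothetical error: squared mismatch between what the model predicts
    # (a function of map-knobs and reality-knobs alike) and the hardcoded targets.
    predictions = knobs["map"] + knobs["actions"]
    return np.sum((predictions - fixed["targets"]) ** 2)

def minimize(knobs, fixed, lr=0.05, steps=500, eps=1e-4):
    """Gradient descent on prediction error, treating every free knob the same."""
    for _ in range(steps):
        for name in knobs:                     # map-knobs and reality-knobs alike
            value = knobs[name]
            grad = np.zeros_like(value)
            for i in range(value.size):        # crude finite-difference gradient
                bumped = value.copy()
                bumped[i] += eps
                trial = {**knobs, name: bumped}
                grad[i] = (prediction_error(trial, fixed)
                           - prediction_error(knobs, fixed)) / eps
            knobs[name] = value - lr * grad    # same update rule, map or reality
    return knobs

# Free knobs: some live "in the model", some live "in reality".
knobs = {"map": np.array([0.0, 0.0]), "actions": np.array([0.0, 0.0])}
# Fixed: the hardcoded things we want to be true (and anything else we can't change).
fixed = {"targets": np.array([1.0, -2.0])}

print(minimize(knobs, fixed))
```

Of course, a real version would need some principled story about which knobs count as fixed and which as free, which is part of what the questions below are asking.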

(My own interest in this was piqued partly because a predictive-processing-like decision theory seems likely to produce abstraction boundaries which look like Cartesian boundaries. As in that post, it seems like some of the intuitive arguments we make around decision theories would naturally drop out of such a decision theory.)

What problems does such a decision theory run into? What sort of things can we hardcode into our world-model without breaking it altogether? What things must be treated as “fixed” when making the model match reality? Does such an approach have any “invariant” implications, i.e. implications independent of which model we’re trying to match? What further requirements are there on the target model in order for a predictive-processing-style agent to have “good” behavior, in the ways characterized by other decision theories?

This is intended to be an open-ended research question, but off-the-cuff thoughts and links to relevant work are welcome.