You’re discussing PP as a possible model for AI, whereas I posit PP as a model for animal brains. The main difference is that animal brains are evolved and occur inside bodies.
So, for your project of re-writing rationality in PP, would PP constitute a model of human irrationality, and how to rectify it, in contrast to ideal rationality (which would not be well-described by PP)?
Or would you employ PP both as a model which explains human irrationality and as an ideal rationality notion, so that we can use it both as the framework in which we describe irrationality and as the framework in which we can understand what better rationality would be?
Evolution is the answer to the dark room problem. You come with prebuilt hardware adapted to a certain niche, which is equivalent to modeling it. Your legs are a model of the shape of the ground and the size of your evolutionary territory. Your color vision is a model of berries in a bush, and so are the fingers that pick them. Your evolved body is a hyperprior you can’t update away. In a sense, you’re predicting all the things that are adaptive: being full of good food, being in the company of allies and mates, being vigorous and healthy, learning new things. Lying hungry in a dark room creates a persistent error in your highest-order predictive models (the evolved ones), one you can’t change.
Am I right in inferring from this that your preferred version of PP is one where we explicitly plan to minimize prediction error, as opposed to the Active Inference model (which instead minimizes KL divergence)? Or do you endorse an Active Inference type model?
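For concreteness, here is the contrast as I understand it, using the standard textbook formulations (which may not match the version you have in mind). The first objective scores actions by expected prediction error directly; Active Inference instead minimizes variational free energy, which decomposes into a KL divergence plus surprisal:

```latex
% Explicit prediction-error minimization: pick the action that
% minimizes expected error between observed and predicted outcomes.
a^{*} = \arg\min_{a} \; \mathbb{E}\!\left[ \lVert o - \hat{o}(a) \rVert^{2} \right]

% Active Inference: minimize variational free energy F, where q(s) is
% the approximate posterior over hidden states and p(o, s) the
% generative model. F upper-bounds surprisal, -\ln p(o).
F = \mathbb{E}_{q(s)}\!\left[ \ln q(s) - \ln p(o, s) \right]
  = D_{\mathrm{KL}}\!\left[ q(s) \,\Vert\, p(s \mid o) \right] - \ln p(o)
```

Since the KL term is non-negative and vanishes when $q(s)$ matches the true posterior, minimizing $F$ simultaneously does approximate inference and bounds surprise, which is where the two framings come apart in their predictions about action selection.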
This explanation in terms of evolution makes the PP theory consistent with observations, but does not give me a reason to believe PP. The added complexity to the prior is similar to the added complexity of other kinds of machinery to implement drives, so as yet I see no reason to prefer this explanation to other possible explanations of what’s going on in the brain.
My remarks about problems with different versions of PP can each be patched in various ways; these are not supposed to be “gotcha” arguments in the sense of “PP can’t explain this! / PP can’t deal with this!”. Rather, I’m trying to boggle at why PP looks promising in the first place, as a hypothesis to raise to our attention.
Each of the arguments I mentioned addresses one way someone might think PP is doing some work for us, and why I don’t see that as a promising avenue.
So I remain curious what the generators of your view are.