Another source of prediction error arises not from a mismatch between model and reality, but from tension between internal models.
Is this a standard element of Predictive Processing, or are you generalizing/analogizing the theory here?
I’m familiar with the prediction error that arises from diffs between sense data and generative models, but not with error between different generative models.
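To make the distinction concrete, here's a toy sketch of how I'd formalize the two quantities. This is my own illustration, not anything from the PP literature: the Gaussian predictions, the precision-weighted error, and the choice of KL divergence for the inter-model case are all assumptions on my part.

```python
import numpy as np

# Toy illustration (my own, not a standard PP formalism): two "generative
# models" each predict a scalar sensory value as a Gaussian (mean, variance).
model_a = {"mean": 2.0, "var": 0.5}   # hypothetical model A
model_b = {"mean": 3.5, "var": 1.0}   # hypothetical model B

sense_data = 2.8  # an observed sensory sample

def sensory_prediction_error(model, observation):
    """Precision-weighted diff between a model's prediction and sense data:
    the familiar model-vs-world prediction error."""
    return (observation - model["mean"]) / np.sqrt(model["var"])

def inter_model_divergence(p, q):
    """KL divergence between two Gaussian predictions: one way to quantify
    'tension between internal models', with no reference to sense data."""
    return (np.log(np.sqrt(q["var"] / p["var"]))
            + (p["var"] + (p["mean"] - q["mean"]) ** 2) / (2 * q["var"])
            - 0.5)

print("model A vs sense data:", sensory_prediction_error(model_a, sense_data))
print("model B vs sense data:", sensory_prediction_error(model_b, sense_data))
print("model A vs model B   :", inter_model_divergence(model_a, model_b))
```

The point of the second function is just that it is well-defined without any observation at all, which is what makes it feel like a different kind of quantity to me.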
I agree that these are importantly different, and easily conflated!