So I’ve got a hypothetical model for how boundary-breaking can be measured in terms of the predictability of world models of the future.
The hypothesis is that the more a bounded system contributes to the predictability of your future world models, the larger the reward prediction error (or whatever equivalent measure you want to use) when that boundary is broken.
Example: If you believe your community is focused and working together toward one common goal, you are willing to give a lot away to other people, because you expect the community to reciprocate socially. When the boundary of social reciprocity is broken, the future worlds in which you were counting on reciprocity being intact turn negative in EV compared to before, and a reward prediction error occurs on your prediction of future worlds.
The more important the broken boundary, the larger the shift in potential worlds and their EV, which in turn means a larger reward prediction error.
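A minimal toy sketch of this relationship, with all probabilities and values purely illustrative: treat a belief about the future as a distribution over possible worlds, compute EV before and after the boundary breaks, and read the reward prediction error off the EV shift.

```python
# Toy model of the hypothesis: RPE from a broken boundary tracks how much
# that boundary constrained your predicted future worlds.
# All numbers and world descriptions are hypothetical illustrations.

def expected_value(worlds):
    """EV over a belief distribution given as (probability, value) pairs."""
    return sum(p * v for p, v in worlds)

# Belief while the reciprocity boundary is intact: most probability mass
# sits on futures where giving is reciprocated.
intact = [(0.9, 10.0),   # community reciprocates
          (0.1, -5.0)]   # community does not

# Belief after the boundary is broken: mass shifts to non-reciprocating futures.
broken = [(0.1, 10.0),
          (0.9, -5.0)]

ev_before = expected_value(intact)   # what you were predicting
ev_after = expected_value(broken)    # what you now expect

# RPE here is just the signed shift in EV over future worlds;
# a more important boundary would produce a larger shift.
rpe = ev_after - ev_before
print(ev_before, ev_after, rpe)
```

In this toy framing, a less important boundary would move less probability mass between world sets, yielding a smaller EV shift and a smaller prediction error.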