I have only a surface-level understanding of this topic, but active inference (one of the theories of intelligent agency) views brains (and agents) as prediction-error minimizers, and actions as a way of changing the world so that it matches some extremely strongly held prediction (held so strongly that it is easier to change the world than to update the prediction, so acting is what minimizes the prediction error).
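To make the idea concrete, here is a toy sketch of my own (not from any active-inference paper): an agent holds a fixed prediction about a one-dimensional world state, and because the prediction is too strongly held to revise, the only way to shrink the prediction error is to act on the world.

```python
# Toy illustration of the active-inference intuition described above:
# the belief ("prediction") is frozen, so prediction error is minimized
# by changing the world state, not by updating the belief.

def act(world_state: float, prediction: float, step: float = 0.5) -> float:
    """Take an action that moves the world toward the prediction."""
    error = prediction - world_state
    return world_state + step * error  # action changes the world, not the belief

world, prediction = 0.0, 10.0
for _ in range(20):
    world = act(world, prediction)

# After repeated action, the prediction error is driven near zero.
print(abs(prediction - world) < 0.01)  # → True
```

This is obviously a caricature (real active inference involves generative models and free-energy minimization), but it captures the direction-of-fit point: the error shrinks because the world moved, not the prediction.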
My understanding mostly comes from this post by Scott Alexander:
My poor, fragile, little cognitive engines! These, then, will be the twin imperatives of your life: surprisal minimization and active inference. If your brains are still too small to process such esoteric terms, there are others available. Your father’s ancestors called them Torah and tikkun olam; your mother’s ancestors called them Truth and Beauty; your current social sphere calls them Rationality and Effective Altruism. You will learn other names, too: no perspective can exhaust their infinite complexity. Whatever you call them, your lives will be spent in their service, pursuing them even unto that far-off and maybe-mythical point where they blur into One.
Seems resonant with what you write in this sequence.