I have an impression that David Deutsch’s critical rationalism, with its idea that “absolutely everything in a mind can be critiqued”, describes something like the model you’re pointing at. Unfortunately I don’t know of any writing on this, and I don’t remember why I have this impression.
I do see the inverse side: a single fixed goal would be something in the mind that’s not open to critique, and hence not truly generally intelligent from a Deutschian perspective (I would guess; I don’t actually know his work well).
To expand on the “not truly generally intelligent” point: one way this could look is if the goal contained tacit assumptions about the universe that later turned out not to be true in general. For example, suppose the agent’s goal involved increasingly long-range simultaneous coordination, and the goal was formed before the discovery of relativity (under which simultaneity at a distance is not well-defined). If the goal were really unchangeable, it would bar, or at least complicate, the agent’s updating to a new, truer ontology.