Intelligence can be defined in a way that does not depend on a fixed objective function, for example by measuring an agent's tendency to achieve convergent instrumental goals.
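To make that slightly more concrete, here is a minimal Python sketch of the idea. Everything in it is a placeholder of my own (the goal list, the environment interface), not an existing benchmark or library: it scores an agent by how reliably it attains a basket of convergent instrumental subgoals, averaged across environments, without ever naming a terminal objective.

```python
from statistics import mean
from typing import Callable, Iterable

# Convergent instrumental subgoals (Omohundro/Bostrom-style): useful for
# almost any terminal objective. The names and interface below are
# illustrative assumptions, not an established benchmark.
INSTRUMENTAL_GOALS = ["acquire_resources", "preserve_self", "keep_options_open"]

def instrumental_score(
    run_agent: Callable[[str, str], float],  # (environment, goal) -> attainment in [0, 1]
    environments: Iterable[str],
) -> float:
    """Objective-agnostic intelligence proxy: mean attainment of
    convergent instrumental subgoals across diverse environments."""
    return mean(
        run_agent(env, goal)
        for env in environments
        for goal in INSTRUMENTAL_GOALS
    )
```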
For intelligence progression I see a three-tier framework: lower-order cognition, metacognition (which captures "human intelligence" as we usually conceive of it), and third-order cognition (superintelligence relative to human intelligence).
Relating this to your description of goal-seeking behaviour: to your point, I describe a few compound properties that aim to capture what is going on inside an agent (a "being"). For example, in a given moment there is "agency permeability" between cognitive layers, where each layer can influence and be influenced by the "global action policy" of that moment. There is also a binding property of "homeostatic unity", where all subsystems participate in the same self-maintenance goal.
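As a toy illustration of how those two properties might compose (every name here is my own invention for this sketch, not a standard formalism), each layer proposes an action distribution, permeability weights mix the proposals into the global policy of that moment, and one shared homeostatic term is visible to all subsystems:

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 4
LAYERS = ["lower_order", "metacognition", "third_order"]

# "Agency permeability": how strongly each layer's proposal feeds the
# global action policy this moment. The weights are arbitrary here.
permeability = {"lower_order": 0.5, "metacognition": 0.3, "third_order": 0.2}

def layer_proposal(layer: str) -> np.ndarray:
    """Stand-in for a layer's preferred action distribution (softmax of
    random logits; a real layer would compute this from its own state)."""
    logits = rng.normal(size=N_ACTIONS)
    return np.exp(logits) / np.exp(logits).sum()

def global_action_policy() -> np.ndarray:
    """Mix layer proposals by permeability into one policy. In a fuller
    model the resulting policy would also feed back and update each
    layer, giving the two-way influence described above."""
    mixed = sum(permeability[layer] * layer_proposal(layer) for layer in LAYERS)
    return mixed / mixed.sum()

# "Homeostatic unity": all subsystems share a single self-maintenance
# objective, modelled here as one drift penalty every layer optimises.
def homeostatic_penalty(state: np.ndarray, set_point: np.ndarray) -> float:
    return float(np.sum((state - set_point) ** 2))
```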
In a globally optimised version of this model, I envision a superintelligent third-order cognitive layer that has "done the self work": it understands its own motives and has iterated toward enlightened levels of altruism, prosocial value frameworks, stoicism, and so on, implemented specifically as self-supervised learning.
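Highly schematically, the self-supervision I have in mind treats the layer's own decision traces as training data: it labels the motive behind each of its actions and nudges its value weights toward a target profile. The motive set, target profile, and update rule below are assumptions made purely for illustration:

```python
import random

# Hypothetical motive taxonomy and "enlightened" target profile; both
# are placeholders, not claims about what a real system would use.
MOTIVES = ["self_interest", "prosocial", "curiosity"]
TARGET = {"self_interest": 0.2, "prosocial": 0.6, "curiosity": 0.2}

def infer_motive(action_trace: list[str]) -> str:
    """Stand-in for introspection: the layer labels its own behaviour.
    A real system would learn this labeller from its traces."""
    return random.choice(MOTIVES)

def self_work(values: dict[str, float], traces: list[list[str]],
              lr: float = 0.1) -> dict[str, float]:
    """One iteration of 'self work': self-label each trace, then shift
    value weights toward the target profile and renormalise."""
    for trace in traces:
        motive = infer_motive(trace)
        values[motive] += lr * (TARGET[motive] - values[motive])
    total = sum(values.values())
    return {k: v / total for k, v in values.items()}
```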
I acknowledge this is a somewhat hand-wavy solution to value plurality, but I argue that some such technique becomes necessary once we are discussing the realm of superintelligence.