Shameless self-promotion, since your comments caught my attention (first this, now this comment on cognitive manifolds): I wrote about how we might model superintelligence as “meta-metacognition” (a possible parallel to your “manifold approximating our metamanifold”); see third order cognition.
I still need to create a distilled write-up, as the post isn’t very readable; it’s long, so please just skim it if you’re interested. The main takeaway is that if we model digital intelligence this way, we can talk precisely about how it relates to human intelligence and explore those factors in alignment research and misalignment scenarios.
I propose the following factors to describe that relationship: 1) second-order identity coupling, 2) lower-order irreconcilability, 3) bidirectional integration with lower-order cognition, 4) agency permeability, 5) normative closure, 6) persistence conditions, 7) boundary conditions, 8) homeostatic unity.