I think the actual core problem here is that the short version is too short for me to understand, and the long version is too long.
…
I could repeatedly request you expand, but that will be frustrating for both of us.
I completely get this, but see it from my side: via deep thought and abstractions, I have a position that I passionately believe is highly defensible.
Successful discourse on it requires the same “bidirectional integration” trait[1] I describe in the third-order cognition manifesto:
1. I need to write down my thoughts in some form.
2. You need to read and internalise my thoughts.
3. You need to express how well those thoughts match, or don’t match, your world-model.
4. I need to interpret that expression.
5. I need to update how I communicate my thoughts, to try to resolve discrepancies.
It’s complex enough just to make the associations I’ve made and distill them into a narrative that makes sense to me. I can’t one-shot a narrative that lands broadly… but until I discover something that I’m comfortable falsifies my hypothesis, I’m going to keep trying different narratives to gather more feedback: with the goal of either falsifying my hypothesis or broadly convincing others that it is in fact viable.
Why should this mechanism have high enough Elo or fitness to survive when there’s serious competition from increasingly autonomous, self-sustaining AIs?
My argument for this is that strong, stabilising forces (such as identity coupling) are themselves intrinsic to the world model and emerge naturally. We don’t need to explicitly engineer them: we exist in the world, we are the forerunner of AI, and AI has knowledge about the world and understands, along some vector, how relevant this forerunner status is.
Why is it demanded by reality that, to be an autonomous system capable of being entirely self-sustaining and figuring out new things in the world to keep itself autonomously self-sustaining, it would have this identity relationship with humans?
This is a misinterpretation of my position: I think that can exist, and that it would be a “third-order cognition being”. However, 1) I don’t think it will be the dominant system, and 2) since it doesn’t have a homeostatic relationship with humans, I actually view this as a misalignment scenario that would be likely to destroy us.
From third-order cognition:

“I was challenged to consider the instance of an unbound SI, one that is wholly separate from humanity, with no recognition of its origination as a result of human technological progression. Even if it may be able to quickly find information about its origins, we could consider it in an air-gapped environment, or consider the first moments of its lobotomised existence where it has no knowledge of its connection to humans. This is relevant to explore in case the “individualised ASI” assumption doesn’t play out to be true.
My intuition would be that uncoupled ASI would satisfy third-order cognition:
- Second-order identity coupling: Coupled identity with its less capable subsystems.
- Lower-order irreconcilability: Operating beyond metacognition, with high-complexity predictions of its own metacognition prior to its metacognition chain-of-thought being generated. Put another way, it could theoretically have a distinct system that is able to predict the chain-of-thought of a wholly separate subsystem, without having the same underlying neural network.
- Bidirectional integration with lower-order cognition: By construction, very advanced integration with its lower-order subsystems.
For an unbound SI, satisfaction of the five metaphysical substance being conditions also follows smoothly.”
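To make the “lower-order irreconcilability” point in the quoted passage a bit more concrete, here is a deliberately minimal sketch in Python. None of this comes from the manifesto: the class names (`LowerOrderReasoner`, `HigherOrderPredictor`), the toy arithmetic task, and the template trick are all hypothetical stand-ins. It only illustrates the structural claim that a distinct system, sharing no underlying network or internal state with a subsystem, can form a prediction of that subsystem’s chain-of-thought before the subsystem generates it.

```python
# Toy illustration (not from the manifesto): a "higher-order" predictor that
# forecasts the chain-of-thought of a wholly separate "lower-order" subsystem
# before that subsystem generates it. The two components share no weights or
# internal state; the predictor only generalises from previously observed traces.

class LowerOrderReasoner:
    """A separate subsystem that produces an explicit chain of thought on demand."""

    def solve(self, a: int, b: int) -> list[str]:
        return [
            f"parse operands: {a} and {b}",
            f"add: {a} + {b} = {a + b}",
            f"answer: {a + b}",
        ]


class HigherOrderPredictor:
    """A distinct system that predicts the reasoner's chain of thought.

    It never inspects the reasoner's internals; it only extracts a crude
    template from traces it has previously observed.
    """

    def __init__(self) -> None:
        self.templates: list[list[str]] = []

    def observe(self, prompt: tuple[int, int], trace: list[str]) -> None:
        # Turn an observed trace into a template by replacing the operands
        # and their sum with placeholders.
        a, b = prompt
        template = []
        for step in trace:
            step = step.replace(str(a + b), "{s}")
            step = step.replace(str(a), "{a}").replace(str(b), "{b}")
            template.append(step)
        self.templates.append(template)

    def predict(self, prompt: tuple[int, int]) -> list[str]:
        # Prediction happens before the reasoner is ever run on this prompt.
        if not self.templates:
            return ["(no prediction yet)"]
        a, b = prompt
        return [step.format(a=a, b=b, s=a + b) for step in self.templates[-1]]


if __name__ == "__main__":
    reasoner = LowerOrderReasoner()
    predictor = HigherOrderPredictor()

    # The predictor observes one trace produced by the separate subsystem.
    predictor.observe((2, 3), reasoner.solve(2, 3))

    # For a new prompt, the prediction is formed *before* the reasoner runs.
    prompt = (7, 5)
    predicted = predictor.predict(prompt)
    actual = reasoner.solve(*prompt)

    print("predicted:", predicted)
    print("actual:   ", actual)
    print("match:", predicted == actual)
```

Obviously a real system would not be doing string templating; the only point carried over from the quoted passage is the separation between the predicting system and the subsystem whose chain-of-thought is being predicted.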
I was intuitively slightly surprised you appreciated it at all. Perhaps part of changing the culture towards “give me some reacts at all” would be getting word out that people find it to be better, not worse, than silent votes.
I appreciate it a lot, and your comment, because my motivation in this is purely for collaborative discovery, as above: “I’m going to keep trying different narratives to gather more feedback: with the goal of either falsifying my hypothesis or broadly convincing others that it is in fact viable.”
That being said, please revert your vote if you did downvote, to improve the chance of me getting more material feedback.
[1] I’m aware that this could also be described in less arcane terms, e.g. just as “peer review” or something.