My examples of subagents appearing to mysteriously answer questions were meant to suggest that there are subtle things that IFS explains/predicts, which aren't automatically explained in other models. Examples of phenomena that contradict the IFS model would be even more useful, though I'm failing to think of what those would look like.
> Examples of phenomena that contradict the IFS model would be even more useful, though I'm failing to think of what those would look like.
The reason it's hard to think of what a counterexample would look like is that viewing things through an agentic lens makes you miss those counterexamples by confabulation. For example, you can trivially explain any behavior by postulating a part that wants to do that behavior. In contrast, simpler models have to be grounded in something more concrete (such as what the rule(s) are and how they were learned), which makes such models less flexible… and thus more likely to tell us something useful about the world, due to what they rule out.
> My examples of subagents appearing to mysteriously answer questions were meant to suggest that there are subtle things that IFS explains/predicts, which aren't automatically explained in other models.
I don’t see that, though: if you ask people questions, they are mysteriously able to answer them, and the answers can often be startling, and true. While I can’t speak to your subjective experience, I can describe something from mine that sounds similar: if somebody asks me a question under certain circumstances, I find myself listening to an answer coming out of my mouth that I did not know before, and which I do not experience myself as knowing until after I hear myself say it.
This state of affairs does not involve IFS-style parts, so ISTM that IFS does not add anything special here to the idea that questions can trigger the appearance of information in the mind that one did not explicitly know beforehand… Almost as if it had just been prepared fresh by a chef in the kitchen, vs. something that was already in our refrigerator of knowledge. ;-)