The content of this and the other comment thread seems to be overlapping, so I’ll consolidate (pun intended) my responses to this one. Before we go on, let me check that I’ve correctly understood what I take to be your points.
Does the following seem like a fair summary of what you are saying?
Re: IFS as a reductionist model:
Good reductionism involves breaking down complex things into simpler parts. IFS “breaks down” behavior into mini-people inside our heads, each mini-person being equally complex as a full psyche. This isn’t simplifying anything.
Talking about subagents/parts or using intentional language causes people to assign things properties that they actually don’t have. If you say that a thermostat “wants” the temperature to be something in particular, or that a part “wants” to keep you safe, then you will predict its behavior to be more flexible and strategic than it really is.
The real mechanisms behind emotional issues aren’t really doing anything agentic, such as strategically planning ahead for the purpose of achieving a goal. Rather they are relatively simple rules which are used to trigger built-in subsystems that have evolved to run particular kinds of action patterns (punishing, protesting, idealistic virtue signalling, etc.). The various rules in question are built up / selected for using different reinforcement learning mechanisms, and define when the subsystems should be activated (in response to what kind of a cue) and how (e.g. who should be the target of the punishing).
Reinforcement learning does not need to have global coherence. Seemingly contradictory behaviors can be explained by e.g. a particular action being externally reinforced or becoming self-reinforcing, all the while it causes globally negative consequences despite being locally positive.
On the other hand, IFS assumes that there is dedicated hardware for each instance of an action pattern: each part corresponds to something like an evolved module in the brain, and each instance of a negative behavior/emotion corresponds to a separate part.
The assumption of dedicated hardware for each instance of an action pattern is multiplying entities beyond necessity. The kinds of reinforcement learning systems that have been described can generate the same kinds of behaviors with much less dedicated hardware. You just need the learning systems, which then learn rules for when and how to trigger a much smaller number of dedicated subsystems.
The assumption of dedicated hardware for each instance of an action pattern also contradicts the reconsolidation model, because if each part was a piece of built-in hardware, then you couldn’t just entirely change its behavior through changing earlier learning.
Everything in IFS could be described more simply in terms of if-then rules, reinforcement learning etc.; if you do this, you don’t need the metaphor of “parts”, and you also have a more correct model which does actual reduction to simpler components.
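To make the “simple rules” picture in the points above concrete, here is a toy sketch in code. All names here are hypothetical illustrations of the if-then-rules-plus-reinforcement view, not claims about actual brain architecture: a thermostat is a single conditional with no goals or planning, and a learned emotional trigger can likewise be modeled as a cue-to-subsystem rule whose strength is adjusted by local reinforcement.

```python
# A thermostat "wants" nothing: it is one if-then rule, with no planning.
def thermostat(temp: float, setpoint: float) -> str:
    return "heat_on" if temp < setpoint else "heat_off"

# A learned trigger, modeled as a simple rule: a cue activates a built-in
# subsystem when the rule's learned strength exceeds a threshold. The
# strength is adjusted by local reinforcement, which can strengthen a rule
# that is locally rewarded even when its global consequences are negative.
class TriggerRule:
    def __init__(self, cue: str, subsystem: str, strength: float = 0.5):
        self.cue = cue            # e.g. "criticism"
        self.subsystem = subsystem  # e.g. "punishing"
        self.strength = strength

    def fires(self, observed_cue: str) -> bool:
        return observed_cue == self.cue and self.strength > 0.3

    def reinforce(self, reward: float, lr: float = 0.1) -> None:
        # Purely local update; nothing here evaluates global coherence.
        self.strength += lr * reward
```

Nothing in this sketch strategizes or negotiates; the apparent “agency” is just rules being matched against cues.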
Re: the practical usefulness of IFS as a therapeutic approach:
Approaching things from an IFS framework can be useful when working with clients with severe trauma, or other cases when the client is not ready/willing to directly deal with some of their material. However, outside that context (and even within it), IFS has a number of issues which make it much less effective than a non-parts-based approach.
Thinking about experiences like “being in distress” or “inner criticism” as parts that can be changed suggests that one could somehow completely eliminate those. But while triggers to pre-existing brain systems can be eliminated or changed, those brain systems themselves cannot. This means that it’s useless to try to get rid of such experiences entirely. One should rather focus on the memories which shape the rules that activate such systems.
Knowing this also makes it easier to unblend, because you understand that what is activated is a more general subsystem, rather than a very specific part.
If you experience your actions and behaviors being caused by subagents with their own desires, you will feel less in control of your life and more at the mercy of your subagents. This is a nice crutch for people with denial issues who want to disclaim their own desires, but not a framework that would enable you to actually have more control over your life.
“Negotiating with parts” buys into the above denial, and has you do playacting inside your head without really getting into the memories which created the schemas in the first place. If you knew about reconsolidation, you could just target the memories directly, and bypass all of the extra hassle.
“Developing self-leadership” involves practicing a desired behavior so that it could override an old one; this is what Unlocking the Emotional Brain calls a counteractive strategy, and is fragile in all the ways that UtEB describes. It would be much more effective to just use a reconsolidation-based approach.
IFS makes it hard to surface the assumptions behind behavior, because one is stuck in the frame of negotiating with mini-people inside one’s head, rather than looking at the underlying memories and assumptions. Possibly an experienced IFS therapist can help look for those assumptions, but then one might as well use a non-parts-based framework.
Even when the therapist does know what to look for, the fact that IFS does not have a direct model of evidence and counterevidence makes it hard to find the interventions which will actually trigger reconsolidation. Rather one just acts out various behaviors which may trigger reconsolidation if they happen to hit the right pattern.
Besides the issue with luck, IFS does not really have the concept of a schema which keeps interpreting behaviors in the light of its existing model, and thus filtering out all the counter-evidence that the playacting might otherwise have contained. To address this you need to target the problematic schema directly, which requires you to actually know about this kind of a thing and be able to use reconsolidation techniques directly.
Excellent summary! There are a couple of areas where you may have slightly over-stated my claims, though:
IFS “breaks down” behavior into mini-people inside our heads, each mini-person being equally complex as a full psyche.
I wouldn’t say that IFS claims each mini-person is equally complex, only that the reduction here is just a separation of goals or concerns, and does not reduce the complexity of having agency. And this is particularly important because it is the elimination of the idea of smart or strategic agency that allows one to actually debug brains.
Compare to programming: when writing a program, one intends for it to behave in a certain way. Yet bugs exist, because the mapping of intention to actual rules for behavior is occasionally incomplete or incorrectly matched to the situation in which the program operates.
But, so long as the programmer thinks of the program as acting according to the programmer’s intention (as opposed to whatever the programmer actually wrote), it is hard for that programmer to actually debug the program. Debugging requires the programmer to discard any mental models of what the program is “supposed to” do, in order to observe what the program is actually doing… which might be quite wrong and/or stupid.
In the same way, I believe that ascribing “agency” to subsets of human behavior is a similar instance of being blinded by an abstraction that doesn’t match the actual thing. We’re made up of lots of code, and our problems can be considered bugs in the code… even if the behavior the code produces was “working as intended” when it was written. ;-)
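The debugging analogy can be made concrete with a hypothetical example: a function whose author intended one behavior, where predicting from the intention fails and only reading what the code actually does gives the right answer.

```python
# Intention: "return the average of the positive numbers in xs".
def average_positive(xs):
    total, count = 0, 0
    for x in xs:
        if x > 0:
            total += x
        count += 1  # bug: counts every element, not just the positive ones
    return total / count

# Predicting from the intention says average_positive([2, 4, -6]) == 3.0;
# reading what the code actually does says (2 + 4) / 3 == 2.0.
```

As long as you model the function by its author’s intention, the 2.0 looks inexplicable; modeling it by the actual rules makes the “behavior” obvious.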
On the other hand, IFS assumes that there is dedicated hardware for each instance of an action pattern: each part corresponds to something like an evolved module in the brain, and each instance of a negative behavior/emotion corresponds to a separate part.
I don’t claim that IFS assumes dedicated per-instance hardware, but it seems kind of implied. My understanding is that IFS at least assumes that parts are agents that 1) do things, 2) can be conversed with as if they were sentient, and 3) can be reasoned or negotiated with. That’s more than enough to view it as not reducing “agency”.
But the article that we are having this discussion on does try to model a system with dedicated agents actually existing (whether in hardware or software), so at least that model is introducing dedicated entities beyond necessity. ;)
Besides the issue with luck, IFS does not really have the concept of a schema which keeps interpreting behaviors in the light of its existing model, and thus filtering out all the counter-evidence that the playacting might otherwise have contained. To address this you need to target the problematic schema directly, which requires you to actually know about this kind of a thing and be able to use reconsolidation techniques directly.
Technically, it’s possible to change people without intentionally using reconsolidation or a technique that works by directly attempting it. It happens by accident all the time, after all!
And it’s quite possible for an IFS therapist to notice the filtering or distortions taking place, if they’re skilled and paying attention. Presumably, they would assign it to a part and then engage in negotiation or an attempt to “heal” said part, which then might or might not result in reconsolidation.
So I’m not claiming that IFS can’t work in such cases, only that to work, it requires an observant therapist. But such a good therapist could probably get results with any therapy model that gave them sufficient freedom to notice and address the issue, no matter what terminology was used to describe the issue, or the method of addressing it.
As the authors of UTEB put it:
Transformational change of the kind addressed here—the true disappearance of long-standing, distressing emotional learning—of course occurs at times in all sorts of psychotherapies that involve no design or intention to implement the transformation sequence by creating juxtaposition experiences.
After all, reconsolidation isn’t some super-secret special hack or unintended brain exploit, it’s how the brain normally updates its predictive models, and it’s supposed to happen automatically. It’s just that once a model pushes the prior probability of something high (or low) enough, your brain starts throwing out each instance of a conflicting event, even if considered collectively they would be reason to make a major update in the probability.
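A quick numerical sketch of that last point, using Bayes’ rule in odds form (the specific numbers are illustrative assumptions): with an extreme prior, each single piece of conflicting evidence barely moves the belief, so it is easy to dismiss, while the same evidence considered collectively would demand a major update.

```python
def update_odds(prior_odds: float, likelihood_ratio: float, n: int = 1) -> float:
    # Bayes in odds form: posterior odds = prior odds * LR^n
    return prior_odds * likelihood_ratio ** n

def prob(odds: float) -> float:
    return odds / (1 + odds)

prior_odds = 999.0   # P(schema is right) = 0.999
lr_against = 1 / 4   # each conflicting event is 4x likelier if the schema is wrong

one = prob(update_odds(prior_odds, lr_against, n=1))    # still about 0.996
ten = prob(update_odds(prior_odds, lr_against, n=10))   # about 0.001
```

Each instance, taken alone, leaves the schema looking nearly certain; only the collective update would flip it, which is exactly the evidence-filtering failure mode described above.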
Here’s my reply! Got article-length, so I posted it separately.
Thanks for the clarifications! I’ll get back to you with my responses soon-ish.