On Internal Family Systems and multi-agent minds: a reply to PJ Eby

Introduction

I recently had a conversation with PJ Eby in the comments of my article “Building up to an Internal Family Systems model”. My most recent reply to it started getting rather long, and is also more broadly relevant as it contains updates on how I view IFS and the multi-agent framework in general. As a result, I decided to post it as its own article.

pjeby’s comments that I’m replying to are here and here; to verify that I understood him correctly, I wrote a summary of what I took to be his core points (below). He mostly endorsed them as correct, with a few clarifications; the summary below incorporates those corrections. After listing all of his points, I will present my own replies.

The summary is divided into two parts. Earlier on, pjeby wondered why IFS was popular among rationalists; one of the things I said in response was that rationalists like reductionism, and IFS helps reduce the mind into smaller components. pjeby felt that IFS is not good reductionism. My response goes into detail about what kinds of claims I view IFS as making, how I interpret those in terms of my multi-agent minds series, and how I would now phrase my original article differently. Here I broadly agree with pjeby, and feel that our disagreement has more to do with us using terminology differently than with actual object-level issues.

The second part concerns the practical usefulness of IFS as a therapeutic model. After all, a model can be useful while still not being true. Here I have more disagreement, and feel that (regardless of how good it is as a literal description of the mind) IFS has considerable practical value.

This article assumes that one has read my later article about Unlocking the Emotional Brain, as I reference concepts from it.

My summary of pjeby’s positions

IFS as a reductionist model

  • Good reductionism involves breaking down complex things into simpler parts. In the case of doing reductionism on agents, the end result should be mechanisms without agency. IFS “breaks down” behavior into mini-people inside our heads, each mini-person still being just as agenty as a full person is. This isn’t simplifying anything.

  • Talking about subagents/​parts or using intentional language causes people to assign things properties that they actually don’t have. If you say that a thermostat “wants” the temperature to be something in particular, or that a part “wants” to keep you safe, then you will predict its behavior to be more flexible and strategic than it really is.

  • The mechanisms behind emotional issues aren’t doing anything agentic, such as strategically planning ahead for the purpose of achieving a goal. Rather they are relatively simple rules which are used to trigger built-in subsystems that have evolved to run particular action patterns (punishing, protesting, idealistic virtue signalling, etc.). The various rules in question are built up /​ selected for using different reinforcement learning mechanisms, and define when the subsystems should be activated (in response to what kind of a cue) and how (e.g. who should be the target of the punishing).

  • Reinforcement learning does not need to have global coherence. Seemingly contradictory behaviors can be explained by e.g. a particular action being externally reinforced or becoming self-reinforcing, all the while it causes globally negative consequences despite being locally positive.

  • On the other hand, IFS seems to imply that there is dedicated hardware for each instance of an action pattern: each part corresponds to something like an evolved module in the brain, and each instance of a negative behavior/emotion corresponds to a separate part. Even if IFS does not assume this, at least the original article assumed that dedicated agents actually exist.

  • The assumption of dedicated hardware for each instance of an action pattern is multiplying entities beyond necessity. The kinds of reinforcement learning systems that have been described can generate the same kinds of behaviors with much less dedicated hardware. You just need the learning systems, which then learn rules for when and how to trigger a much smaller number of dedicated subsystems.

  • The assumption of dedicated hardware for each instance of an action pattern also contradicts the reconsolidation model, because if each part was a piece of built-in hardware, then you couldn’t just entirely change its behavior through changing earlier learning.

  • Everything in IFS could be described more simply in terms of if-then rules, reinforcement learning etc.; if you do this, you don’t need the metaphor of “parts”, and you also have a more correct model which does actual reduction to simpler components.

IFS as a therapeutic approach:

  • Approaching things from an IFS framework can be useful when working with clients with severe trauma, or other cases when the client is not ready/​willing to directly deal with some of their material. However, outside that context (and even within it), IFS has a number of issues which make it much less effective than a non-parts-based approach.

  • Thinking about experiences like “being in distress” or “inner criticism” as parts that can be changed suggests that one could somehow completely eliminate those. But while triggers to pre-existing brain systems can be eliminated or changed, those brain systems themselves cannot. This means that it’s useless to try to get rid of such experiences entirely. One should rather focus on the memories which shape the rules that activate such systems.

  • Knowing this also makes it easier to unblend, because you understand that what is activated is a more general subsystem, rather than a very specific part.

  • If you experience your actions and behaviors as being caused by subagents with their own desires, you will feel less in control of your life and more at the mercy of your subagents. This is a nice crutch for people with denial issues who want to disclaim their own desires, but not a framework that would enable you to actually have more control over your life.

  • “Negotiating with parts” buys into the above denial and has you do playacting inside your head without really getting into the memories which created the schemas in the first place. If you knew about reconsolidation, you could just target the memories directly, and bypass the extra hassle.

  • “Developing self-leadership” involves practicing a desired behavior to override an old one; this is what Unlocking the Emotional Brain calls a counteractive strategy, and is fragile in all the ways that UtEB describes. It would be much more effective to just use a reconsolidation-based approach.

  • IFS makes it hard to surface the assumptions behind behavior, because one is stuck in the frame of negotiating with mini-people inside one’s head, rather than looking at the underlying memories and assumptions. Possibly an experienced IFS therapist can help look for those assumptions, but then one might as well use a non-parts-based framework.

  • Even when the therapist does know what to look for, the fact that IFS does not have a direct model of evidence and counterevidence makes it hard to find the interventions that will actually trigger reconsolidation. Rather one just acts out various behaviors which may trigger reconsolidation if they happen to hit the right pattern.

  • Besides the issue with luck, IFS does not really have the concept of a schema which keeps interpreting behaviors in light of its existing model, and thus filtering out all the counter-evidence the playacting might otherwise have contained. A skilled IFS therapist paying attention might notice this happening and deal with it, but such a good therapist could probably get results with any therapy model that gave them sufficient freedom to notice and address the issue.

My responses

IFS as a reductionist model:

  • Good reductionism involves breaking down complex things into simpler parts. In the case of doing reductionism on agents, the end result should be mechanisms without agency. IFS “breaks down” behavior into mini-people inside our heads, each mini-person still being just as agenty as a full person is. This isn’t simplifying anything.

I agree that IFS feels pretty anthropomorphic, and especially if you take the subpersonalities thing too literally, it’s going to guide you away from more correct models. And I agree that reducing things to subpersonalities isn’t really reducing things to simpler mechanisms.

That said, there are degrees of correctness and wrongness. We got into the discussion about reductionism when pjeby wondered why IFS was so popular among rationalists, and I suggested that it might be in part because rationalists like reductionism. Now, maybe reductionism was a bad term, and I should have used something like “rationalists like having better models” instead.

In any case, what I was trying to say was that even if IFS isn’t great reductionism, it’s still a better model than a naïve conception of the mind as a unified whole. Even if some aspects of it are misleading, “this person is doing self-destructive things because of a firefighter response to a triggered exile” is better than “this person is doing self-destructive things because they are too stupid or irresponsible to act better”.

I think the main virtue of IFS is just that it breaks down the mind into, well, parts, each containing different perspectives and goals. There are many interpretations of IFS; some go with the full-blown “subpersonalities” approach, whereas others consider the subpersonalities just a metaphor for something fundamentally non-agenty. Even the first edition of the Internal Family Systems Therapy textbook noted that while, in the author’s experience, the system works better if you treat parts as independent entities, many IFS therapists treat them merely as metaphors, and that this works too:

To use the IFS model, one does not have to believe in an ontological sense that parts are internal people. Many therapists view this depiction merely as a useful metaphor and have success with the approach. From a pragmatic perspective, however, the inhabitants of our internal systems respond best to that kind of respect, so it is best to treat them that way—to attribute to them human qualities and responses. Some therapists resolve this problem by not viewing parts as people when they are theorizing, but seeing them as inner people when they are treating clients.

In our previous discussion, there were moments when I said something like “if you treat the mind as a reinforcement learning system, then that predicts X which doesn’t actually match human behavior, but if you treat the mind as an IFS-like system, then that makes more sense”. pjeby responded by pointing out that reinforcement learning does not need to be globally coherent, and things can actually be explained fine in a reinforcement learning framework if one postulates various local RL mechanisms. I think we were talking past each other, because an RL framework with local mechanisms is the kind of thing that I meant by “an IFS-like system”.

The distinction that I was drawing was between “a unified top-down system of the kind that one finds in folk psychology or most contemporary AI designs” on the one hand, versus “a system made up of many distinct learning mechanisms and knowledge stores” on the other. When I said that IFS might be popular among rationalists because they like reductionism, I just meant that IFS is a good model for many because it naturally guides people towards thinking along the latter kinds of lines.

E.g. the top-voted comment on my IFS article says that what it suggests “about procrastination and addiction alone [...] are already huge”, so even in 2019, many rationalists found the IFS model to offer better explanations than any other model they knew of. Here’s another recent comment that seems to have moved towards the latter kind of model through the IFS frame:

Over this past year I’ve been thinking more in terms of “Much of my behavior exists because it was made as a mechanism to meet a need at some point.”
Ideas that flow out of this frame seem to be things like Internal Family Systems, and “if I want to change behavior, I have to actually make sure that need is getting met.”

Of course, pjeby’s point was that there are even better models than what one gets from IFS; and I agree. The IFS model was one of the starting points of my sequence, but my later posts have been moving more explicitly towards a UtEB kind of model. The intention of my comment was descriptive rather than prescriptive: historically, the IFS model has been popular because it’s pragmatically useful and because, despite its possible flaws, rationalists haven’t been exposed to any better models.

  • Talking about subagents/​parts or using intentional language causes people to assign things properties that they actually don’t have. If you say that a thermostat “wants” the temperature to be something in particular, or that a part “wants” to keep you safe, then you will predict its behavior to be more flexible and strategic than it really is.

I agree that excessive anthropomorphization is a possible issue, both with IFS in particular and with intentional language in general. As pjeby says, it can cause people to misapply mental models derived from full-blown humans to parts of the psyche, preventing people from realizing what’s going on. (In fact, this was one of the main reasons why I stayed away from IFS for a long time, since it sounded like naïve folk psychology at best.)

Though I don’t think it’s necessarily a problem. When pjeby said

… if you actually want to predict how a thermostat behaves, using the brain’s built-in model of “thing with intentional stance”, you’re making your model worse. If you model the thermostat as, “thing that ‘wants’ the house a certain temperature”, then you’ll be confused when somebody sticks an ice cube or teapot underneath it, or when the temperature sensor breaks.

… then it’s worth noting that the reason why this example works is that neither of us is actually going to make that kind of a mistake. We know enough about mechanical stuff to realize what’s meant by “wanting” in this context, and what its limitations are.

Now again, it’s true that if you don’t know anything about mechanical stuff, then you will be misled. On the other hand, if you do know enough to understand what’s meant, then it’s a convenient shorthand and aid for thinking. Evolutionary biologists might talk about e.g. “evolution selecting for a trait”, which makes evolution sound like an agent doing things, even though they know full well what’s actually happening.

In one comment, I said that the parts in IFS feel basically like the schemas in UtEB to me, and pjeby replied:

A “model” is passive: it merely outputs predictions or evaluations, which are then acted on by other parts of the brain. It doesn’t have any goals, it just blindly maps situations to “things that might be good to do or avoid”. An “agent” is implicitly active and goal-seeking, whereas a model is not.

But suppose that we have these descriptions:

Agenty: A manager part scans the environment for things it considers threats, taking a particular action whenever it detects something that it considers a potential sign of threat.

Passive: A model contains a record of some kinds of sensory inputs which have previously been associated with negative consequences, and also contains an action to be taken in response. When cues matching that record are detected in the system’s sensory input, the system takes the corresponding action.

… then the “passive” version sounds to me like it’s just a description of how the “agenty” version is implemented. Especially if one knows (as I assume LW readers do) that there’s no clear-cut distinction between code and data, and that the distinction is even messier in the brain.
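To make the equivalence concrete, here is a toy sketch of the “passive” description as nothing more than a lookup over stored cue-to-action records. (This is purely my own illustration in Python; the names and examples are made up and nothing here is taken from IFS, UtEB, or pjeby’s comments.) From the outside, running this in a loop would look like “a manager part scanning for threats and stepping in”, even though internally nothing agenty is going on:

```python
from dataclasses import dataclass

@dataclass
class ThreatRecord:
    cue: str      # sensory pattern previously associated with negative consequences
    action: str   # response that was reinforced in that context

# The "model": a passive store of cue/action records, nothing more.
records = [
    ThreatRecord(cue="raised voice", action="appease"),
    ThreatRecord(cue="deadline mentioned", action="distract yourself"),
]

def step(sensory_input: str) -> list[str]:
    """One 'tick' of the system: match the current input against stored records."""
    return [r.action for r in records if r.cue in sensory_input]

# An outside observer might say "a manager part noticed the deadline and stepped in";
# the implementation is just this lookup.
print(step("boss walks by, deadline mentioned in passing"))
```

Both descriptions pick out the same process; the intentional one is just a higher-level summary of it.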

The most famous society of mind model is probably Minsky’s Society of Mind; and while it seems to define “agents” in different ways in different chapters, some of them seem very low-level. I recall one chapter which sounded a lot like the “agents” in it were individual nodes in a neural network; another chapter had agents like “the add operation”. So it doesn’t feel like talking about “agents” would necessarily imply a lot of agency.

But then on the third hand, I again agree that the potential for being misled is still there—especially since I got slightly misled myself, when I wrote my IFS post and formulated things in terms of the “subagents” themselves doing reinforcement learning. I’ve been moving away from that kind of a formulation in my later posts, and would write that post differently if I were to do it now. Maybe I should change the name of the sequence from “multiagent minds” while I’m at it, but then I would need to think of some equally catchy name. :-)

On the fourth hand, there are things in my IFS post which, despite being framed in an agenty way, I think are still formally equivalent to a more passive formulation. E.g. I talked about subagents voting for different objects they wish to become the content of consciousness, and the winner being randomly determined based on the vote counts; this still seems like a fair way of describing things like winner-take-all networks or biased competition, assuming stochasticity in the process.
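As a rough illustration of what I mean (again my own toy sketch, not something from the original post or from any actual model of the brain), the voting metaphor can be read as nothing more than sampling a winner with probability proportional to the vote counts:

```python
import random

def select_conscious_content(votes: dict[str, float]) -> str:
    """Pick one candidate with probability proportional to its vote count."""
    candidates = list(votes)
    weights = list(votes.values())
    return random.choices(candidates, weights=weights, k=1)[0]

votes = {
    "memory of an embarrassing email": 3.0,
    "plan for tomorrow's meeting": 5.0,
    "urge to check phone": 2.0,
}

# Higher-vote items win more often, but a lower-vote item can still win at any
# given moment; this is the stochasticity assumed above.
print(select_conscious_content(votes))
```

Described this way, nothing about the “voting” requires the voters to be little people; it is just a weighted competition over what gets selected.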

  • The real mechanisms behind emotional issues aren’t really doing anything agentic, such as strategically planning ahead for the purpose of achieving a goal. Rather they are relatively simple rules used to trigger built-in subsystems that have evolved to run particular kinds of action patterns (punishing, protesting, idealistic virtue signalling, etc.). The various rules in question are built up /​ selected for using different reinforcement learning mechanisms, and define when the subsystems should be activated (in response to what kind of a cue) and how (e.g. who should be the target of the punishing).

I like this model. It’s similar to what I’ve been moving towards in my more recent posts, and reminds me of e.g. Janina Fisher’s model, but is more precise about what exactly is happening.

  • Reinforcement learning does not need global coherence. Seemingly contradictory behaviors can be explained by e.g. a particular action being externally reinforced or becoming self-reinforcing, all the while it causes globally negative consequences despite being locally positive.

Yes, I agree. In e.g. my post on subagents and coherence, I specifically talked about the overall system doing learning on which subsystems to activate in different situations, based on local considerations such as which subsystems have previously made accurate predictions. And my post on neural Turing machines tried to sketch out some of those mechanisms.

When I said that reinforcement learning alone couldn’t explain some behaviors, I was again drawing the distinction between “a unified top-down system” vs. “a system made up of many distinct learning mechanisms and knowledge stores”. I was interpreting “reinforcement learning” as an instance of the former, because my model of the mechanisms behind IFS already involves the kinds of RL mechanisms that pjeby has been describing.

  • On the other hand, IFS seems to imply that there is dedicated hardware for each instance of an action pattern: each part corresponds to something like an evolved module in the brain, and each instance of a negative behavior/​emotion corresponds to a separate part. Even if IFS does not assume it, then at least this article is assuming dedicated agents actually existing (whether in software or hardware).

  • The assumption of dedicated hardware for each instance of an action pattern is multiplying entities beyond necessity. The kinds of reinforcement learning systems that have been described can generate the same kinds of behaviors with much less dedicated hardware. You just need the learning systems, which then learn rules for when and how to trigger a much smaller number of dedicated subsystems.

  • The assumption of dedicated hardware for each instance of an action pattern also contradicts the reconsolidation model, because if each part was a piece of built-in hardware, then you couldn’t just entirely change its behavior through changing earlier learning.

  • Everything in IFS could be described more simply in terms of if-then rules, reinforcement learning etc.; if you do this, you don’t need the metaphor of “parts”, and you also have a more correct model which does actual reduction to simpler components.

I agree that “dedicated innate hardware for each instance of an action pattern” seems wrong, and that the same thing is more easily explained by learned rules. Again, if I were to re-write my IFS article now, I would model things somewhat differently. Though again, I don’t see there being that big of a difference between “a dedicated agent” and “a dedicated set of rules”, once you have a correct understanding of what “agent” means. And when you do have that understanding, you might as well use the “agent” term as a convenient shorthand.

Honestly, IFS doesn’t seem to be really clear on exactly how specific/​general the parts are interpreted to be. To some extent, this might be because of IFS’s general philosophy of investigating each client as a unique system and not imposing too much of a pre-existing theoretical framework on them.

The second edition of the Internal Family Systems Therapy book has a mention of an infant researcher who “observed infants rotating among four or five discrete states, which we would call parts”, suggesting a stance of interpreting parts as pretty general brain systems; likewise when it suggests that a new part has come online when a previously compliant 2-year-old wakes up one day and insists on saying “no” to everything. But then in practical work, a lot of the parts seem a lot more specific.

My guess is that the “interface” used for creating the appearance of parts isn’t really that specific and may sometimes target narrower and sometimes broader wholes; e.g. it might access one specific schema or set of experiences, and sometimes a collection of interrelated schemas (referred to when IFS claims that “parts have parts”). Still, it seems reasonable to me to think that IFS’s parts generally correspond to something like UtEB’s schemas.

Also worth noting that the second edition of Internal Family Systems Therapy explicitly references the memory reconsolidation model as a possible explanation of how parts are healed:

Our colleague Frank Anderson (2013) has pioneered using IFS in psychopharmacology. In the manual on IFS that we coauthored with Frank, he speculated that the IFS steps of witnessing and unburdening may correlate with a process neuroscientists call memory reconsolidation, a form of neuroplasticity described by Ecker, Ticic, Hulley, and Neimeyer (2012) that “changes existing emotional memory at the synaptic level” (Anderson et al., 2017, p. 127). In contrast to memory reconsolidation, a therapy like cognitive behavioral therapy (CBT) uses counteractive change strategies, which focus on developing new neural networks to counter older neural networks. As described in the manual, memory reconsolidation includes four phases. The first involves getting access to the traumatic memory. In IFS this would be finding, focusing, and fleshing out the target part. The second phase involves reactivation or destabilizing the emotional memory network and unlocking it at the synaptic level. In IFS this would be unblending, in which parts make room for the client’s Self. The third phase involves the mismatch, which is “a full disconfirmation of the meaning of the target memory” (Anderson et al., 2017, p. 127). In IFS this would occur during the process of witnessing and retrieving exiled parts. Finally, the fourth phase, erasure, involves clients making use of a new perspective to amend their understanding of the traumatic experience. In IFS this would happen when exiles let go of their burdens because the Self has a validating, nontraumatic perspective on the meaning of the part’s experience: You’re not bad—a bad thing happened to you.

The practical usefulness of IFS as a therapeutic approach

  • Approaching things from an IFS framework can be useful when working with clients with severe trauma, or other cases when the client is not ready/​willing to directly deal with some of their material. However, outside that context (and even within it), IFS has a number of issues which make it much less effective than a non-parts-based approach.

  • Thinking about experiences like “being in distress” or “inner criticism” as parts that can be changed suggests that one could somehow completely eliminate those. But while triggers to pre-existing brain systems can be eliminated or changed, those brain systems themselves cannot. This means that it’s useless to try to get rid of such experiences entirely. One should rather focus on the memories which shape the rules that activate such systems.

Note that IFS doesn’t really advocate getting rid of e.g. anger either, but rather transforming angry parts so that they are less extreme. That is, don’t get angry when it would be a bad idea, only get angry when it’s a good idea. This seems basically the same as what pjeby is saying: change the rules which regulate how the different systems are triggered.

Of course, it’s still possible to get the impression that one could somehow entirely eliminate particular response patterns. But is the other model any safer from a similar interpretation? If the IFS-user-who-never-wanted-to-be-angry thought “I will transform all of my angry parts so that I’m never angry anymore”, then the UtEB-mindhacker-who-never-wants-to-be-angry might think “I will eliminate all of the rules which trigger the angriness subsystem, and then that system will never get triggered anymore”.

I would expect both to fail for the same reason (being motivated by an anti-angriness schema rather than a desire to actually find the best response for any given situation), but I don’t see the UtEB-style framework offering any particular defense against this failure mode.

  • Knowing this also makes it easier to unblend, because you understand that what is activated is a more general subsystem, rather than a very specific part.

I tried this a bit the other day when I had some mild anxiety, intentionally reframing the experience as a subsystem being activated rather than a part. I think it might have made it slightly easier to unblend, but it was honestly probably mostly a placebo effect from trying something new? Not sure why one would expect it to make a difference which frame you use; maybe if you thought of the part as actively fighting back or something, then that would introduce extra effort into the unblending process. But in both frames, you are choosing to interpret the reaction as something distinct from “you”, and paying attention to features which let you distinguish it from “yourself”. That seems to be the crucial ingredient, rather than one’s interpretation of what exactly it is that one is unblending from.

  • If you experience your actions and behaviors being caused by subagents with their own desires, you will feel less in control of your life and more at the mercy of your subagents. This is a nice crutch for people with denial issues who want to disclaim their own desires, but not a framework that would enable you to actually have more control over your life.

I guess this could be a problem if you experienced your subagents as being really strong and independent and hard to reason with. But the IFS notion of all parts—even the most malicious and “evil-seeming” ones—fundamentally wanting what’s best for you, and its super-cooperative attitude towards them, seems good at framing things in a way that avoids this impression. If anything, people often feel liberated when they use IFS techniques and stop experiencing a part as something that torments them and that they are at the mercy of, coming to view it as a friend and an ally instead.

Of course, if one does have a schema that guides them into strong helplessness behaviors or into denial, then they can just filter out those parts of the IFS framing and get something like the denial-supporting model pjeby describes. But again, I’m not sure that any other framework would prevent one from forming an equivalent interpretation either. Instead of feeling at the mercy of your subagents, you would feel at the mercy of your schemas and rules for triggering subsystems.

Introspecting on my own experience, I feel like the rule-based framing might actually put me more into the “denying my own desires” mode. It primes me to think along the lines of “oh, all of my bad behaviors are just caused by bad irrational rules, I’ll use consolidation techniques to rewrite all of them”. Whereas if I model myself as IFS-like parts, then I think more along the lines of “hmm, all of my parts have some good intention for me and some reason for acting the way they do, let me see if they might have some important insight that I’ve missed”. (Which, in my experience, is a stance that gets much better results. SaraHax reports similar things.)

  • “Negotiating with parts” buys into the above denial, and has you do playacting inside your head without really getting into the memories which created the schemas in the first place. If you knew about reconsolidation, you could just target the memories directly, and bypass all of the extra hassle.

This doesn’t really describe my experience of IFS. My feeling (both from doing IFS personally and from facilitating it for others) is that it is not so much playacting, but rather basically just Focusing with a slightly different frame. Mark Lippmann describes IFS as “a structured super-charger for Focusing”, and notes that Focusing, IFS and Coherence Therapy seem to all be doing basically the same thing, with the slight variations in framing and in the questions used giving you radically different answers.

Let’s go with UtEB’s model of schemas containing a problem, a solution, and the memories that formed the schema. Again in my experience, “protectors” seem to correspond to the problem and solution components of the schema, and “exiles” to the original experiences giving rise to that schema.

pjeby says that “if you knew about reconsolidation, you could just target the memories directly”, but AFAICT, the questions that IFS suggests you ask your protectors are intended to give you access to the memories directly. One is asking questions like “what is the goal of this part”, “what is the part afraid that would happen if it didn’t do its job” (note that this one is basically the same as what Richard was asked to imagine in the UtEB example), and “what is this part’s core belief”. These are all queries for eliciting more information about the schema in question; the reason why one fleshes out a part’s appearance is to get a stronger felt sense handle to run those queries against.

“Negotiating with parts” seems to likewise be about passing information between conflicting schemas (this is more explicit in CFAR’s IDC technique, which is basically IFS without the stuff about healing exiles). One schema says that you should go into a particular situation, another says that it’s dangerous. So you elicit information from the schema which knows about the dangers and pass it to the pro-active schema, reconsolidating its model of “going to this situation would be a pure win”, and getting a more nuanced prediction about how the danger could be avoided. Then you pass it back to the objecting schema, reconsolidating it in turn, until the two schemas return a compatible prediction and you know how you should proceed.

IFS claims, and this has also often been my experience, that if you can access the exile (the original memories behind the schema) from a place of Self, then healing will happen naturally and intuitively. pjeby suggests that this happens mostly accidentally, if the person happens to hit upon the right behaviors to trigger the reconsolidation. But my experience doesn’t support that: if I have proper access to an exile and I’m truly in Self, the right behaviors come intuitively and reliably (maybe with just a bit of nudging from the therapist/facilitator), as IFS suggests.

My model is that the mind is wired to avoid discomfort, and that schemas tend to contain the implicit assumption of “if memories relating to this experience are triggered, then that will always make me feel horrible, so I need to avoid that”. For example: you weren’t polite to your parents and you got shamed for it. This is a problem to be solved (in the Coherence Therapy sense): impoliteness will lead to social disapproval. The knowledge of this problem is neurally encoded by associating the feeling of shame with the memory/concept of being impolite.

My key assumption here is that unlike the logical brain, the emotional brain does not have a concept of social disapproval being bad for instrumental reasons. Rather, social disapproval is bad because it triggers bad feelings, and the emotional brain is optimizing for the avoidance of bad feelings rather than for the avoidance of objectively observable circumstances. Thus, on the evolutionary level, brains are wired to avoid social disapproval because it is bad for survival; but on an algorithmic level, the emotional brain is wired to avoid social disapproval because social disapproval causes shame (presumably produced by some separate shame module).

So the concepts of impoliteness and shame become linked together to create a self-fulfilling prediction: experiencing yourself as impolite triggers shame through the link. The brain’s reinforcement learning machinery comes to anticipate this and starts developing strategies for avoiding that experience. This prediction of (impoliteness → shame) is reinforced each time when you experience or remember being impolite, and feel the corresponding emotion.

But when you experience the original painful memory from a place of Self, then you are recalling it without also feeling horrible. Similar to Richard noticing that confidence does not automatically lead to social rejection, this is counterevidence to the expectation of impoliteness always causing you to be shamed. This acts as a juxtaposition experience, eliminating the system’s need to compulsively avoid impoliteness… and in so doing, reconsolidates the concept of impoliteness so as to remove the link to feelings of shame, meaning that the concept will no longer trigger shame.
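To spell out the reasoning in toy form (a deliberately crude model of my own, with made-up numbers, and not a claim about the brain’s actual learning rule): think of the link as a single number tracking how much shame the “impolite” concept predicts, updated towards whatever is actually felt on each recall. An ordinary triggered recall confirms the prediction and leaves the link intact; recalling from Self supplies the mismatch that weakens it:

```python
# Predicted intensity of shame when the "impolite" concept is activated.
link_strength = 0.9
LEARNING_RATE = 0.5

def recall(felt_shame: float) -> None:
    """Nudge the impoliteness -> shame prediction towards what was actually felt."""
    global link_strength
    prediction_error = felt_shame - link_strength
    link_strength += LEARNING_RATE * prediction_error

recall(felt_shame=0.9)            # ordinary triggered recall: prediction confirmed
print(round(link_strength, 2))    # 0.9: the link is maintained

recall(felt_shame=0.1)            # recalled from a place of Self: expected shame largely absent
recall(felt_shame=0.1)            # the juxtaposition experience, repeated
print(round(link_strength, 2))    # 0.3: the link has substantially weakened
```

The real process is of course vastly more complicated, but this is the shape of the claim: the prediction is maintained by confirmation and weakened by disconfirmation.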

E.g. in my IFS training, it was remarked that when parts say that the result of them not doing their job properly would be some concrete external consequence, one should keep asking “and what would be so bad about that?”. In other words, keep asking until you get a concrete prediction of an emotion or feeling as a result; that’s the thing that the schema is really trying to avoid, not the “objective” consequence. When you get that prediction, you can activate it and provide it with counterevidence from the fact that you are accessing it from Self, and thus not feeling that horrible after all.

Now, I have had IFS sessions which seemed to involve “playacting” and seemed to go along a strong narrative arc, which I can’t really put in any sensible memory reconsolidation terms. It still seemed to do something, but didn’t feel super-effective. Probably another approach would have been more efficient there. So I am not claiming that IFS is always the best way to approach things: it’s probably effective for some things and less effective for others.

  • “Developing self-leadership” involves practicing a desired behavior so that it could override an old one; this is what Unlocking the Emotional Brain calls a counteractive strategy, and is fragile in all the ways that UtEB describes. It would be much more effective to just use a reconsolidation-based approach.

I wouldn’t think of developing Self-leadership as mostly a counteractive strategy. It does have some counteractive components, such as learning to unblend when triggered. But that’s mostly because any general strategy that you can use whenever you are triggered, before you have had a chance to use reconsolidative techniques or to learn enough about the schema in question to do so, is necessarily counteractive.

But the way I see it, the development of self-leadership is mainly reconsolidative as well. Take again UtEB’s model where a schema contains both a problem description and a strategy for solving it. In my interpretation, “negotiating for Self-leadership” with a part is something you do when you have a grasp of the part’s (schema’s) strategy for solving a problem, but haven’t yet had the opportunity to dig into the underlying problem and associated memories. (Possibly because you only learned of this particular schema when you were triggered ten seconds ago, or because this is tricky in some other way.) In that situation, you apply reconsolidation to the strategy—basically showing the schema that even if it is responding to a valid problem, you can deal with it better if you are allowed to remain in Self.

E.g. here’s an excerpt from Internal Family Systems Therapy (1st ed.) describing a negotiation for Self-leadership; I’ve added comments in bold.

I asked Nina to imagine that she was at the graveyard with her mother-in-law and to describe her thoughts and actions. She said that her mother-in-law was telling her stories about Tom between fits of sobbing. Superwoman [a part] had taken control of Nina and was giving the tearful woman platitudes, with barely concealed impatience and discomfort. She felt distant from her mother-in-law and knew that she was not being very helpful, but believed she had to rely on Superwoman to keep her from falling apart in that context. [Up until now, the procedure has been very similar to the Coherence Therapy example where Richard was asked to imagine being in a meeting and describe what he expected to happen. There’s a situation which triggers a part/​schema, and associated feelings and responses. Of course, CT didn’t use the “part” terminology, but that doesn’t make the essential content of the exercise different.]
I told Nina to talk with Superwoman and the other activated parts about letting her be her Self with her mother-in-law. They were to try not to interfere, and instead see how it went with Nina’s Self in the lead. The parts reluctantly agreed to watch instead of jumping in; Nina then imagined the scene at the cemetery again, but this time with the parts watching in the background instead of struggling to be in control. [Making your pre-existing implicit knowledge explicit by constructing a mental simulation which uses it; this technique featured in UtEB also.] Nina was amazed at how much more cooperative her parts were after she and they had done the internal work.
After a few minutes of silence, Nina reported that things had gone much better for both women. Nina and her mother-in-law had cried together, holding and stroking each other in commiseration. She felt closer than she ever had to the older woman, and was extremely relieved not to have to pretend to be so strong. [This makes for a juxtaposition experience: the prediction that Nina needs the Superwoman’s response to avoid falling apart, contrasted with the simulated result of actually being able to act better when in Self. As there is an update to the “Superwoman” schema’s prediction that its strategy being activated will produce better results, it becomes less likely to trigger.] I asked Nina how her parts reacted to this; she said that they seemed impressed and surprised that it went so well, although Superwoman was still skeptical. Nina decided to try to repeat this experiment in vivo that week, and I wished her luck. [...]
At the [next] session, Nina proudly reported that her trip to the cemetery had unfolded almost as she had envisioned: She and her mother-in-law were able to cry together. She was also surprised at how nurturing her mother-in-law was able to be, once Nina gave her the chance. [After the strategy component of a schema has been successfully reconsolidated, the new behavior becomes effortless; if necessary, this can be used as additional evidence to further reconsolidate either the same schema or different ones.]

IFS claims that as one does additional work, parts gradually “come to trust the Self” more, and it becomes easier to get parts to grant Self-leadership. My guess is that this is because you accumulate more and more evidence of it actually being a good idea to act out of Self, rather than from a triggered state. That will make it easier to find evidence to present to any schema which suggests acting otherwise.

Earlier pjeby said:

It’s far more efficient to delete the rules that trigger conflicting behavior before you try to learn self-leadership, so that you aren’t fighting your reinforced behaviors to do so.

And this is in fact also recommended in IFS; from the same discussion that I was quoting before:

Nina was amazed at how much more cooperative her parts were after she and they had done the internal work. [...] I find that this kind of in-sight (which I call the “Self-confidence technique”) is often useful for helping a client’s parts see that they can trust the Self. When parts let the Self lead in external situations, those situations are inevitably handled better than when a part leads. If parts refuse to allow the Self to lead, then their fears can be explored and addressed until they are willing to temporarily cede control to the Self.

In other words, if you can’t get Self-leadership to work, then dig into your schemas, see what they predict would go wrong, and do reconsolidation until it comes naturally.

  • IFS makes it hard to surface the assumptions behind behavior, because one is stuck in the frame of negotiating with mini-people inside one’s head, rather than looking at the underlying memories and assumptions. Possibly an experienced IFS therapist can help look for those assumptions, but then one might as well use a non-parts-based framework.

  • Even when the therapist does know what to look for, the fact that IFS does not have a direct model of evidence and counterevidence makes it hard to find the interventions which will actually trigger reconsolidation. Rather one just acts out various behaviors which may trigger reconsolidation if they happen to hit the right pattern.

As I’ve noted, I feel like IFS is already all about looking at the underlying memories and assumptions. This is certainly what my IFS training emphasized: keep asking questions which will reveal the assumptions of the protectors, and take you to those exiles. The “UI” of the parts is used as an experiential / felt-sense handle that leads back to the core of the schema.

It’s true that one might use a different framework as well, and sometimes that may have its benefits. But what makes IFS unique, I think, is that you don’t need to understand all the underlying assumptions of the schema: whatever they are, they bottom out at “and then I would feel terrible”, and if you can get to Self, you can just reconsolidate that assumption directly. And framing things in terms of parts seems to be useful in getting the person into Self, by asking parts to move aside.

  • Besides the issue with luck, IFS does not really have the concept of a schema which keeps interpreting behaviors in the light of its existing model, and thus filtering out all the counter-evidence that the playacting might otherwise have contained. A skilled IFS therapist who was paying attention might notice this happening and deal with it, but such a good therapist could probably get results with any therapy model that gave them sufficient freedom to notice and address the issue.

To finish this post off on an agreeable note: here I’m actually somewhat inclined to agree with pjeby. IFS as it’s taught does have something like this concept: it’s referred to as a “Self-like part”, i.e. a part which believes itself to be Self and tries to go through the motions of Self, but doesn’t actually get any real healing done. But even though IFS does have this concept, I’m not convinced it has very good tools for dealing with it. The main one that I heard of was simply to insist that the part in question is not actually Self, matching the therapist’s conviction and reassurance against the part’s until it moves aside and gives you access to the real Self. But I’m not sure what one is supposed to do if that doesn’t work.

Thanks to Maija Haavisto for comments on drafts of this article.