Building up to an Internal Family Systems model


Internal Family Systems (IFS) is a psychotherapy school/technique/model which lends itself particularly well to being used alone or with a peer. For years, I had noticed that many of the kinds of people who put a lot of work into developing their emotional and communication skills, some within the rationalist community and some outside it, kept mentioning IFS.

So I looked at the Wikipedia page about the IFS model, and bounced off, since it sounded like nonsense to me. Then someone brought it up again, and I thought that maybe I should reconsider. So I looked at the WP page again, thought “nah, still nonsense”, and continued to ignore it.

This continued until I participated in CFAR mentorship training last September, and we had a class on CFAR’s Internal Double Crux (IDC) technique. IDC clicked really well for me, so I started using it a lot and also facilitating it for some friends. However, once we started using it on more emotional issues (as opposed to just things with empirical facts pointing in different directions), we started running into some weird things which it felt like IDC couldn’t quite handle… things which reminded me of how people had been describing IFS. So I finally read up on it, and have been successfully applying it ever since.

In this post, I’ll try to describe and motivate IFS in terms which are less likely to give people in this audience the same kind of “no, that’s nonsense” reaction as I initially had.

Epistemic status

This post is intended to give an argument for why something like the IFS model could be true and a thing that works. It’s not really an argument that IFS is correct. My reason for thinking in terms of IFS is simply that I was initially super-skeptical of it (more on the reasons for my skepticism later), but then started encountering things which it turned out IFS predicted—and I only found out about IFS predicting those things after I familiarized myself with it.

Additionally, I now feel that IFS gives me significantly more gears for understanding the behavior of both other people and myself, and it has been significantly transformative in addressing my own emotional issues. Several other people who I know report it having been similarly powerful for them. On the other hand, aside from a few isolated papers with titles like “proof-of-concept” or “pilot study”, there seems to be conspicuously little peer-reviewed evidence in favor of IFS, meaning that we should probably exercise some caution.

I think that, even if not completely correct, IFS is currently the best model that I have for explaining the observations that it’s pointing at. I encourage you to read this post in the style of learning soft skills—trying on this perspective, and seeing if there’s anything in the description which feels like it resonates with your experiences.

But before we talk about IFS, let’s first talk about building robots. It turns out that if we put together some existing ideas from machine learning and neuroscience, we can end up with a robot design that pretty closely resembles IFS’s model of the human mind.

What follows is an intentionally simplified story, which is simpler than either the full IFS model or a full account that would incorporate everything that I know about human brains. Its intent is to demonstrate that an agent architecture with IFS-style subagents might easily emerge from basic machine learning principles, without claiming that all the details of that toy model would exactly match human brains. A discussion of what exactly IFS does claim in the context of human brains follows after the robot story.

Wanted: a robot which avoids catastrophes

Suppose that we’re building a robot that we want to be generally intelligent. The hot thing these days seems to be deep reinforcement learning, so we decide to use that. The robot will explore its environment, try out various things, and gradually develop habits and preferences as it accumulates experience. (Just like those human babies.)

Now, there are some problems we need to address. For one, deep reinforcement learning works fine in simulated environments where you’re safe to explore for an indefinite duration. However, it runs into problems if the robot is supposed to learn in a real-life environment. Some actions which the robot might take will result in catastrophic consequences, such as it being damaged. If the robot is just doing things at random, it might end up damaging itself. Even worse, if the robot does something which could have been catastrophic but narrowly avoids harm, it might then forget about it and end up doing the same thing again!

How could we deal with this? Well, let’s look at the existing literature. Lipton et al. (2016) proposed what seems like a promising idea for addressing the part about forgetting. Their approach is to explicitly maintain a memory of danger states—situations which are not the catastrophic outcome itself, but from which the learner has previously ended up in a catastrophe. For instance, if “being burned by a hot stove” is a catastrophe, then “being about to poke your finger in the stove” is a danger state. Depending on how cautious we want to be and how many preceding states we want to include in our list of danger states, “going near the stove” and “seeing the stove” can also be danger states, though then we might end up with a seriously stove-phobic robot.

In any case, we maintain a separate storage of danger states, in such a way that the learner never forgets about them. We use this storage of danger states to train a fear model: a model which tries to predict the probability of ending up in a catastrophe from some given novel situation. For example, maybe our robot poked its robot finger at the stove in our kitchen, but poking its robot finger at stoves in other kitchens might be dangerous too. So we want the fear model to generalize from our stove to other stoves. On the other hand, we don’t want it to be stove-phobic and run away at the mere sight of a stove. The task of our fear model is to predict exactly how likely it is for the robot to end up in a catastrophe, given some situation it is in, and to make the robot increasingly disinclined to end up in the kinds of situations which might lead to a catastrophe.
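To make the idea concrete, here is a minimal Python sketch of a never-forgotten danger-state memory plus a fear model built on top of it. Everything here is my own illustrative invention (the class names, the set-based “situations”, the Jaccard-similarity scoring); the actual Lipton et al. setup trains a neural classifier, which similarity-to-remembered-danger-states is merely standing in for.

```python
class DangerMemory:
    """Permanent store of danger states: situations from which the learner
    has previously ended up in a catastrophe."""

    def __init__(self, horizon=1):
        self.horizon = horizon  # how many preceding situations also count as dangerous
        self.states = []        # unlike an ordinary replay buffer, never overwritten

    def record_catastrophe(self, trajectory):
        # Save the catastrophic situation plus the `horizon` situations before it.
        self.states.extend(trajectory[-(self.horizon + 1):])


class FearModel:
    """Estimates catastrophe probability for a novel situation as its
    similarity to the closest remembered danger state."""

    def __init__(self, memory):
        self.memory = memory

    def danger(self, situation):
        # A situation is a set of observed features, e.g. {"stove", "finger-extended"}.
        if not self.memory.states:
            return 0.0
        # Jaccard similarity to the nearest danger state.
        return max(len(situation & s) / len(situation | s) for s in self.memory.states)


memory = DangerMemory(horizon=1)
fear = FearModel(memory)

# The robot pokes its finger at the stove in our kitchen and registers damage.
memory.record_catastrophe([
    {"kitchen"},
    {"kitchen", "stove"},
    {"kitchen", "stove", "finger-extended"},
])

# The model generalizes: other kitchens' stoves look dangerous too...
print(fear.danger({"other-kitchen", "stove", "finger-extended"}))
# ...while unrelated situations don't trigger it at all.
print(fear.danger({"garden"}))
```

Note the trade-off the `horizon` parameter captures: include more preceding states and the robot becomes more cautious, at the risk of ending up stove-phobic.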

This sounds nice in theory. On the other hand, Lipton et al. are still assuming that they can train their learner in a simulated environment, and that they can label catastrophic states ahead of time. We don’t know in advance every possible catastrophe our robot might end up in—it might walk off a cliff, shoot itself in the foot with a laser gun, be beaten up by activists protesting technological unemployment, or any number of other possibilities.

So let’s take inspiration from humans. We can’t know beforehand every bad thing that might happen to our robot, but we can identify some classes of things which are correlated with catastrophe. For instance, being beaten or shooting itself in the foot will cause physical damage, so we can install sensors which indicate when the robot has taken physical damage. If these sensors—let’s call them “pain” sensors—register a high amount of damage, we consider the situation to have been catastrophic. When they do, we save that situation and the situations preceding it to our list of dangerous situations. Assuming that our robot has managed to make it out of that situation intact and can do anything in the first place, we use that list of dangerous situations to train up a fear model.

At this point, we notice that this is starting to remind us of our experience with humans. Take, for example, the infamous Little Albert experiment. A human baby was allowed to play with a laboratory rat, but each time that he saw the rat, a researcher made a loud scary sound behind his back. Soon Albert started getting scared whenever he saw the rat—and then he got scared of furry things in general.

Something like Albert’s behavior could be implemented very simply using something like Hebbian conditioning to get a learning algorithm which picks up on some features of the situation, and then triggers a panic reaction whenever it re-encounters those same features. For instance, it registers that the sight of fur and loud sounds tend to coincide, and then it triggers a fear reaction whenever it sees fur. This would be a basic fear model, and a “danger state” would be “seeing fur”.

Wanting to keep things simple, we decide to use this kind of an approach as the fear model of our robot. Also, having read Consciousness and the Brain, we remember a few basic principles about how those human brains work, which we decide to copy because we’re lazy and don’t want to come up with entirely new principles:

  • There’s a special network of neurons in the brain, called the global neuronal workspace. The contents of this workspace are roughly the same as the contents of consciousness.

  • We can thus consider consciousness a workspace which many different brain systems have access to. It can hold a single “chunk” of information at a time.

  • The brain has multiple different systems doing different things. When a mental object becomes conscious (that is, is projected into the workspace by a subsystem), many systems will synchronize their processing around analyzing and manipulating that mental object.

So here is our design:

  • The robot has a hardwired system scanning for signs of catastrophe. This system has several subcomponents. One of them scans the “pain” sensors for signs of physical damage. Another system watches the “hunger” sensors for signs of low battery.

  • Any of these “distress” systems can, alone or in combination, feed a negative reward signal into the global workspace. This tells the rest of the system that this is a bad state, from which the robot should escape.

  • If a certain threshold level of “distress” is reached, the current situation is designated as catastrophic. All other priorities are suspended and the robot will prioritize getting out of the situation. A memory of the situation and the situations preceding it is saved to a dedicated storage.

  • After the experience, the memory of the catastrophic situation is replayed in consciousness for analysis. This replay is used to train up a separate fear model which effectively acts as a new “distress” system.

  • As the robot walks around its environment, sensory information about the surroundings will enter its consciousness workspace. When it plans future actions, simulated sensory information about how those actions would unfold enters the workspace. Whenever the new fear model detects features in either kind of sensory information which it associates with the catastrophic events, it will feed “fear”-type “distress” into the consciousness workspace.

So if the robot sees things which remind it of poking at a hot stove, it will be inclined to go somewhere else; if it imagines doing something which would cause it to poke at the hot stove, then it will be inclined to imagine doing something else.
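The catastrophe-scanning part of the design above can be sketched as a simple control loop. The sensor scalings, the threshold value, and the function names are all made up for illustration; the point is just that any single distress system can unilaterally push the situation over the catastrophe threshold, triggering the save-and-escape behavior.

```python
CATASTROPHE_THRESHOLD = 0.8

def distress_signals(pain, battery):
    """Hardwired detectors: each maps raw sensor readings to a distress level in [0, 1]."""
    return {
        "pain": min(pain / 10.0, 1.0),              # physical-damage sensors
        "hunger": max(0.0, (0.2 - battery) / 0.2),  # low-battery sensor
    }

def step(pain, battery, recent_situations, danger_store):
    """One check of the catastrophe-scanning system."""
    signals = distress_signals(pain, battery)
    if max(signals.values()) >= CATASTROPHE_THRESHOLD:
        # Designate the situation as catastrophic: save it and the situations
        # preceding it to the dedicated storage, which is never forgotten.
        danger_store.extend(recent_situations)
        return "escape"        # all other priorities suspended
    return "continue"

danger_store = []
print(step(pain=9.0, battery=1.0, recent_situations=["near stove", "poking stove"],
           danger_store=danger_store))   # heavy damage: "escape"
print(step(pain=0.0, battery=1.0, recent_situations=["garden"],
           danger_store=danger_store))   # all quiet: "continue"
```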

In­tro­duc­ing managers

But is this actually enough? We’ve now basically set up an algorithm which warns the robot when it sees things which have previously preceded a bad outcome. This might be enough for dealing with static tasks, such as not burning yourself at a stove. But it seems insufficient for dealing with things like predators or technological unemployment protesters, who might show up in a wide variety of places and actively try to hunt you down. By the time you see a sign of them, you’re already in danger. It would be better if we could learn to avoid them entirely, so that the fear model would never even be triggered.

As we ponder this dilemma, we surf the web and run across this blog post summarizing Saunders, Sastry, Stuhlmüller & Evans (2017). They are also concerned with preventing reinforcement learning agents from running into catastrophes, but have a somewhat different approach. In their approach, a reinforcement learner is allowed to do different kinds of things, which a human overseer then allows or blocks. A separate “blocker” model is trained to predict which actions the human overseer would block. In the future, if the robot is ever about to take an action which the “blocker” predicts the human overseer would disallow, the blocker stops that action. In effect, the system consists of two separate subagents, one subagent trying to maximize rewards and the other subagent trying to block non-approved actions.

Since our robot has a nice modular architecture into which we can add various subagents which are listening in and taking actions, we decide to take inspiration from this idea. We create a system for spawning dedicated subprograms which try to predict and block actions which would cause the fear model to be triggered. In theory, this is unnecessary: given enough time, even standard reinforcement learning should learn to avoid the situations which trigger the fear model. But again, trial-and-error can take a very long time to learn exactly which situations trigger fear, so we dedicate a separate subprogram to the task of pre-emptively figuring it out.

Each fear model is paired with a subagent that we’ll call a manager. While the fear model has associated a bunch of cues with the notion of an impending catastrophe, the manager learns to predict which situations would cause the fear model to trigger. Despite sounding similar, these are not the same thing: one indicates when you are already in danger, the other is trying to figure out what you can do to never end up in danger in the first place. A fear model might learn to recognize signs which technological unemployment protesters commonly wear, whereas a manager might learn the kinds of environments where the fear model has noticed protesters before: for instance, near the protester HQ.

Then, if a manager predicts that a given action (such as going to the protester HQ) would eventually trigger the fear model, it will block that action and promote some other action. We can use the interaction of these subsystems to try to ensure that the robot only feels fear in situations which already resemble the catastrophic situation so much as to actually be dangerous. At the same time, the robot will be unafraid to take safe actions in situations from which it could end up in a danger zone, but which are themselves safe to be in.
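A manager of this kind can be sketched as a small wrapper that learns from experience which actions ended up triggering its paired fear model, and pre-emptively blocks them. As before, the class name, the averaging rule, and the threshold are my own illustrative choices rather than anything from the Saunders et al. paper.

```python
class Manager:
    """Learns which actions have tended to trigger a paired fear model,
    and pre-emptively blocks those actions in the future."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.outcomes = {}   # action -> fear activations observed after taking it

    def observe(self, action, fear_level):
        # After each action, record how strongly the fear model ended up firing.
        self.outcomes.setdefault(action, []).append(fear_level)

    def blocks(self, action):
        history = self.outcomes.get(action)
        if not history:
            return False     # unknown actions are allowed (that's how we learn)
        return sum(history) / len(history) >= self.threshold


manager = Manager()
manager.observe("walk past protester HQ", 1.0)  # fear model fired strongly
manager.observe("cook dinner", 0.0)             # no fear at all

print(manager.blocks("walk past protester HQ"))  # True: blocked before any danger is seen
print(manager.blocks("cook dinner"))             # False: still allowed
```

The same structure also covers the overseer-modeling case mentioned below: replace the fear activations with the overseer’s recorded disapprovals, and `blocks` becomes a model of what the overseer would forbid.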

As an added benefit, we can recycle the manager component to also do the same thing as the blocker component in the Saunders et al. paper originally did. That is, if the robot has a human overseer telling it in strict terms not to do some things, it can create a manager subprogram which models that overseer and likewise blocks the robot from doing things which the model predicts that the overseer would disapprove of.

Putting together a toy model

If the robot does end up in a situation where the fear model is sounding an alarm, then we want to get it out of the situation as quickly as possible. It may be worth spawning a specialized subroutine just for this purpose. Technological unemployment activists could, among other things, use flamethrowers that set the robot on fire. So let’s call these types of subprograms, dedicated to escaping from the danger zone, firefighters.

So how does the system as a whole work? First, the different subagents act by sending into the consciousness workspace various mental objects, such as an emotion of fear, or an intent to e.g. make breakfast. If several subagents are submitting identical mental objects, we say that they are voting for the same object. On each time-step, one of the submitted objects is chosen at random to become the contents of the workspace, with each object having a chance to be selected that’s proportional to its number of votes. If a mental object describing a physical action (an “intention”) ends up in the workspace and stays chosen for several time-steps, then that action gets executed by a motor subsystem.

Depending on the situation, some subagents will have more votes than others. E.g. a fear model submitting a fear object gets a number of votes proportional to how strongly it is activated. Besides the specialized subagents we’ve discussed, there’s also a default planning subagent, which is just taking whatever actions (that is, sending to the workspace whatever mental objects) it thinks will produce the greatest reward. This subagent only has a small number of votes.

Finally, there’s a self-narrative agent which is constructing a narrative of the robot’s actions as if it was a unified agent, for social purposes and for doing reasoning afterwards. After the motor system has taken an action, the self-narrative agent records this as something like “I, Robby the Robot, made breakfast by cooking eggs and bacon”, transmitting this statement to the workspace and saving it to an episodic memory store for future reference.

Con­se­quences of the model

Is this design any good? Let’s consider a few of its implications.

First, in order for the robot to take physical actions, the intent to do so has to be in its consciousness for a long enough time for the action to be taken. If there are any subagents that wish to prevent this from happening, they must muster enough votes to bring into consciousness some other mental object replacing that intention before it’s been around for enough time-steps to be executed by the motor system. (This is analogous to the concept of the final veto in humans, where consciousness is the last place to block pre-consciously initiated actions before they are taken.)

Second, the different subagents do not see each other directly: they only see the consequences of each other’s actions, as that’s what’s reflected in the contents of the workspace. In particular, the self-narrative agent has no access to information about which subagents were responsible for generating which physical action. It only sees the intentions which preceded the various actions, and the actions themselves. Thus it might easily end up constructing a narrative which creates the internal appearance of a single agent, even though the system is actually composed of multiple subagents.

Third, even if the subagents can’t directly see each other, they might still end up forming alliances. For example, if the robot is standing near the stove, a curiosity-driven subagent might propose poking at the stove (“I want to see if this causes us to burn ourselves again!”), while the default planning system might propose cooking dinner, since that’s what it predicts will please the human owner. Now, a manager trying to prevent a fear model agent from being activated will eventually learn that if it votes for the default planning system’s intention to cook dinner (which it saw earlier), then the curiosity-driven agent is less likely to get its intentions into consciousness. Thus, no poking at the stove, and the manager’s and the default planning system’s goals end up aligned.

Fourth, this design can make it really difficult for the robot to even become aware of the existence of some managers. A manager may learn to support any other mental processes which block the robot from taking specific actions. It does this by voting in favor of mental objects which orient behavior towards anything else. This might manifest as something subtle, such as a mysterious lack of interest in something that sounds like a good idea in principle, or just repeatedly forgetting to do something, as the robot always seems to get distracted by something else. The self-narrative agent, not having any idea of what’s going on, might just explain this as “Robby the Robot is forgetful sometimes” in its internal narrative.

Fifth, the default planning subagent here is doing something like rational planning, but given its weak voting power, it’s likely to be overruled if other subagents disagree with it (unless some subagents also agree with it). If some actions seem worth doing, but there are managers blocking them and the default planning subagent doesn’t have an explicit representation of them, this can manifest as all kinds of procrastinating behaviors and numerous failed attempts by the default planning system to “try to get itself to do something”, using various strategies. But as long as the managers keep blocking those actions, the system is likely to remain stuck.

Sixth, the purpose of both managers and firefighters is to keep the robot out of a situation that has been previously designated as dangerous. Managers do this by trying to pre-emptively block actions that would cause the fear model agent to activate; firefighters do this by trying to take actions which shut down the fear model agent after it has activated. But the fear model agent activating is not actually the same thing as being in a dangerous situation. Thus, both managers and firefighters may fall victim to Goodhart’s law, doing things which block the fear model while being irrelevant for escaping catastrophic situations.

For example, “thinking about the consequences of going to the activist HQ” is something that might activate the fear model agent, so a manager might try to block just thinking about it. This has the obvious consequence that the robot can’t think clearly about that issue. Similarly, once the fear model has already activated, a firefighter might Goodhart by supporting any action which helps activate an agent with a lot of voting power that’s going to think about something entirely different. This could result in compulsive behaviors which were effective at pushing the fear aside, but useless for achieving any of the robot’s actual aims.

At worst, this could cause loops of mutually activating subagents pushing in opposite directions. First, a stove-phobic robot runs away from the stove as it was about to make breakfast. Then a firefighter trying to suppress that fear causes the robot to get stuck looking at pictures of beautiful naked robots, which is engrossing and thus great for removing the fear of the stove. Then another fear model starts to activate, this one afraid of failure and of spending so much time looking at pictures of beautiful naked robots that the robot won’t accomplish its goal of making breakfast. A separate firefighter associated with this second fear model has learned that focusing the robot’s attention on the pictures of beautiful naked robots even more is the most effective action for keeping this new fear temporarily subdued. So the two firefighters are allied and temporarily successful at their goal, but then the first one—seeing that the original stove fear has disappeared—turns off. Without the first firefighter’s votes supporting the second firefighter, the fear manages to overwhelm the second firefighter, causing the robot to rush into making breakfast. This again activates its fear of the stove, but if the fear of failure remains strong enough, it might overpower the fear of the stove so that the robot manages to make breakfast in time...

Hmm. Maybe this design isn’t so great after all. Good thing we noticed these failure modes, so that there aren’t any mind architectures like this going around being vulnerable to them!

The Internal Family Systems model

But enough hypothetical robot design; let’s get to the topic of IFS. The IFS model hypothesizes the existence of three kinds of “extreme parts” in the human mind:

  • Exiles are said to be parts of the mind which hold the memory of past traumatic events, which the person did not have the resources to handle. They are parts of the psyche which have been split off from the rest and are frozen in the time of the traumatic event. When something causes them to surface, they tend to flood the mind with pain. For example, someone may have an exile associated with times when they were romantically rejected in the past.

  • Managers are parts that have been tasked with keeping the exiles permanently exiled from consciousness. They try to arrange a person’s life and psyche so that exiles never surface. For example, managers might keep someone from reaching out to potential dates due to a fear of rejection.

  • Firefighters react when exiles have been triggered, and try to either suppress the exile’s pain or distract the mind from it. For example, after someone has been rejected by a date, they might find themselves drinking in an attempt to numb the pain.

  • Some presentations of the IFS model simplify things by combining Managers and Firefighters into the broader category of Protectors, and so only talk about Exiles and Protectors.

Exiles are not limited to being created from the kinds of situations that we would commonly consider seriously traumatic. They can also be created from things like relatively minor childhood upsets, as long as the child didn’t feel like they could handle the situation.

IFS further claims that you can treat these parts as something like independent subpersonalities. You can communicate with them, consider their worries, and gradually persuade managers and firefighters to give you access to the exiles that have been kept away from consciousness. When you do this, you can show them that you are no longer in the situation which was catastrophic before, and now have the resources to handle it if something similar were to happen again. This heals the exile, and also lets the managers and firefighters assume better, healthier roles.

As I mentioned in the beginning, when I first heard about IFS, I was turned off by it for several different reasons. For instance, here were some of my thoughts at the time:

  1. The whole model about some parts of the mind being in pain, and other parts trying to suppress their suffering. The thing about exiles was framed in terms of a part of the mind splitting off in order to protect the rest of the mind against damage. What? That doesn’t make any evolutionary sense! A traumatic situation is just sensory information for the brain, it’s not literal brain damage: it wouldn’t have made any sense for minds to evolve in a way that caused parts of them to split off, forcing other parts of the mind to try to keep them suppressed. Why not just… never be damaged in the first place?

  2. That whole thing about parts being personalized characters that you could talk to. That… doesn’t describe anything in my experience.

  3. Also, how does just talking to yourself fix any trauma or deeply ingrained behaviors?

  4. IFS talks about everyone having a “True Self”. Quote from Wikipedia: “IFS also sees people as being whole, underneath this collection of parts. Everyone has a true self or spiritual center, known as the Self to distinguish it from the parts. Even people whose experience is dominated by parts have access to this Self and its healing qualities of curiosity, connectedness, compassion, and calmness. IFS sees the therapist’s job as helping the client to disentangle themselves from their parts and access the Self, which can then connect with each part and heal it, so that the parts can let go of their destructive roles and enter into a harmonious collaboration, led by the Self.” That… again did not sound particularly derived from any sensible psychology.

Hopefully, I’ve already answered my past self’s concerns about the first point. The model itself talks in terms of managers protecting the mind from pain, exiles being exiled from consciousness in order for their pain to remain suppressed, etc. Which is a reasonable description of the subjective experience of what happens. But the evolutionary logic—as far as I can guess—is slightly different: to keep us out of dangerous situations.

The story of the robot describes the actual “design rationale”. Exiles are in fact subagents which are “frozen in the time of a traumatic event”, but they didn’t split off to protect the rest of the mind from damage. Rather, they were created as an isolated memory block to ensure that the memory of the event wouldn’t be forgotten. Managers then exist to keep the person away from such catastrophic situations, and firefighters exist to help escape them. Unfortunately, this setup is vulnerable to various failure modes, similar to those that the robot is vulnerable to.

With that said, let’s tackle the remaining problems that I had with IFS.

Per­son­al­ized characters

IFS sug­gests that you can ex­pe­rience the ex­iles, man­agers and fire­fighters in your mind as some­thing akin to sub­per­son­al­ities—en­tities with their own names, vi­sual ap­pear­ances, prefer­ences, be­liefs, and so on. Fur­ther­more, this isn’t in­her­ently dys­func­tional, nor in­dica­tive of some­thing like Dis­so­ci­a­tive Iden­tity Di­sor­der. Rather, even peo­ple who are en­tirely healthy and nor­mal may ex­pe­rience this kind of “mul­ti­plic­ity”.

Now, it’s im­por­tant to note right off that not ev­ery­one has this to a ma­jor ex­tent: you don’t need to ex­pe­rience mul­ti­plic­ity in or­der for the IFS pro­cess to work. For in­stance, my parts feel more like bod­ily sen­sa­tions and shards of de­sire than sub­per­son­al­ities, but IFS still works su­per-well for me.

In the book In­ter­nal Fam­ily Sys­tems Ther­apy, Richard Schwartz, the de­vel­oper of IFS, notes that if a per­son’s sub­agents play well to­gether, then that per­son is likely to feel mostly in­ter­nally unified. On the other hand, if a per­son has lots of in­ter­nal con­flict, then they are more likely to ex­pe­rience them­selves as hav­ing mul­ti­ple parts with con­flict­ing de­sires.

I think that this makes a lot of sense, as­sum­ing the ex­is­tence of some­thing like a self-nar­ra­tive sub­agent. If you re­mem­ber, this is the part of the mind which looks at the ac­tions that the mind-sys­tem has taken, and then con­structs an ex­pla­na­tion for why those ac­tions were taken. (See e.g. the posts on the limits of in­tro­spec­tion and on the Apol­o­gist and the Revolu­tion­ary for pre­vi­ous ev­i­dence for the ex­is­tence of such a con­fab­u­lat­ing sub­agent with limited ac­cess to our true mo­ti­va­tions.) As long as all the ex­iles, man­agers and fire­fighters are func­tion­ing in a unified fash­ion, the most par­si­mo­nious model that the self-nar­ra­tive sub­agent might con­struct is sim­ply that of a unified self. But if the sys­tem keeps be­ing driven into strongly con­flict­ing be­hav­iors, then it can’t nec­es­sar­ily make sense of them from a sin­gle-agent per­spec­tive. Then it might nat­u­rally set­tle on some­thing like a mul­ti­a­gent ap­proach and ex­pe­rience it­self as be­ing split into parts.

Kevin Sim­ler, in Neu­rons Gone Wild, notes how peo­ple with strong ad­dic­tions seem par­tic­u­larly prone to de­vel­op­ing multi-agent nar­ra­tives:

This Amer­i­can Life did a nice seg­ment on ad­dic­tion a few years back, in which the pro­duc­ers — seem­ingly on a lark — asked peo­ple to per­son­ify their ad­dic­tions. “It was like peo­ple had been wait­ing all their lives for some­body to ask them this ques­tion,” said the pro­duc­ers, and they gushed forth with de­scrip­tions of the ‘voice’ of their in­ner ad­dict:
“The voice is ir­re­sistible, always. I’m in the thrall of that voice.”
“To­tally out of con­trol. It’s got this life of its own, and I can’t tame it any­more.”
“I ac­tu­ally have a name for the voice. I call it Stan. Stan is the guy who tells me to have the ex­tra glass of wine. Stan is the guy who tells me to smoke.”

This doesn't seem like it explains all of it, though. I've frequently been very dysfunctional, and have always found very intuitive the notion of the mind being split into parts. Yet I still don't seem to experience my subagents as being anywhere near as person-like as some others clearly do. I know at least one person who ended up finding IFS because of having all of these talking characters in their head, and who was looking for something that would help them make sense of it. Nothing like that has ever been the case for me: I did experience strongly conflicting desires, but they were just that, strongly conflicting desires.

I can only sur­mise that it has some­thing to do with the same kinds of differ­ences which cause some peo­ple to think mainly ver­bally, oth­ers mainly vi­su­ally, and oth­ers yet in some other hard-to-de­scribe modal­ity. Some fic­tion writ­ers spon­ta­neously ex­pe­rience their char­ac­ters as real peo­ple who speak to them and will even bother the writer when at the su­per­mar­ket, and some oth­ers don’t.

It's been noted that the mechanisms which we use to model ourselves overlap with the ones we use to model other people. This is not very surprising, since both we and other people are (presumably) humans. So it seems reasonable that some of the mechanisms for representing other people would sometimes also end up spontaneously recruited for representing internal subagents, or coalitions of them.

Why should this tech­nique be use­ful for psy­cholog­i­cal heal­ing?

Okay, suppose it's possible to access our subagents somehow. Why would just talking with these entities in your own head help you fix psychological issues?

Let's start by noting that having exiles, managers and firefighters is costly, in the sense of constraining a person's options. If you never want to do anything that would cause you to see a stove, that limits quite a bit of what you can do. I strongly suspect that many forms of procrastination, and of failing to do things we'd like to do, are mostly a manifestation of overactive managers. So it's important not to create these kinds of entities unless the situation really is one which should be designated as categorically unacceptable to end up in.

The theory behind IFS holds that not all painful situations turn into trauma: only the ones in which we felt helpless, lacking the necessary resources for dealing with the situation. This makes sense, since if we were capable of dealing with it, then the situation can't have been that catastrophic. The aftermath of the immediate event matters as well: a child who ends up in a painful situation doesn't necessarily end up traumatized, if they have an adult who can put the event in a reassuring context afterwards.

But situations which used to be catastrophic and impossible for us to handle before aren't necessarily catastrophic anymore. It seems important to have a mechanism for updating that cache of catastrophic events, and for disassembling the protections around it if the protections turn out to be unnecessary.

How does that pro­cess usu­ally hap­pen, with­out IFS or any other spe­cial­ized form of ther­apy?

Often, by talk­ing about your ex­pe­riences with some­one you trust. Or writ­ing about them in pri­vate or in a blog.

In my post about Con­scious­ness and the Brain, I men­tioned that once a men­tal ob­ject be­comes con­scious, many differ­ent brain sys­tems syn­chro­nize their pro­cess­ing around it. I sus­pect that the rea­son why many peo­ple have such a pow­er­ful urge to dis­cuss their trau­matic ex­pe­riences with some­one else, is that do­ing so is a way of bring­ing those mem­o­ries into con­scious­ness in de­tail. And once you’ve dug up your trau­matic mem­o­ries from their cache, their con­tent can be re-pro­cessed and re-eval­u­ated. If your brain judges that you now do have the re­sources to han­dle that event if you ever end up in it again, or if it’s some­thing that sim­ply can’t hap­pen any­more, then the mem­ory can be re­moved from the cache and you no longer need to avoid it.

I think it’s also sig­nifi­cant that, while some­thing like just writ­ing about a trau­matic event is some­times enough to heal, of­ten it’s more effec­tive if you have a sym­pa­thetic listener who you trust. Trau­mas of­ten in­volve some amount of shame: maybe you were called lazy as a kid and are still afraid of oth­ers think­ing that you are lazy. Here, hav­ing friends who ac­cept you and are will­ing to non­judg­men­tally listen while you talk about your is­sues, is by it­self an in­di­ca­tion that the thing that you used to be afraid of isn’t a dan­ger any­more: there ex­ist peo­ple who will stay by your side de­spite know­ing your se­cret.

Now, when you are talk­ing to a friend about your trau­matic mem­ory, you will be go­ing through cached mem­o­ries that have been stored in an ex­ile sub­agent. A spe­cific mem­ory cir­cuit—one of sev­eral cir­cuits spe­cial­ized for the act of hold­ing painful mem­o­ries—is ac­tive and out­putting its con­tents into the global workspace, from which they are be­ing turned into words.

Mean­ing that, in a sense, your friend is talk­ing di­rectly to your ex­ile.

Could you hack this pro­cess, so that you wouldn’t even need a friend, and could carry this pro­cess out en­tirely in­ter­nally?

In my ear­lier post, I re­marked that you could view lan­guage as a way of join­ing two peo­ple’s brains to­gether. A sub­agent in your brain out­puts some­thing that ap­pears in your con­scious­ness, you com­mu­ni­cate it to a friend, it ap­pears in their con­scious­ness, sub­agents in your friend’s brain ma­nipu­late the in­for­ma­tion some­how, and then they send it back to your con­scious­ness.

If you are telling your friend about your trauma, you are in a sense joining your workspaces together, and letting some subagents in your workspace communicate with the "sympathetic listener" subagents in your friend's workspace.

So why not let a "sympathetic listener" subagent in your own workspace hook up directly with the traumatized subagents that are also in that same workspace?

I think that some­thing like this hap­pens when you do IFS. You are us­ing a tech­nique de­signed to ac­ti­vate the rele­vant sub­agents in a very spe­cific way, which al­lows for this kind of a “hook­ing up” with­out need­ing an­other per­son.

For in­stance, sup­pose that you are talk­ing to a man­ager sub­agent which wants to hide the fact that you’re bad at some­thing, and starts re­act­ing defen­sively when­ever the topic is brought up. Now, one way by which its ac­ti­va­tion could man­i­fest, is feed­ing those defen­sive thoughts and re­ac­tions di­rectly into your workspace. In such a case, you would ex­pe­rience them as your own thoughts, and pos­si­bly as ob­jec­tively real. IFS calls this “blend­ing”; I’ve also pre­vi­ously used the term “cog­ni­tive fu­sion” for what’s es­sen­tially the same thing.

Instead of remaining blended, you then use various unblending / cognitive defusion techniques that highlight the way in which these thoughts and emotions are coming from a specific part of your mind. You could think of this as wrapping extra content around the thoughts and emotions, and then seeing them through the wrapper (which is obviously not-you), rather than experiencing the thoughts and emotions directly (and mistaking them for your own). For example, the IFS book Self-Therapy suggests this unblending technique (among others):

Allow a vi­sual image of the part [sub­agent] to arise. This will give you the sense of it as a sep­a­rate en­tity. This ap­proach is even more effec­tive if the part is clearly a cer­tain dis­tance away from you. The fur­ther away it is, the more sep­a­ra­tion this cre­ates.
Another way to ac­com­plish vi­sual sep­a­ra­tion is to draw or paint an image of the part. Or you can choose an ob­ject from your home that rep­re­sents the part for you or find an image of it in a mag­a­z­ine or on the In­ter­net. Hav­ing a con­crete to­ken of the part helps to cre­ate sep­a­ra­tion.

I think of this as something like taking the subagent in question, routing its responses through a visualization subsystem, and then seeing a talking fox or whatever. This is then a representation that your internal subsystems for talking with other people can respond to. You can have a dialogue with the part (verbally or otherwise) in a way where its responses are clearly labeled as coming from it, rather than being mixed together with all the other thoughts in the workspace. This lets the content coming from the sympathetic-listener subagent and the exile/manager/firefighter subagent be kept clearly apart, allowing you to consider the emotional content as an external listener would, rather than drowning in it. You're hacking your brain so as to work as the therapist and the client at the same time.

The Self

IFS claims that, be­low all the var­i­ous parts and sub­agents, there ex­ists a “true self” which you can learn to ac­cess. When you are in this Self, you ex­hibit the qual­ities of “calm­ness, cu­ri­os­ity, clar­ity, com­pas­sion, con­fi­dence, cre­ativity, courage, and con­nect­ed­ness”. Be­ing at least par­tially in Self is said to be a pre­req­ui­site for work­ing with your parts: if you are not, then you are not able to eval­u­ate their mod­els ob­jec­tively. The parts will sense this, and as a re­sult, they will not share their mod­els prop­erly, pre­vent­ing the kind of global re-eval­u­a­tion of their con­tents that would up­date them.

This was the part that I was ini­tially the most skep­ti­cal of, and which made me most fre­quently de­cide that IFS was not worth look­ing at. I could eas­ily con­cep­tu­al­ize the mind as be­ing made up of var­i­ous sub­agents. But then it would just be nu­mer­ous sub­agents all the way down, with­out any sin­gle one that could be des­ig­nated the “true” self.

But let's look at IFS's description of how exactly to get into Self. You check whether you seem to be blended with any part. If you are, you unblend from it. Then you check whether you might also be blended with some other part. If you are, you unblend from that one too. You keep doing this until you can find no part that you might be blended with. All that's left are those "eight Cs", which just seem to be a kind of global state, with no particular part that they would be coming from.

I now think that "being in Self" represents a state where no particular subagent is getting a disproportionate share of voting power, and everything is processed by the system as a whole. Remember that in the robot story, catastrophic states were situations in which the organism should never end up. A subagent kicking in to prevent that from happening is a kind of priority override to normal thinking. It blocks you from being open and calm and curious, because some subagent thinks that doing so would be dangerous. If you then turn off or suspend all those priority overrides, the mind's default state, absent any override, seems to be one with the qualities of the Self.

This ac­tu­ally fits at least one model of the func­tion of pos­i­tive emo­tions pretty well. Fredrick­son (1998) sug­gests that an im­por­tant func­tion of pos­i­tive emo­tions is to make us en­gage in ac­tivi­ties such as play, ex­plo­ra­tion, and sa­vor­ing the com­pany of other peo­ple. Do­ing these things has the effect of build­ing up skills, knowl­edge, so­cial con­nec­tions, and other kinds of re­sources which might be use­ful for us in the fu­ture. If there are no ac­tive on­go­ing threats, then that im­plies that the situ­a­tion is pretty safe for the time be­ing, mak­ing it rea­son­able to re­vert to a pos­i­tive state of be­ing open to ex­plo­ra­tion.

The Internal Family Systems Therapy book makes a somewhat big deal out of the fact that everyone, even most traumatized people, ultimately has a Self which they can access. It explains this in terms of the mind being organized to protect itself against damage, with parts always splitting off from the Self when it would otherwise be damaged. I think the real explanation is much simpler: the mind is not accumulating damage, it is just accumulating a longer and longer list of situations not considered safe.

As an aside, this model feels like it makes me less con­fused about con­fi­dence. It seems like peo­ple are re­ally at­tracted to con­fi­dent peo­ple, and that to some ex­tent it’s also pos­si­ble to fake con­fi­dence un­til it be­comes gen­uine. But if con­fi­dence is so at­trac­tive and we can fake it, why hasn’t evolu­tion just made ev­ery­one con­fi­dent by de­fault?

Turns out that it has. The rea­son why faked con­fi­dence grad­u­ally turns into gen­uine con­fi­dence is that by forc­ing your­self to act in con­fi­dent ways which felt dan­ger­ous be­fore, your mind gets in­for­ma­tion in­di­cat­ing that this be­hav­ior is not as dan­ger­ous as you origi­nally thought. That grad­u­ally turns off those pri­or­ity over­rides that kept you out of Self origi­nally, un­til you get there nat­u­rally.

The reason why being in Self is a requirement for doing IFS is the existence of conflicts between parts. For instance, recall the stove-phobic robot having a firefighter subagent that caused it to retreat from the stove into watching pictures of beautiful naked robots. This triggered another subagent, which was afraid that the naked-robot-watching would prevent the robot from achieving its goals. If the robot now tried to do IFS and talk with the firefighter subagent that caused it to run away from stoves, this might bring to mind content which activated the exile that was afraid of not achieving things. That exile would then keep flooding the mind with negative memories, trying to achieve its priority override of "we need to get out of this situation", and preventing the process from proceeding. Thus, all of the subagents that have strong opinions about the situation need to be unblended from before integration can proceed.

IFS also has a sep­a­rate con­cept of “Self-Lead­er­ship”. This is a pro­cess where var­i­ous sub­agents even­tu­ally come to trust the Self, so that they al­low the per­son to in­creas­ingly re­main in Self even in var­i­ous emer­gen­cies. IFS views this as a pos­i­tive de­vel­op­ment, not only be­cause it feels nice, but be­cause do­ing so means that the per­son will have more cog­ni­tive re­sources available for ac­tu­ally deal­ing with the emer­gency in ques­tion.

I think that this ties back to the origi­nal no­tion of sub­agents be­ing gen­er­ated to in­voke pri­or­ity over­rides for situ­a­tions which the per­son origi­nally didn’t have the re­sources to han­dle. Many of the sub­agents IFS talks about seem to emerge from child­hood ex­pe­riences. A child has many fewer cog­ni­tive, so­cial, and emo­tional re­sources for deal­ing with bad situ­a­tions, in which case it makes sense to just cat­e­gor­i­cally avoid them, and in­voke spe­cial over­rides to en­sure that this hap­pens. A child’s cog­ni­tive ca­pac­i­ties, mod­els of the world, and abil­ities to self-reg­u­late are also less de­vel­oped, so she may have a harder time stay­ing out of dan­ger­ous situ­a­tions with­out hav­ing some pri­or­ity over­rides built in. An adult, how­ever, typ­i­cally has many more re­sources than a child does. Even when faced with an emer­gency situ­a­tion, it can be much bet­ter to be able to re­main calm and an­a­lyze the situ­a­tion us­ing all of one’s sub­agents, rather than hav­ing a few of them take over all the de­ci­sion-mak­ing. Thus, it seems to me—both the­o­ret­i­cally and prac­ti­cally—that de­vel­op­ing Self-Lead­er­ship is re­ally valuable.

That said, I do not wish to im­ply that it would be a good goal to never have nega­tive emo­tions. Some­times blend­ing with a sub­agent, and ex­pe­rienc­ing re­sult­ing nega­tive emo­tions, is the right thing to do in that situ­a­tion. Rather than sup­press­ing nega­tive emo­tions en­tirely, Self-Lead­er­ship aims to get to a state where any emo­tional re­ac­tion tends to be en­dorsed by the mind-sys­tem as a whole. Thus, if feel­ing an­gry or sad or bit­ter or what­ever feels ap­pro­pri­ate to the situ­a­tion, you can let your­self feel so, and then give your­self to that emo­tion with­out re­sist­ing it. As a re­sult, nega­tive emo­tions be­come less un­pleas­ant to ex­pe­rience, since there are fewer sub­agents try­ing to fight against them. Also, if it turns out that be­ing in a nega­tive emo­tional state is no longer use­ful, the sys­tem as a whole can just choose to move back into Self.

Fi­nal words

I’ve now given a brief sum­mary of the IFS model, and ex­plained why I think it makes sense. This is of course not enough to es­tab­lish the model as true. But it might help in mak­ing the model plau­si­ble enough to at least try out.

I think that most peo­ple could benefit from learn­ing and do­ing IFS on them­selves, ei­ther alone or to­gether with a friend. I’ve been say­ing that ex­iles/​man­agers/​fire­fighters tend to be gen­er­ated from trauma, but it’s im­por­tant to re­al­ize that these events don’t need to be any­thing im­mensely trau­matic. The kinds of or­di­nary, nor­mal child­hood up­sets that ev­ery­one has had can gen­er­ate these kinds of sub­agents. Re­mem­ber, just be­cause you think of a child­hood event as triv­ial now, doesn’t mean that it felt triv­ial to you as a child. Do­ing IFS work, I’ve found ex­iles re­lated to mem­o­ries and events which I thought left no nega­tive traces, but ac­tu­ally did.

Re­mem­ber also that it can be re­ally hard to no­tice the pres­ence of some man­agers: if they are do­ing their job effec­tively, then you might never be­come aware of them di­rectly. “I don’t have any trauma so I wouldn’t benefit from do­ing IFS” isn’t nec­es­sar­ily cor­rect. Rather, the cues that I use for de­tect­ing a need to do in­ter­nal work are:

  • Do I have the qual­ities as­so­ci­ated with Self, or is some­thing block­ing them?

  • Do I feel like I’m ca­pa­ble of deal­ing with this situ­a­tion ra­tio­nally, and do­ing the things which feel like good ideas on an in­tel­lec­tual level?

  • Do my emo­tional re­ac­tions feel like they are en­dorsed by my mind-sys­tem as a whole, or is there a re­sis­tance to them?

If the answer to any of these suggests a problem, there is often some internal conflict which needs to be addressed. IFS, combined with some other practices such as Focusing and meditation, has been very useful in learning to solve those internal conflicts.

Even if you don’t feel con­vinced that do­ing IFS per­son­ally would be a good idea, I think adopt­ing its frame­work of ex­iles, man­agers and fire­fighters is use­ful for bet­ter un­der­stand­ing the be­hav­ior of other peo­ple. Their dy­nam­ics will be eas­ier to rec­og­nize in other peo­ple if you’ve had some ex­pe­rience rec­og­niz­ing them in your­self, how­ever.

If you want to learn more about IFS, I would recom­mend start­ing with Self-Ther­apy by Jay Ear­ley. In terms of What/​How/​Why books, my cur­rent sug­ges­tions would be:

This post was writ­ten as part of re­search sup­ported by the Foun­da­tional Re­search In­sti­tute. Thank you to ev­ery­one who pro­vided feed­back on ear­lier drafts of this ar­ti­cle: Eli Tyre, Eliz­a­beth Van Nos­trand, Jan Kul­veit, Juha Tör­mä­nen, Lumi Pakka­nen, Maija Haav­isto, Mar­cello Her­reshoff, Qiaochu Yuan, and Steve Omo­hun­dro.