[Conversation Log] Compartmentalization

(7:40:37 PM) handoflixue: Had an odd thought recently, and am trying to see if I understand the idea of compartmentalization.
(7:41:08 PM) handoflixue: I’ve always acted in a way, whereupon if I’m playing WOW, I roleplay an elf. If I’m at church, I roleplay a unitarian. If I’m on LessWrong, I roleplay a rationalist.
(7:41:31 PM) handoflixue: And for the most part, these are three separate boxes. My elf is not a rationalist nor a unitarian, and I don’t apply the Litany of Tarski to church.
(7:41:49 PM) handoflixue: And I realized I’m *assuming* this is what people mean by compartmentalizing.
(7:42:11 PM) handoflixue: But I also had some *really* interesting assumptions about what people meant by religion and spiritual and such, so it’s probably smart to step back and check ^^
(7:43:45 PM) Adelene: I’m actually not sure what’s usually meant by the concept (which I don’t actually use), but that’s not the guess I came up with when you first asked, and I think mine works a little better.
(7:44:50 PM) handoflixue: Then I am glad I asked! :)
(7:45:24 PM) Adelene: My guess is something along the lines of this: Compartmentalizing is when one has several models of how the world works, which predict different things about the same situations, and uses arbitrary, social, or emotional methods rather than logical methods to decide which model to use where.
(7:46:54 PM) handoflixue: Ahhhh
(7:47:05 PM) handoflixue: So it’s not having different models, it’s being alogical about choosing a method/
(7:47:08 PM) handoflixue: ?
(7:47:14 PM) Adelene: That’s my guess, yes.
(7:47:37 PM) Adelene: I do think that it’s specifically not just about having different behavioral habits in different situations.
(7:48:00 PM) Adelene: (Which is what I think you mean by ‘roleplay as’.)
(7:49:21 PM) handoflixue: It’s not *exactly* different situations, though. That’s just a convenient reference point, and the process that usually develops new modes. I can be an elf on LessWrong, or a rationalist WOW player, too.
(7:49:53 PM) Adelene: Also, with regards to the models model, some models don’t seem to be reliable at all from a logical standpoint, so it’s fairly safe to assume that someone who uses such a model in any situation is compartmentalizing.
(7:50:34 PM) handoflixue: But the goddess really does talk to me during rites >.>;
(7:51:16 PM) Adelene: …okay, maybe that’s not the best wording of that concept.
(7:51:33 PM) handoflixue: It’s a concept I tend to have trouble with, too, I’ll admit
(7:51:36 PM) handoflixue: I… mmm.
(7:51:56 PM) handoflixue: Eh :)
(7:52:18 PM) Adelene: I’m trying to get at a more ‘mainstream christianity model’ type thing, with that—most Christians I’ve known don’t actually expect any kind of feedback at all from God.
(7:53:00 PM) Adelene: Whereas your model at least seems to make some useful predictions about your mindstates in response to certain stimuli.
(7:53:20 PM) handoflixue: .. but that would be stupid >.>
(7:53:26 PM) Adelene: eh?
(7:53:50 PM) handoflixue: If they don’t … get anything out of it, it would be stupid to do it o.o
(7:54:11 PM) Adelene: Oh, Christians? They get social stuff out of it.
(7:54:35 PM) handoflixue: *nods* So… it’s beneficial.
(7:54:46 PM) Adelene: But still compartment-ey.
(7:55:10 PM) Adelene: I listed ‘social’ in the reasons one might use an illogical model on purpose. :)
(7:55:25 PM) handoflixue: Hmmmm.
(7:56:05 PM) handoflixue: I wish I knew actual Christians I could ask about this ^^;
(7:56:22 PM) Adelene: They’re not hard to find, I hear. ^.-
(7:56:27 PM) handoflixue: … huh
(7:56:42 PM) handoflixue: Good point.
(7:57:12 PM) Adelene: Possibly of interest: I worked in a Roman Catholic nursing home—with actual nuns!—for four years.
(7:57:25 PM) handoflixue: Ooh, that is useful :)
(7:57:38 PM) handoflixue: I’d rather bug someone who doesn’t seem to object to my true motives :)
(7:58:00 PM) Adelene: Not that I talked to the nuns much, but there were some definite opportunities for information-gathering.
(7:58:27 PM) handoflixue: Mostly, mmm...
(7:58:34 PM) handoflixue: http://lesswrong.com/lw/1mh/that_magical_click/ Have you read this article?
(7:58:52 PM) Adelene: Not recently, but I remember the gist of it.
(7:59:05 PM) handoflixue: I’m trying to understand the idea of a mind that doesn’t click, and I’m trying to understand the idea of how compartmentalizing would somehow *block* that.
(7:59:15 PM) handoflixue: I dunno, the way normal people think baffles me
(7:59:28 PM) Adelene: *nodnods*
(7:59:30 PM) handoflixue: I assumed everyone was playing a really weird game until, um, a few months ago >.>
(7:59:58 PM) Adelene: heh
(8:00:29 PM) Adelene: *ponders not-clicking and compartmentalization*
(8:00:54 PM) handoflixue: It’s sort of… all the models I have of people make sense.
(8:00:58 PM) handoflixue: They have to make sense.
(8:01:22 PM) handoflixue: I can understand “Person A is Christian because it benefits them, and the cost of transitioning to a different state is unaffordably high, even if being Atheist would be a net gain”
(8:01:49 PM) Adelene: That’s seriously a simplification.
(8:02:00 PM) handoflixue: I’m sure it is ^^
(8:02:47 PM) handoflixue: But that’s a model I can understand, because it makes sense. And I can flesh it out in complex ways, such as adding the social penalty that goes into thinking about defecting, and the ick-field around defecting, and such. But it still models out about that way.
(8:02:58 PM) Adelene: Relevantly, they don’t know what the cost of transition actually would be, and they don’t know what the benefit would be.
(8:04:51 PM) handoflixue: Mmmm… really?
(8:05:03 PM) handoflixue: I think most people can at least roughly approximate the cost-of-transition
(8:05:19 PM) handoflixue: (“Oh, but I’d lose all my friends! I wouldn’t know WHAT to believe anymore”)
(8:05:20 PM) Adelene: And also I think most people know on some level that making a transition like that is not really voluntary in any sense once one starts considering it—it happens on a pre-conscious level, and it either does or doesn’t without the conscious mind having much say in it (though it can try to deny that the change has happened). So they avoid thinking about it at all unless they have a really good reason to.
(8:05:57 PM) handoflixue: There may be ways for them to mitigate that cost, that they’re unaware of (“make friends with an atheist programmers group”, “read the metaethics sequence”), but … that’s just ignorance and that makes sense ^^
(8:06:21 PM) Adelene: And what would the cost of those cost-mitigation things be?
(8:07:02 PM) handoflixue: Varies based on whether the person already knows an atheist programmers group I suppose? ^^
(8:07:26 PM) Adelene: Yep. And most people don’t, and don’t know what it would cost to find and join one.
(8:07:40 PM) handoflixue: The point was more “They can’t escape because of the cost, and while there are ways to buy-down that cost, people are usually ignor...
(8:07:41 PM) handoflixue: Ahhhh
(8:07:42 PM) handoflixue: Okay
(8:07:44 PM) handoflixue: Gotcha
(8:07:49 PM) handoflixue: Usually ignorant because *they aren’t looking*
(8:08:01 PM) handoflixue: They’re not laying down escape routes
(8:08:24 PM) Adelene: And why would they, when they’re not planning on escaping?
(8:09:28 PM) handoflixue: Because it’s just rational to seek to optimize your life, and you’d have to be stupid to think you’re living an optimum life?
(8:10:13 PM) Adelene: uhhhh… no, most people don’t think like that, basically at all.
(8:10:30 PM) handoflixue: Yeah, I know. I just don’t quite understand why not >.>
(8:10:54 PM) handoflixue: *ponders*
(8:11:02 PM) handoflixue: So compartmentalization is sorta… not thinking about things?
(8:11:18 PM) Adelene: That’s at least a major symptom, yeah.
(8:11:37 PM) handoflixue: Compartmentalization is when model A is never used in situation X
(8:12:17 PM) handoflixue: And, often, when model A is only used in situation Y
(8:12:22 PM) Adelene: And not because model A is specifically designed for situations of type Y, yes.
(8:12:39 PM) handoflixue: I’d rephrase that to “and not because model A is useless for X”
(8:13:06 PM) Adelene: mmm...
(8:13:08 PM) handoflixue: Quantum physics isn’t designed as an argument for cryonics, but Eliezer uses it that way.
(8:13:14 PM) Adelene: hold on a sec.
(8:13:16 PM) handoflixue: Kay
(8:16:01 PM) Adelene: The Christian model claims to be useful in lots of situations where it’s observably not. For example, a given person’s Christian model might say that if they pray, they’ll have a miraculous recovery from a disease. Their mainstream-society-memes model, on the other hand, says that going to see a doctor and getting treatment is the way to go. The Christian model is *observably* basically useless in that situation, but I’d still call that compartmentalization if they went with the mainstream-society-memes model but still claimed to primarily follow the Christian one.
(8:16:46 PM) handoflixue: Hmmm, interesting.
(8:16:51 PM) handoflixue: I always just called that “lying” >.>
(8:17:05 PM) handoflixue: (At least, if I’m understanding you right: They do X, claim it’s for Y reason, and it’s very obviously for Z)
(8:17:27 PM) handoflixue: (Lying-to-self quite possibly, but I still call that lying)
(8:18:00 PM) Adelene: No, no—in my narrative, they never claim that going to a doctor is the Christian thing to do—they just never bring Christianity up in that context.
(8:19:15 PM) handoflixue: Ahhh
(8:19:24 PM) handoflixue: So they’re being Selectively Christian?
(8:19:27 PM) Adelene: Yup.
(8:19:37 PM) handoflixue: But I play an elf, and an elf doesn’t invest in cryonics.
(8:20:09 PM) handoflixue: So it seems like that’s just… having two *different* modes.
(8:20:40 PM) Adelene: I don’t think that’s intrinsically a problem. The question is how you pick between them.
(8:22:08 PM) handoflixue: Our example Christian seems to be picking sensibly, though.
(8:22:11 PM) Adelene: In the contexts that you consider ‘elfy’, cryonics might actually not make sense. Or it might be replaced by something else—I bet your elf would snap up an amulet of ha-ha-you-can’t-kill-me, fr’ex.
(8:22:26 PM) handoflixue: Heeeh :)
(8:28:51 PM) Adelene: About the Christian example—yes, in that particular case they chose the model for logical reasons—the mainstream model is the logical one because it works, at least reasonably well. It’s implied that the person will use the Christian model at least sometimes, though. Say for example they wind up making poor financial decisions because ‘God will provide’, or something.
(8:29:48 PM) handoflixue: Heh ^^;
(8:29:55 PM) handoflixue: Okay, yeah, that one I’m guilty of >.>
(8:30:05 PM) handoflixue: (In my defense, it keeps *working*)
(8:30:10 PM) Adelene: (I appear to be out of my depth, now. Like I said, this isn’t a concept I use. I haven’t thought about it much.)
(8:30:22 PM) handoflixue: It’s been helpful to define a model for me.
(8:30:33 PM) Adelene: ^^
(8:30:50 PM) handoflixue: The idea that the mistake is not having separate models, but in the application or lack thereof.
(8:31:07 PM) handoflixue: Sort of like how I don’t use quantum mechanics to do my taxes.
(8:31:14 PM) handoflixue: Useful model, wrong situation, not compartmentalization.
(8:31:28 PM) Adelene: *nods*
(8:32:09 PM) handoflixue: So, hmmmm.
(8:32:18 PM) handoflixue: One thing I’ve noticed in life is that having multiple models is useful
(8:32:32 PM) handoflixue: And one thing I’ve noticed with a lot of “rationalists” is that they seem not to follow that principle.
(8:33:15 PM) handoflixue: Does that make sense?
(8:33:24 PM) Adelene: *nods*
(8:34:13 PM) Adelene: That actually feels related.
(8:35:03 PM) Adelene: People want to think they know how things work, so when they find a tool that’s reasonably useful they tend to put more faith in it than it deserves.
(8:35:39 PM) Adelene: Getting burned a couple times seems to break that habit, but sufficiently smart people can avoid that lesson for a surprisingly long time.
(8:35:55 PM) Adelene: Well, sufficiently smart, sufficiently privileged people.
(8:37:15 PM) handoflixue: Heeeh, *nods*
(8:37:18 PM) handoflixue: I seem to … I dunno
(8:37:24 PM) handoflixue: I grew up on the multi-model mindset.
(8:37:41 PM) handoflixue: It’s… a very odd sort of difficult to try and comprehend that other people didn’t...
(8:37:47 PM) Adelene: *nods*
(8:38:47 PM) Adelene: A lot of people just avoid things where their preferred model doesn’t work altogether. I don’t think many LWers are badly guilty of that, but I do suspect that most LWers were raised by people who are.
(8:39:16 PM) handoflixue: Mmmmm...
(8:39:38 PM) handoflixue: I tend to get the feeling that the community-consensus has trouble understanding “but this model genuinely WORKS for a person in this situation”
(8:39:58 PM) handoflixue: With some degree of… just not understanding that ideas are resources too, and they’re rather privileged there and in other ways.
(8:40:16 PM) Adelene: That is an interesting way of putting it and I like it.
(8:40:31 PM) handoflixue: Yaaay :)
(8:40:40 PM) Adelene: ^.^
(8:41:01 PM) Adelene: Hmm
(8:41:18 PM) Adelene: It occurs to me that compartmentalization might in a sense be a social form of one-boxing.
(8:41:41 PM) handoflixue: Heh! Go on :)
(8:42:01 PM) Adelene: “For signaling reasons, I follow model X in situation-class Y, even when the results are sub-optimal.”
(8:42:59 PM) handoflixue: Hmmmm.
(8:43:36 PM) handoflixue: Going back to previous, though, I think compartmentalization requires some degree of not being *aware* that you’re doing it.
(8:43:47 PM) Adelene: Humans are good at that.
(8:43:48 PM) handoflixue: So… what you said, exactly, but on a subconscious level
(8:43:53 PM) Adelene: *nodnods*
(8:44:00 PM) Adelene: I meant subconsciously.