[Conversation Log] Compartmentalization

(7:40:37 PM) handoflixue: Had an odd thought recently, and am trying to see if I understand the idea of compartmentalization.
(7:41:08 PM) handoflixue: I’ve always acted in a way where, if I’m playing WOW, I roleplay an elf. If I’m at church, I roleplay a Unitarian. If I’m on LessWrong, I roleplay a rationalist.
(7:41:31 PM) handoflixue: And for the most part, these are three separate boxes. My elf is neither a rationalist nor a Unitarian, and I don’t apply the Litany of Tarski to church.
(7:41:49 PM) handoflixue: And I realized I’m *assuming* this is what people mean by compartmentalizing.
(7:42:11 PM) handoflixue: But I also had some *really* interesting assumptions about what people meant by religion and spirituality and such, so it’s probably smart to step back and check ^^
(7:43:45 PM) Adelene: I’m actually not sure what’s usually meant by the concept (which I don’t actually use), but that’s not the guess I came up with when you first asked, and I think mine works a little better.
(7:44:50 PM) handoflixue: Then I am glad I asked! :)
(7:45:24 PM) Adelene: My guess is something along the lines of this: Compartmentalizing is when one has several models of how the world works, which predict different things about the same situations, and uses arbitrary, social, or emotional methods rather than logical methods to decide which model to use where.
(7:46:54 PM) handoflixue: Ahhhh
(7:47:05 PM) handoflixue: So it’s not having different models, it’s being alogical about choosing which model to use?
(7:47:14 PM) Adelene: That’s my guess, yes.
(7:47:37 PM) Adelene: I do think that it’s specifically not just about having different behavioral habits in different situations.
(7:48:00 PM) Adelene: (Which is what I think you mean by ‘roleplay as’.)
(7:49:21 PM) handoflixue: It’s not *exactly* different situations, though. That’s just a convenient reference point, and the process that usually develops new modes. I can be an elf on LessWrong, or a rationalist WOW player, too.
(7:49:53 PM) Adelene: Also, with regards to the models model, some models don’t seem to be reliable at all from a logical standpoint, so it’s fairly safe to assume that someone who uses such a model in any situation is compartmentalizing.
(7:50:34 PM) handoflixue: But the goddess really does talk to me during rites >.>;
(7:51:16 PM) Adelene: …okay, maybe that’s not the best wording of that concept.
(7:51:33 PM) handoflixue: It’s a concept I tend to have trouble with, too, I’ll admit
(7:51:36 PM) handoflixue: I… mmm.
(7:51:56 PM) handoflixue: Eh :)
(7:52:18 PM) Adelene: I’m trying to get at a more ‘mainstream Christianity model’ type thing, with that—most Christians I’ve known don’t actually expect any kind of feedback at all from God.
(7:53:00 PM) Adelene: Whereas your model at least seems to make some useful predictions about your mindstates in response to certain stimuli.
(7:53:20 PM) handoflixue: .. but that would be stupid >.>
(7:53:26 PM) Adelene: eh?
(7:53:50 PM) handoflixue: If they don’t … get anything out of it, that would be stupid to do it o.o
(7:54:11 PM) Adelene: Oh, Christians? They get social stuff out of it.
(7:54:35 PM) handoflixue: *nods* So… it’s beneficial.
(7:54:46 PM) Adelene: But still compartment-ey.
(7:55:10 PM) Adelene: I listed ‘social’ among the reasons one might use an illogical model on purpose. :)
(7:55:25 PM) handoflixue: Hmmmm.
(7:56:05 PM) handoflixue: I wish I knew actual Christians I could ask about this ^^;
(7:56:22 PM) Adelene: They’re not hard to find, I hear. ^.-
(7:56:27 PM) handoflixue: … huh
(7:56:42 PM) handoflixue: Good point.
(7:57:12 PM) Adelene: Possibly of interest: I worked in a Roman Catholic nursing home (with actual nuns!) for four years.
(7:57:25 PM) handoflixue: Ooh, that is useful :)
(7:57:38 PM) handoflixue: I’d rather bug someone who doesn’t seem to object to my true motives :)
(7:58:00 PM) Adelene: Not that I talked to the nuns much, but there were some definite opportunities for information-gathering.
(7:58:27 PM) handoflixue: Mostly, mmm...
(7:58:34 PM) handoflixue: http://lesswrong.com/lw/1mh/that_magical_click/ Have you read this article?
(7:58:52 PM) Adelene: Not recently, but I remember the gist of it.
(7:59:05 PM) handoflixue: I’m trying to understand the idea of a mind that doesn’t click, and how compartmentalizing would somehow *block* that.
(7:59:15 PM) handoflixue: I dunno, the way normal people think baffles me
(7:59:28 PM) Adelene: *nodnods*
(7:59:30 PM) handoflixue: I assumed everyone was playing a really weird game until, um, a few months ago >.>
(7:59:58 PM) Adelene: heh
(8:00:29 PM) Adelene: *ponders not-clicking and compartmentalization*
(8:00:54 PM) handoflixue: It’s sort of… all the models I have of people make sense.
(8:00:58 PM) handoflixue: They have to make sense.
(8:01:22 PM) handoflixue: I can understand “Person A is Christian because it benefits them, and the cost of transitioning to a different state is unaffordably high, even if being an atheist would be a net gain”
(8:01:49 PM) Adelene: That’s seriously a simplification.
(8:02:00 PM) handoflixue: I’m sure it is ^^
(8:02:47 PM) handoflixue: But that’s a model I can understand, because it makes sense. And I can flesh it out in complex ways, such as adding the social penalty that goes into thinking about defecting, and the ick-field around defecting, and such. But it still models out about that way.
(8:02:58 PM) Adelene: Relevantly, they don’t know what the cost of transition actually would be, and they don’t know what the benefit would be.
(8:04:51 PM) handoflixue: Mmmm… really?
(8:05:03 PM) handoflixue: I think most people can at least roughly approximate the cost-of-transition
(8:05:19 PM) handoflixue: (“Oh, but I’d lose all my friends! I wouldn’t know WHAT to believe anymore”)
(8:05:20 PM) Adelene: And also I think most people know on some level that making a transition like that is not really voluntary in any sense once one starts considering it—it happens on a pre-conscious level, and it either does or doesn’t without the conscious mind having much say in it (though it can try to deny that the change has happened). So they avoid thinking about it at all unless they have a really good reason to.
(8:05:57 PM) handoflixue: There may be ways for them to mitigate that cost, that they’re unaware of (“make friends with an atheist programmers group”, “read the metaethics sequence”), but … that’s just ignorance and that makes sense ^^
(8:06:21 PM) Adelene: And what would the cost of those cost-mitigation things be?
(8:07:02 PM) handoflixue: Varies based on whether the person already knows an atheist programmers group I suppose? ^^
(8:07:26 PM) Adelene: Yep. And most people don’t, and don’t know what it would cost to find and join one.
(8:07:40 PM) handoflixue: The point was more “They can’t escape because of the cost, and while there are ways to buy-down that cost, people are usually ignor...
(8:07:41 PM) handoflixue: Ahhhh
(8:07:42 PM) handoflixue: Okay
(8:07:44 PM) handoflixue: Gotcha
(8:07:49 PM) handoflixue: Usually ignorant because *they aren’t looking*
(8:08:01 PM) handoflixue: They’re not laying down escape routes
(8:08:24 PM) Adelene: And why would they, when they’re not planning on escaping?
(8:09:28 PM) handoflixue: Because it’s just rational to seek to optimize your life, and you’d have to be stupid to think you’re living an optimum life?
(8:10:13 PM) Adelene: uhhhh… no, most people don’t think like that, basically at all.
(8:10:30 PM) handoflixue: Yeah, I know. I just don’t quite understand why not >.>
(8:10:54 PM) handoflixue: *ponders*
(8:11:02 PM) handoflixue: So compartmentalization is sorta… not thinking about things?
(8:11:18 PM) Adelene: That’s at least a major symptom, yeah.
(8:11:37 PM) handoflixue: Compartmentalization is when model A is never used in situation X
(8:12:17 PM) handoflixue: And, often, when model A is only used in situation Y
(8:12:22 PM) Adelene: And not because model A is specifically designed for situations of type Y, yes.
(8:12:39 PM) handoflixue: I’d rephrase that to “and not because model A is useless for X”
(8:13:06 PM) Adelene: mmm...
(8:13:08 PM) handoflixue: Quantum physics isn’t designed as an argument for cryonics, but Eliezer uses it that way.
(8:13:14 PM) Adelene: hold on a sec.
(8:13:16 PM) handoflixue: Kay
(8:16:01 PM) Adelene: The Christian model claims to be useful in lots of situations where it’s observably not. For example, a given person’s Christian model might say that if they pray, they’ll have a miraculous recovery from a disease. Their mainstream-society-memes model, on the other hand, says that going to see a doctor and getting treatment is the way to go. The Christian model is *observably* basically useless in that situation, but I’d still call that compartmentalization if they went with the mainstream-society-memes model but still claimed to primarily follow the Christian one.
(8:16:46 PM) handoflixue: Hmmm, interesting.
(8:16:51 PM) handoflixue: I always just called that “lying” >.>
(8:17:05 PM) handoflixue: (At least, if I’m understanding you right: They do X, claim it’s for Y reason, and it’s very obviously for Z)
(8:17:27 PM) handoflixue: (Lying-to-self quite possibly, but I still call that lying)
(8:18:00 PM) Adelene: No, no—in my narrative, they never claim that going to a doctor is the Christian thing to do—they just never bring Christianity up in that context.
(8:19:15 PM) handoflixue: Ahhh
(8:19:24 PM) handoflixue: So they’re being Selectively Christian?
(8:19:27 PM) Adelene: Yup.
(8:19:37 PM) handoflixue: But I play an elf, and an elf doesn’t invest in cryonics.
(8:20:09 PM) handoflixue: So it seems like that’s just… having two *different* modes.
(8:20:40 PM) Adelene: I don’t think that’s intrinsically a problem. The question is how you pick between them.
(8:22:08 PM) handoflixue: Our example Christian seems to be picking sensibly, though.
(8:22:11 PM) Adelene: In the contexts that you consider ‘elfy’, cryonics might actually not make sense. Or it might be replaced by something else—I bet your elf would snap up an amulet of ha-ha-you-can’t-kill-me, fr’ex.
(8:22:26 PM) handoflixue: Heeeh :)
(8:28:51 PM) Adelene: About the Christian example—yes, in that particular case they chose the model for logical reasons—the mainstream model is the logical one because it works, at least reasonably well. It’s implied that the person will use the Christian model at least sometimes, though. Say for example they wind up making poor financial decisions because ‘God will provide’, or something.
(8:29:48 PM) handoflixue: Heh ^^;
(8:29:55 PM) handoflixue: Okay, yeah, that one I’m guilty of >.>
(8:30:05 PM) handoflixue: (In my defense, it keeps *working*)
(8:30:10 PM) Adelene: (I appear to be out of my depth, now. Like I said, this isn’t a concept I use. I haven’t thought about it much.)
(8:30:22 PM) handoflixue: It’s been helpful to define a model for me.
(8:30:33 PM) Adelene: ^^
(8:30:50 PM) handoflixue: The idea that the mistake is not in having separate models, but in the application or lack thereof.
(8:31:07 PM) handoflixue: Sort of like how I don’t use quantum mechanics to do my taxes.
(8:31:14 PM) handoflixue: Useful model, wrong situation, not compartmentalization.
(8:31:28 PM) Adelene: *nods*
(8:32:09 PM) handoflixue: So, hmmmm.
(8:32:18 PM) handoflixue: One thing I’ve noticed in life is that having multiple models is useful
(8:32:32 PM) handoflixue: And one thing I’ve noticed with a lot of “rationalists” is that they seem not to follow that principle.
(8:33:15 PM) handoflixue: Does that make sense?
(8:33:24 PM) Adelene: *nods*
(8:34:13 PM) Adelene: That actually feels related.
(8:35:03 PM) Adelene: People want to think they know how things work, so when they find a tool that’s reasonably useful they tend to put more faith in it than it deserves.
(8:35:39 PM) Adelene: Getting burned a couple times seems to break that habit, but sufficiently smart people can avoid that lesson for a surprisingly long time.
(8:35:55 PM) Adelene: Well, sufficiently smart, sufficiently privileged people.
(8:37:15 PM) handoflixue: Heeeh, *nods*
(8:37:18 PM) handoflixue: I seem to … I dunno
(8:37:24 PM) handoflixue: I grew up on the multi-model mindset.
(8:37:41 PM) handoflixue: It’s… a very odd sort of difficult to try and comprehend that other people didn’t...
(8:37:47 PM) Adelene: *nods*
(8:38:47 PM) Adelene: A lot of people just altogether avoid things where their preferred model doesn’t work. I don’t think many LWers are badly guilty of that, but I do suspect that most LWers were raised by people who are.
(8:39:16 PM) handoflixue: Mmmmm...
(8:39:38 PM) handoflixue: I tend to get the feeling that the community consensus has trouble understanding “but this model genuinely WORKS for a person in this situation”
(8:39:58 PM) handoflixue: With some degree of… just not understanding that ideas are resources too, and they’re rather privileged there and in other ways.
(8:40:16 PM) Adelene: That is an interesting way of putting it and I like it.
(8:40:31 PM) handoflixue: Yaaay :)
(8:40:40 PM) Adelene: ^.^
(8:41:01 PM) Adelene: Hmm
(8:41:18 PM) Adelene: It occurs to me that compartmentalization might in a sense be a social form of one-boxing.
(8:41:41 PM) handoflixue: Heh! Go on :)
(8:42:01 PM) Adelene: “For signaling reasons, I follow model X in situation-class Y, even when the results are sub-optimal.”
(8:42:59 PM) handoflixue: Hmmmm.
(8:43:36 PM) handoflixue: Going back to previous, though, I think compartmentalization requires some degree of not being *aware* that you’re doing it.
(8:43:47 PM) Adelene: Humans are good at that.
(8:43:48 PM) handoflixue: So… what you said, exactly, but on a subconscious level
(8:43:53 PM) Adelene: *nodnods*
(8:44:00 PM) Adelene: I meant subconsciously.