Compartmentalization as a passive phenomenon

We commonly discuss compartmentalization as if it were an active process, something you do. Eliezer suspected his altruism, as well as some people's "clicking", was due to a "failure to compartmentalize". Morendil discussed compartmentalization as something to avoid. But I suspect compartmentalization might actually be the natural state, the one that requires effort to overcome.

I started thinking about this when I encountered an article claiming that the average American does not know the answer to the following question:

If a pen is dropped on a moon, will it:
A) Float away
B) Float where it is
C) Fall to the surface of the moon

Now, I have to admit that the correct answer wasn't obvious to me at first. I thought about it for a moment, and almost settled on B. After all, there isn't much gravity on the moon, and a pen is so light that it might just be unaffected. It was only then that I remembered that the astronauts had walked on the surface of the moon without trouble. Once I remembered that piece of knowledge, I was able to deduce that the pen quite probably would fall.

A link on that page brought me to another article. This one described two students randomly calling 30 people and asking them the question above. 47 percent of them got the question correct, but what was interesting was that those who got it wrong were asked a follow-up question: "You've seen films of the APOLLO astronauts walking around on the Moon, why didn't they fall off?" Of those who heard it, about 20 percent changed their answer, but about half confidently replied, "Because they were wearing heavy boots".

While these articles described totally unscientific surveys, it doesn't seem to me like this would be the result of an active process of compartmentalization. I don't think my mind first knew that pens would fall down because of gravity, but quickly hid that knowledge from my conscious awareness until I was able to overcome the block. What would be the point in that? Rather, it seems to indicate that my "compartmentalization" was simply a lack of a connection, and that such connections are much harder to draw than we might assume.

The world is a complicated place. One of the reasons we don't have AI yet is that we haven't found very many reliable cross-domain reasoning rules. Reasoning algorithms in general are quickly subject to a combinatorial explosion: the reasoning system might know which potential inferences are valid ones, but not which ones are meaningful in any useful sense. Most current-day AI systems need to be more or less fine-tuned, or rebuilt entirely, when they're made to reason in a domain they weren't originally built for.
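To make that explosion concrete, here is a minimal sketch in Python. It's a toy illustration of the general point, not any actual AI system: a naive reasoner that treats every pairing of known facts as a licensed new inference. Every derived fact is formally valid, but the system has no way to tell which ones matter, and the count blows up within a few rounds.

    from itertools import combinations

    # A toy "reasoner": any two known facts may be conjoined into a new
    # fact. Every such inference is formally valid, but the system has
    # no notion of which conclusions are meaningful.
    facts = {"A", "B", "C", "D"}

    for round_no in range(3):
        new = {f"({x}&{y})" for x, y in combinations(sorted(facts), 2)}
        facts |= new
        print(f"after round {round_no + 1}: {len(facts)} facts")

    # after round 1: 10 facts
    # after round 2: 49 facts
    # after round 3: 1180 facts

The fact count roughly squares each round. A real reasoner faces the same growth across every candidate inference rule, which is why unrestricted cross-domain inference is intractable without strong heuristics about which connections are worth pursuing.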

For humans, it can be even worse than that. Many of the basic tenets in a variety of fields are counter-intuitive, or are intuitive but have counter-intuitive consequences. The universe isn't actually fully arbitrary, but for somebody who doesn't know how all the rules add up, it might as well be. Think of all the times when somebody has tried to reason using surface analogies, mistaking them for deep causes; or dismissed a deep cause, mistaking it for a surface analogy. Somebody might present us with a connection between two domains, but we have no sure way of testing the validity of that connection.

Much of our reasoning, I suspect, is actually pattern recognition. We initially have no idea of the connection between X and Y, but then we see X and Y occur frequently together, and we begin to think of the connection as an "obvious" one. For those well-versed in physics, it seems mind-numbingly bizarre to hear someone claim that the Moon's gravity isn't enough to affect a pen, but is enough to affect people wearing heavy boots. But for some hypothetical person who hasn't studied much physics… or screw the hypotheticals: for me, this sounds wrong but not obviously and completely wrong. I mean, "the pen has less mass, so there's less stuff for gravity to affect" sounds intuitively sorta-plausible to me, because I haven't had enough exposure to formal physics to hammer in the right intuition.
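For the record, here is why that intuition fails: the gravitational force on an object is proportional to the object's own mass, so the mass cancels out of its acceleration. Near the Moon's surface,

\[ a = \frac{F}{m} = \frac{1}{m} \cdot \frac{GMm}{r^2} = \frac{GM}{r^2} \approx 1.62\ \mathrm{m/s^2}, \]

where M and r are the Moon's mass and radius. The falling object's mass m drops out entirely, so a pen and a booted astronaut fall at exactly the same rate, about a sixth of Earth's 9.8 m/s². But that cancellation is itself a connection you have to have drawn at some point.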

I suspect that often when we say "(s)he's compartmentalizing!", we're operating in a domain that's more familiar to us, and thus it feels like an active attempt to keep things separate must be the cause. After all, how could they not see it, were they not actively keeping it compartmentalized?

So my theory is that much of compartmentalization happens simply because the search space is so large that people don't end up seeing that there might be a connection between two domains. Even if they do see the potential, or have it explicitly pointed out to them, they might still not know enough about the domain in question (as in the example of heavy boots), or they might find the proposed connection implausible. If you don't know which cross-domain rules and reasoning patterns are valid, then building up a separate set of rules for each domain is the safe approach. Discarding as much of your previous knowledge as possible when learning about a new thing is slow, but it at least guarantees that you're not polluted by existing incorrect information. Build your theories primarily on evidence found from a single domain, and they will be true within that domain. While there can certainly also be situations calling for an active process of compartmentalization, that might only happen in a minority of cases.