How many people am I?

Strongly related: the Ebborians

Imagine mapping my brain into two interpenetrating networks. For each brain cell, half of it goes to one map and half to the other. For each connection between cells, half of each connection goes to one map and half to the other. We can call these two mapped-out halves Manfred One and Manfred Two. Because neurons are classical, both of these maps change together as I think. They contain the full pattern of my thoughts. (This situation is even clearer in the Ebborians, who can literally split down the middle.)
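To make the construction concrete, here is a minimal sketch (the tanh update rule, the sizes, and all names are illustrative stand-ins, not a model of real neurons):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
W = rng.normal(size=(n, n)) / np.sqrt(n)  # toy connection strengths
x = rng.normal(size=n)                    # the brain's total activity pattern

# Each map gets half of every cell and half of every connection.
x1, x2 = x / 2, x / 2

for _ in range(10):
    x = np.tanh(W @ x)       # the one physical brain updates classically...
    x1, x2 = x / 2, x / 2    # ...so both half-maps change together with it

# Either half, rescaled, is the full pattern of thought:
assert np.allclose(2 * x1, x) and np.allclose(2 * x2, x)
```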

So how many people am I? Are Manfred One and Manfred Two both people? Of course, once we have two, why stop there: are there thousands of Manfreds in here, with "me" as only one of them? Put like that it sounds a little overwrought. What's really going on here is the question of what physical system corresponds to "I" in English statements like "I wake up." This may matter.

The impact on anthropic probabilities is somewhat straightforward. With everyday definitions of "I wake up," I wake up just once per day no matter how big my head is. But if the "I" in that sentence is some constant-size physical pattern, then "I wake up" is an event that happens more times if my head is bigger. And so, using the variable people-number definition, I expect to wake up with a gigantic head.
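As a toy calculation (the 50/50 prior and the number N are made up for illustration): suppose equal prior odds that my head is big, holding N constant-size pattern-instances, or normal, holding one. Then the person-weighted posterior is

$$P(\text{big} \mid \text{I wake up}) = \frac{\tfrac12 \cdot N}{\tfrac12 \cdot N + \tfrac12 \cdot 1} = \frac{N}{N+1},$$

which goes to 1 as N grows, while the everyday one-waking-per-body definition keeps it at 1/2.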

The impact on decisions is smaller. If I'm in this head with a bunch of other Manfreds, we're all on the same page; it's a non-anthropic problem of coordinated decision-making. For example, if I were to make any monetary bets about my head size, and then donate the profits to charity, then no matter what definition I'm using, I should bet as if my head size didn't affect anthropic probabilities. So to some extent the real point of this effect is that it is a way anthropic probabilities can be ill-defined. On the other hand, what about preferences that depend directly on person-numbers, like how to value people with different head sizes? Or, for vegetarians: should we care more about cows than chickens, because each cow is more animals than a chicken is?
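To see why the bets wash out (a toy example with hypothetical even-odds stakes): say I stake b dollars on my head being big, winnings to charity. However many Manfreds concur in placing the bet, the big-head world still contains one body making one bet, so the charity's expected gain under the world-probabilities is

$$\tfrac12 \cdot (+b) + \tfrac12 \cdot (-b) = 0.$$

The person-weighted probability N/(N+1) would justify a different bet only if each of the N Manfreds could collect separately; since they all steer the same hand, the factor of N in the probability is cancelled by each Manfred's 1/N share of the payout.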

According to my common sense, it seems like my body has just one person in it. Why does my common sense think that? I think there are two answers, one unhelpful and one helpful.

The first answer is evolution. Having kids is an action that's independent of what physical system we identify with "I," and so my ancestors never found it useful to model their bodies as containing multiple people.

The second answer is causality. Manfred One and Manfred Two are causally distinct from two copies of me in separate bodies, even when the input/output behavior is the same. If a difference between the two separated copies somehow arose (reminiscent of Dennett's factual account), the two bodies would henceforth do and say different things and have different brain states. But if some difference arises between Manfred One and Manfred Two, it is erased by diffusion.
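Here is a minimal simulation sketch of that asymmetry (again assuming toy tanh dynamics; the diffusion is exaggerated to complete mixing per step, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
W = rng.normal(size=(n, n)) * 1.5 / np.sqrt(n)  # shared toy connectivity

def step(x):
    """One toy neural update (illustrative, not a model of real neurons)."""
    return np.tanh(W @ x)

x0 = rng.normal(size=n)

# Case 1: two copies of me in separate bodies. A tiny difference, once
# introduced, persists, and the two brains go on to diverge.
a, b = x0.copy(), x0.copy()
b[0] += 0.01
for _ in range(50):
    a, b = step(a), step(b)
print("separated copies differ by", np.linalg.norm(a - b))

# Case 2: interpenetrating half-maps in one body. They share the same
# physical medium, so diffusion keeps re-averaging them.
x1, x2 = x0.copy(), x0.copy()
x2[0] += 0.01
for _ in range(50):
    mixed = (x1 + x2) / 2          # diffusion erases any difference...
    x1, x2 = step(mixed), step(mixed)
print("interpenetrating halves differ by", np.linalg.norm(x1 - x2))  # 0.0
```

The only difference between the two cases is the mixing step, which is exactly the diffusion that the separated bodies lack.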

Which is to say, the map that is Manfred One is statically the same pattern as my whole brain, but it's causally different. So is "I" the pattern, or is "I" the causal system?

In this sort of situation I am happy to stick with common sense, and thus when I say "me," I think it's referring to the causal system. But I'm not very sure.

Going back to the Ebborians, one interesting thing about that post is the conflict between common sense and common sense. It seems like common sense that each Ebborian is equally much one person, but it also seems like common sense that, watching an Ebborian divide, there is no moment where the amount of subjective experience should change, and so the amount of subjective experience should be proportional to thickness. But as it is said, just because there are two opposing ideas doesn't mean one of them is right.

On the questions of subjective experience raised in that post, I think this mostly gets cleared up by precise description and anthropic narrowness. I'm unsure of the relative sizes of this margin and the proof, but the sketch is to replace a mysterious "subjective experience" that spans copies with the individual experiences of people who use a TDT-like decision theory to choose so that they individually achieve good outcomes, given their existence.