Where Physics Meets Experience

Followup to: Decoherence, Where Philosophy Meets Science

Once upon a time, there was an alien species, whose planet hovered in the void of a universe with laws almost like our own. They would have been alien to us, but of course they did not think of themselves as alien. They communicated via rapid flashes of light, rather than sound. We’ll call them the Ebborians.

Ebborians reproduce by fission, an adult dividing into two new individuals. They share genetic material, but not through sexual recombination; Ebborian adults swap genetic material with each other. They have two eyes, four legs, and two hands, letting a fissioned Ebborian survive long enough to regrow.

Human DNA is built in a double helix; unzipping the helix a little at a time produces two stretches of single strands of DNA. Each single strand attracts complementary bases, producing a new double strand. At the end of the operation, a DNA double helix has turned into two double helices. Hence earthly life.

Ebborians fission their brains, as well as their bodies, by a process something like how human DNA divides.

Imagine an Ebborian brain as a flat sheet of paper, computing in a way that is more electrical than chemical—charges flowing down conductive pathways.

When it’s time for an Ebborian to fission, the brain-paper splits down its thickness into two sheets of paper. Each new sheet is capable of conducting electricity on its own. Indeed, the Ebborian(s) stays conscious throughout the whole fissioning process. Over time, the brain-paper grows thick enough to fission again.

Electricity flows through Ebborian brains faster than human neurons fire. But the Ebborian brain is constrained by its two-dimensionality. An Ebborian brain-paper must split down its thickness while retaining the integrity of its program. Ebborian evolution took the cheap way out: the brain-paper computes in a purely two-dimensional way. The Ebborians have much faster neuron-equivalents, but they are far less interconnected.

On the whole, Ebborians think faster than humans and remember less. They are less susceptible to habit; they recompute what we would cache. They would be incredulous at the idea that a human neuron might be connected to a thousand neighbors, and equally incredulous at the idea that our axons and dendrites propagate signals at only a few meters per second.

The Ebborians have no concept of parents, children, or sexuality. Every adult Ebborian remembers fissioning many times. But Ebborian memories quickly fade if not used; no one knows the last common ancestor of those now alive.

In principle, an Ebborian personality can be immortal. Yet an Ebborian remembers less life than a seventy-year-old human. They retain only the most important highlights of their last few millennia. Is this immortality? Is it death?

The Ebborians had to rediscover natural selection from scratch, because no one retained their memories of being a fish.

But I digress from my tale.

Today, the Ebborians have gathered to celebrate a day which all present will remember for hundreds of years. They have discovered (they believe) the Ultimate Grand Unified Theory of Everything for their universe. The theory which seems, at last, to explain every known fundamental physical phenomenon—to predict what every instrument will measure, in every experiment whose initial conditions are exactly known, and which can be calculated on available computers.

“But wait!” cries an Ebborian. (We’ll call this one Po’mi.) “But wait!”, cries Po’mi, “There are still questions the Unified Theory can’t answer! During the fission process, when exactly does one Ebborian consciousness become two separate people?”

The gathered Ebborians look at each other. Finally, there speaks the moderator of the gathering, the second-foremost Ebborian on the planet: the much-respected Nharglane of Ebbore, who achieved his position through consistent gentleness and courtesy.

“Well,” Nharglane says, “I admit I can’t answer that one—but is it really a question of fundamental physics?”

“I wouldn’t even call that a ‘question’,” snorts De’da the Ebborian, “seeing as how there’s no experimental test whose result depends on the answer.”

“On the contrary,” retorts Po’mi, “all our experimental results ultimately come down to our experiences. If a theory of physics can’t predict what we’ll experience, what good is it?”

De’da shrugs. “One person, two people—how does that make a difference even to experience? How do you tell even internally whether you’re one person or two people? Of course, if you look over and see your other self, you know you’re finished dividing—but by that time your brain has long since finished splitting.”

“Clearly,” says Po’mi, “at any given point, whatever is having an experience is one person. So it is never necessary to tell whether you are one person or two people. You are always one person. But at any given time during the split, does there exist another, different consciousness as yet, with its own awareness?”

De’da performs an elaborate quiver, the Ebborian equivalent of waving one’s hands. “When the brain splits, it splits fast enough that there isn’t much time where the question would be ambiguous. One instant, all the electrical charges are moving as a whole. The next instant, they move separately.”

“That’s not true,” says Po’mi. “You can’t sweep the problem under the rug that easily. There is a quite appreciable time—many picoseconds—when the two halves of the brain are within distance for the moving electrical charges in each half to tug on the other. Not quite causally separated, and not quite the same computation either. Certainly there is a time when there is definitely one person, and a time when there are definitely two people. But at which exact point in between are there two distinct conscious experiences?”

“My challenge stands,” says De’da. “How does it make a difference, even a difference of first-person experience, as to when you say the split occurs? There’s no third-party experiment you can perform to tell you the answer. And no difference of first-person experience, either. Your belief that consciousness must ‘split’ at some particular point, stems from trying to model consciousness as a big rock of awareness that can only be in one place at a time. There’s no third-party experiment, and no first-person experience, that can tell you when you’ve split; the question is meaningless.”

“If experience is meaningless,” retorts Po’mi, “then so are all our scientific theories, which are merely intended to explain our experiences.”

“If I may,” says another Ebborian, named Yu’el, “I think I can refine my honorable colleague Po’mi’s dilemma. Suppose that you anesthetized one of us—”

(Ebborians use an anesthetic that effectively shuts off electrical power to the brain—no processing or learning occurs while an Ebborian is anesthetized.)

“—and then flipped a coin. If the coin comes up heads, you split the subject while they are unconscious. If the coin comes up tails, you leave the subject as is. When the subject goes to sleep, should they anticipate a 2/3 probability of seeing the coin come up heads, or anticipate a 1/2 probability of seeing the coin come up heads? If you answer 2/3, then there is a difference of anticipation that could be made to depend on exactly when you split.”
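
(Where does the 2/3 come from? From weighting each waking copy equally: if the coin lands heads, two copies wake having seen heads; if it lands tails, a single copy wakes having seen tails. A minimal sketch of that bookkeeping, in Python, follows; the equal-weight-per-copy rule is an illustrative assumption, and is precisely what the Ebborians go on to dispute.)

    # Copy-counting bookkeeping behind the "2/3" answer.
    # Heads (probability 1/2): the subject is split, so 2 copies wake having seen heads.
    # Tails (probability 1/2): no split, so 1 copy wakes having seen tails.
    prior = {"heads": 0.5, "tails": 0.5}
    copies = {"heads": 2, "tails": 1}

    weight = {coin: prior[coin] * copies[coin] for coin in prior}
    p_heads_by_copies = weight["heads"] / sum(weight.values())  # 2/3
    p_heads_by_coin = prior["heads"]                             # 1/2
    print(p_heads_by_copies, p_heads_by_coin)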

“Clearly, then,” says De’da, “the answer is 1/2, since answering 2/3 gets us into paradoxical and ill-defined issues.”

Yu’el looks thoughtful. “What if we split you into 512 parts while you were anesthetized? Would you still answer a probability of 1/2 for seeing the coin come up heads?”

De’da shrugs. “Certainly. When I went to sleep, I would figure on a 1/2 probability that I wouldn’t get split at all.”

“Hmm...” Yu’el says. “All right, suppose that we are definitely going to split you into 16 parts. 3 of you will wake up in a red room, 13 of you will wake up in a green room. Do you anticipate a 13/16 probability of waking up in a green room?”

“I anticipate waking up in a green room with near-1 probability,” replies De’da, “and I anticipate waking up in a red room with near-1 probability. My future selves will experience both outcomes.”

“But I’m asking about your personal anticipation,” Yu’el persists. “When you fall asleep, how much do you anticipate seeing a green room? You can’t see both room colors at once—that’s not an experience anyone will have—so which color do you personally anticipate more?”

De’da shakes his head. “I can see where this is going; you plan to ask what I anticipate in cases where I may or may not be split. But I must deny that your question has an objective answer, precisely because of where it leads. Now, I do say to you, that I care about my future selves. If you ask me whether I would like each of my green-room selves, or each of my red-room selves, to receive ten dollars, I will of course choose the green-roomers—but I don’t care to follow this notion of ‘personal anticipation’ where you are taking it.”

“While you are anesthetized,” says Yu’el, “I will flip a coin; if the coin comes up heads, I will put 3 of you into red rooms and 13 of you into green rooms. If the coin comes up tails, I will reverse the proportion. If you wake up in a green room, what is your posterior probability that the coin came up heads?”

De’da pauses. “Well...” he says slowly, “Clearly, some of me will be wrong, no matter which reasoning method I use—but if you offer me a bet, I can minimize the number of me who bet poorly, by using the general policy, of each self betting as if the posterior probability of their color dominating is 13/16. And if you try to make that judgment depend on the details of the splitting process, then it just depends on how whoever offers the bet counts Ebborians.”
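
(De’da’s 13/16 is the same copy-counting bookkeeping run through a Bayesian update: weight each coin outcome by how many selves wake in a green room. A minimal sketch, again treating equal weight per self as an illustrative assumption rather than a settled rule:)

    # De'da's betting policy, made explicit.
    # Heads: 3 selves wake in red rooms, 13 in green. Tails: 13 red, 3 green.
    prior = {"heads": 0.5, "tails": 0.5}
    green_selves = {"heads": 13, "tails": 3}

    weight = {coin: prior[coin] * green_selves[coin] for coin in prior}
    p_heads_given_green = weight["heads"] / sum(weight.values())
    print(p_heads_given_green)  # 0.8125, i.e. 13/16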

Yu’el nods. “I can see what you are saying, De’da. But I just can’t make myself believe it, at least not yet. If there were to be 3 of me waking up in red rooms, and a billion of me waking up in green rooms, I would quite strongly anticipate seeing a green room when I woke up. Just the same way that I anticipate not winning the lottery. And if the proportions of three red to a billion green, followed from a coin coming up heads; but the reverse proportion, of a billion red to three green, followed from tails; and I woke up and saw a red room; why, then, I would be nearly certain—on a quite personal level—that the coin had come up tails.”

“That stance exposes you to quite a bit of trouble,” notes De’da.

Yu’el nods. “I can even see some of the troubles myself. Suppose you split brains only a short distance apart from each other, so that they could, in principle, be fused back together again? What if there was an Ebborian with a brain thick enough to be split into a million parts, and the parts could then re-unite? Even if it’s not biologically possible, we could do it with a computer-based mind, someday. Now, suppose you split me into 500,000 brains who woke up in green rooms, and 3 much thicker brains who woke up in red rooms. I would surely anticipate seeing the green room. But most of me who see the green room will see nearly the same thing—different in tiny details, perhaps, enough to differentiate our experience, but such details are soon forgotten. So now suppose that my 500,000 green selves are reunited into one Ebborian, and my 3 red selves are reunited into one Ebborian. Have I just sent nearly all of my ‘subjective probability’ into the green future self, even though it is now only one of two? With only a little more work, you can see how a temporary expenditure of computing power, or a nicely refined brain-splitter and a dose of anesthesia, would let you have a high subjective probability of winning any lottery. At least any lottery that involved splitting you into pieces.”

De’da furrows his eyes. “So have you not just proved your own theory to be nonsense?”

“I’m not sure,” says Yu’el. “At this point, I’m not even sure the conclusion is wrong.”

“I didn’t suggest your conclusion was wrong,” says De’da, “I suggested it was nonsense. There’s a difference.”

“Perhaps,” says Yu’el. “Perhaps it will indeed turn out to be nonsense, when I know better. But if so, I don’t quite know better yet. I can’t quite see how to eliminate the notion of subjective anticipation from my view of the universe. I would need something to replace it, something to re-fill the role that anticipation currently plays in my worldview.”

De’da shrugs. “Why not just eliminate ‘subjective anticipation’ outright?”

“For one thing,” says Yu’el, “I would then have no way to express my surprise at the orderliness of the universe. Suppose you claimed that the universe was actually made up entirely of random experiences, brains temporarily coalescing from dust and experiencing all possible sensory data. Then if I don’t count individuals, or weigh their existence somehow, that chaotic hypothesis would predict my existence as strongly as does science. The realization of all possible chaotic experiences would predict my own experience with probability 1. I need to keep my surprise at having this particular orderly experience, to justify my anticipation of seeing an orderly future. If I throw away the notion of subjective anticipation, then how do I differentiate the chaotic universe from the orderly one? Presumably there are Yu’els, somewhere in time and space (for the universe is spatially infinite) who are about to have a really chaotic experience. I need some way of saying that these Yu’els are rare, or weigh little—some way of mostly anticipating that I won’t sprout wings and fly away. I’m not saying that my current way of doing this is good bookkeeping, or even coherent bookkeeping; but I can’t just delete the bookkeeping without a more solid understanding to put in its place. I need some way to say that there are versions of me who see one thing, and versions of me who see something else, but there’s some kind of different weight on them. Right now, what I try to do is count copies—but I don’t know exactly what constitutes a copy.”

Po’mi clears his throat, and speaks again. “So, Yu’el, you agree with me that there exists a definite and factual question as to exactly when there are two conscious experiences, instead of one.”

“That, I do not concede,” says Yu’el. “All that I have said may only be a recital of my own confusion. You are too quick to fix the language of your beliefs, when there are words in it that, by your own admission, you do not understand. No matter how fundamental your experience feels to you, it is not safe to trust that feeling, until experience is no longer something you are confused about. There is a black box here, a mystery. Anything could be inside that box—any sort of surprise—a shock that shatters everything you currently believe about consciousness. Including upsetting your belief that experience is fundamental. In fact, that strikes me as a surprise you should anticipate—though it will still come as a shock.”

“But then,” says Po’mi, “do you at least agree that if our physics does not specify which experiences are experienced, or how many of them, or how much they ‘weigh’, then our physics must be incomplete?”

“No,” says Yu’el, “I don’t concede that either. Because consider that, even if a physics is known—even if we construct a universe with very simple physics, much simpler than our own Unified Theory—I can still present the same split-brain dilemmas, and they will still seem just as puzzling. This suggests that the source of the confusion is not in our theories of fundamental physics. It is on a higher level of organization. We can’t compute exactly how proteins will fold up; but this is not a deficit in our theory of atomic dynamics, it is a deficit of computing power. We don’t know what makes sharkras bloom only in spring; but this is not a deficit in our Unified Theory, it is a deficit in our biology—we don’t possess the technology to take the sharkras apart on a molecular level to find out how they work. What you are pointing out is a gap in our science of consciousness, which would present us with just the same puzzles even if we knew all the fundamental physics. I see no work here for physicists, at all.”

Po’mi smiles faintly at this, and is about to reply, when a listening Ebborian shouts, “What, have you begun to believe in zombies? That when you specify all the physical facts about a universe, there are facts about consciousness left over?”

“No!” says Yu’el. “Of course not! You can know the fundamental physics of a universe, hold all the fundamental equations in your mind, and still not have all the physical facts. You may not know why sharkras bloom in the spring. But if you could actually hold the entire fundamental physical state of the sharkra in your mind, and understand all its levels of organization, then you would necessarily know why it blooms—there would be no fact left over, from outside physics. When I say, ‘Imagine running the split-brain experiment in a universe with simple known physics,’ you are not concretely imagining that universe, in every detail. You are not actually specifying the entire physical makeup of an Ebborian in your imagination. You are only imagining that you know it. But if you actually knew how to build an entire conscious being from scratch, out of paperclips and rubber bands, you would have a great deal of knowledge that you do not presently have. This is important information that you are missing! Imagining that you have it, does not give you the insights that would follow from really knowing the full physical state of a conscious being.”

“So,” Yu’el continues, “we can imagine ourselves knowing the fundamental physics, and imagine an Ebborian brain splitting, and find that we don’t know exactly when the consciousness has split. Because we are not concretely imagining a complete and detailed description of a conscious being, with full comprehension of the implicit higher levels of organization. There are knowledge gaps here, but they are not gaps of physics. They are gaps in our understanding of consciousness. I see no reason to think that fundamental physics has anything to do with such questions.”

“Well then,” Po’mi says, “I have a puzzle I should like you to explain, Yu’el. As you know, it was discovered not many years ago, that our universe has four spatial dimensions, rather than three dimensions, as it first appears.”

“Aye,” says Nharglane of Ebbore, “this was a key part in our working-out of the Unified Theory. Our models would be utterly at a loss to account for observed experimental results, if we could not model the fourth dimension, and differentiate the fourth-dimensional density of materials.”

“And we also discovered,” continues Po’mi, “that our very planet of Ebbore, including all the people on it, has a four-dimensional thickness, and is constantly fissioning along that thickness, just as our brains do. Only the fissioned sides of our planet do not remain in contact, as our new selves do; the sides separate into the fourth-dimensional void.”

Nharglane nods. “Yes, it was rather a surprise to realize that the whole world is duplicated over and over. I shall remember that realization for a long time indeed. It is a good thing we Ebborians had our experience with self-fissioning, to prepare us for the shock. Otherwise we might have been driven mad, and embraced absurd physical theories.”

“Well,” says Po’mi, “when the world splits down its four-dimensional thickness, it does not always split exactly evenly. Indeed, it is not uncommon to see nine-tenths of the four-dimensional thickness in one side.”

“Really?” says Yu’el. “My knowledge of physics is not so great as yours, but—”

“The statement is correct,” says the respected Nharglane of Ebbore.

“Now,” says Po’mi, “if fundamental physics has nothing to do with consciousness, can you tell me why the subjective probability of finding ourselves in a side of the split world, should be exactly proportional to the square of the thickness of that side?”

There is a great terrible silence.

“WHAT?” says Yu’el.

“WHAT?” says De’da.

“WHAT?” says Nharglane.

“WHAT?” says the entire audience of Ebborians.

To be continued...

Part of The Quantum Physics Sequence

Next post: “Where Experience Confuses Physicists”

Previous post: “Which Basis Is More Fundamental?”