# Where Physics Meets Experience

Followup to: Decoherence, Where Philosophy Meets Science

Once upon a time, there was an alien species, whose planet hovered in the void of a universe with laws almost like our own. They would have been alien to us, but of course they did not think of themselves as alien. They communicated via rapid flashes of light, rather than sound. We’ll call them the Ebborians.

Ebborians reproduce by fission, an adult dividing into two new individuals. They share genetic material, but not through sexual recombination; Ebborian adults swap genetic material with each other. They have two eyes, four legs, and two hands, letting a fissioned Ebborian survive long enough to regrow.

Human DNA is built in a double helix; unzipping the helix a little at a time produces two stretches of single strands of DNA. Each single strand attracts complementary bases, producing a new double strand. At the end of the operation, a DNA double helix has turned into two double helices. Hence earthly life.

Ebborians fission their brains, as well as their bodies, by a process something like how human DNA divides.

Imagine an Ebborian brain as a flat sheet of paper, computing in a way that is more electrical than chemical—charges flowing down conductive pathways.

When it’s time for an Ebborian to fission, the brain-paper splits down its thickness into two sheets of paper. Each new sheet is capable of conducting electricity on its own. Indeed, the Ebborian(s) stays conscious throughout the whole fissioning process. Over time, the brain-paper grows thick enough to fission again.

Electricity flows through Ebborian brains faster than human neurons fire. But the Ebborian brain is constrained by its two-dimensionality. An Ebborian brain-paper must split down its thickness while retaining the integrity of its program. Ebborian evolution took the cheap way out: the brain-paper computes in a purely two-dimensional way. The Ebborians have much faster neuron-equivalents, but they are far less interconnected.

On the whole, Ebborians think faster than humans and remember less. They are less susceptible to habit; they recompute what we would cache. They would be incredulous at the idea that a human neuron might be connected to a thousand neighbors, and equally incredulous at the idea that our axons and dendrites propagate signals at only a few meters per second.

The Ebborians have no concept of parents, children, or sexuality. Every adult Ebborian remembers fissioning many times. But Ebborian memories quickly fade if not used; no one knows the last common ancestor of those now alive.

In principle, an Ebborian personality can be immortal. Yet an Ebborian remembers less life than a seventy-year-old human. They retain only the most important highlights of their last few millennia. Is this immortality? Is it death?

The Ebborians had to rediscover natural selection from scratch, because no one retained their memories of being a fish.

But I digress from my tale.

Today, the Ebborians have gathered to celebrate a day which all present will remember for hundreds of years. They have discovered (they believe) the Ultimate Grand Unified Theory of Everything for their universe. The theory which seems, at last, to explain every known fundamental physical phenomenon—to predict what every instrument will measure, in every experiment whose initial conditions are exactly known, and which can be calculated on available computers.

“But wait!” cries an Ebborian. (We’ll call this one Po’mi.) “But wait!” cries Po’mi. “There are still questions the Unified Theory can’t answer! During the fission process, when exactly does one Ebborian consciousness become two separate people?”

The gathered Ebborians look at each other. Finally, there speaks the moderator of the gathering, the second-foremost Ebborian on the planet: the much-respected Nharglane of Ebbore, who achieved his position through consistent gentleness and courtesy.

“Well,” Nharglane says, “I admit I can’t answer that one—but is it really a question of fundamental physics?”

“I wouldn’t even call that a ‘question’,” snorts De’da the Ebborian, “seeing as how there’s no experimental test whose result depends on the answer.”

“On the contrary,” retorts Po’mi, “all our experimental results ultimately come down to our experiences. If a theory of physics can’t predict what we’ll experience, what good is it?”

De’da shrugs. “One person, two people—how does that make a difference even to experience? How do you tell even internally whether you’re one person or two people? Of course, if you look over and see your other self, you know you’re finished dividing—but by that time your brain has long since finished splitting.”

“Clearly,” says Po’mi, “at any given point, whatever is having an experience is one person. So it is never necessary to tell whether you are one person or two people. You are always one person. But at any given time during the split, does there exist another, different consciousness as yet, with its own awareness?”

De’da performs an elaborate quiver, the Ebborian equivalent of waving one’s hands. “When the brain splits, it splits fast enough that there isn’t much time where the question would be ambiguous. One instant, all the electrical charges are moving as a whole. The next instant, they move separately.”

“That’s not true,” says Po’mi. “You can’t sweep the problem under the rug that easily. There is a quite appreciable time—many picoseconds—when the two halves of the brain are within distance for the moving electrical charges in each half to tug on the other. Not quite causally separated, and not quite the same computation either. Certainly there is a time when there is definitely one person, and a time when there are definitely two people. But at which exact point in between are there two distinct conscious experiences?”

“My challenge stands,” says De’da. “How does it make a difference, even a difference of first-person experience, as to when you say the split occurs? There’s no third-party experiment you can perform to tell you the answer. And no difference of first-person experience, either. Your belief that consciousness must ‘split’ at some particular point stems from trying to model consciousness as a big rock of awareness that can only be in one place at a time. There’s no third-party experiment, and no first-person experience, that can tell you when you’ve split; the question is meaningless.”

“If experience is meaningless,” retorts Po’mi, “then so are all our scientific theories, which are merely intended to explain our experiences.”

“If I may,” says another Ebborian, named Yu’el, “I think I can refine my honorable colleague Po’mi’s dilemma. Suppose that you anesthetized one of us—”

(Ebborians use an anesthetic that effectively shuts off electrical power to the brain—no processing or learning occurs while an Ebborian is anesthetized.)

“—and then flipped a coin. If the coin comes up heads, you split the subject while they are unconscious. If the coin comes up tails, you leave the subject as is. When the subject goes to sleep, should they anticipate a 2/3 probability of seeing the coin come up heads, or anticipate a 1/2 probability of seeing the coin come up heads? If you answer 2/3, then there is a difference of anticipation that could be made to depend on exactly when you split.”
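For readers who want the arithmetic spelled out: the two answers correspond to two different counting rules. A minimal sketch in Python; the copy-counting rule in it is precisely the assumption the dialogue goes on to dispute, not an established result.

```python
from fractions import Fraction

# Heads -> the subject is split into 2 copies; tails -> stays 1 copy.
outcomes = {"heads": 2, "tails": 1}  # number of waking copies per outcome

# Rule A: anticipate by the coin probability alone (De'da's answer).
p_coin = {o: Fraction(1, 2) for o in outcomes}

# Rule B: weight each outcome by how many copies wake up and observe it.
total = sum(Fraction(1, 2) * n for n in outcomes.values())
p_copy = {o: Fraction(1, 2) * n / total for o, n in outcomes.items()}

print(p_coin["heads"])  # 1/2
print(p_copy["heads"])  # 2/3: two of the three waking copies see heads
```

Rule A conditions only on the coin; Rule B treats each waking copy as an equally weighted observer, which is where the 2/3 comes from.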

“Clearly, then,” says De’da, “the answer is 1/2, since answering 2/3 gets us into paradoxical and ill-defined issues.”

Yu’el looks thoughtful. “What if we split you into 512 parts while you were anesthetized? Would you still answer a probability of 1/2 for seeing the coin come up heads?”

De’da shrugs. “Certainly. When I went to sleep, I would figure on a 1/2 probability that I wouldn’t get split at all.”

“Hmm...” Yu’el says. “All right, suppose that we are definitely going to split you into 16 parts. 3 of you will wake up in a red room, 13 of you will wake up in a green room. Do you anticipate a 13/16 probability of waking up in a green room?”

“I anticipate waking up in a green room with near-1 probability,” replies De’da, “and I anticipate waking up in a red room with near-1 probability. My future selves will experience both outcomes.”

“But I’m asking about your personal anticipation,” Yu’el persists. “When you fall asleep, how much do you anticipate seeing a green room? You can’t see both room colors at once—that’s not an experience anyone will have—so which color do you personally anticipate more?”

De’da shakes his head. “I can see where this is going; you plan to ask what I anticipate in cases where I may or may not be split. But I must deny that your question has an objective answer, precisely because of where it leads. Now, I do say to you that I care about my future selves. If you ask me whether I would like each of my green-room selves, or each of my red-room selves, to receive ten dollars, I will of course choose the green-roomers—but I don’t care to follow this notion of ‘personal anticipation’ where you are taking it.”

“While you are anesthetized,” says Yu’el, “I will flip a coin; if the coin comes up heads, I will put 3 of you into red rooms and 13 of you into green rooms. If the coin comes up tails, I will reverse the proportion. If you wake up in a green room, what is your posterior probability that the coin came up heads?”

De’da pauses. “Well...” he says slowly. “Clearly, some of me will be wrong, no matter which reasoning method I use—but if you offer me a bet, I can minimize the number of me who bet poorly, by using the general policy of each self betting as if the posterior probability of their color dominating is 13/16. And if you try to make that judgment depend on the details of the splitting process, then it just depends on how whoever offers the bet counts Ebborians.”
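De’da’s “minimize the number of me who bet poorly” policy is ordinary Bayesian updating with copies counted as observations. A sketch under that assumption, using the dialogue’s own numbers (16 copies, fair coin):

```python
from fractions import Fraction

# Coin heads: 3 red copies and 13 green copies; tails: the reverse.
copies = {"heads": {"red": 3, "green": 13},
          "tails": {"red": 13, "green": 3}}
prior = Fraction(1, 2)  # fair coin

def posterior(room):
    # Weight each coin outcome by the number of copies waking in `room`,
    # then normalize: Bayes' rule with copy-counting as the likelihood.
    joint = {coin: prior * copies[coin][room] for coin in copies}
    total = sum(joint.values())
    return {coin: p / total for coin, p in joint.items()}

print(posterior("green")["heads"])  # 13/16
print(posterior("red")["heads"])    # 3/16
```

Each self betting at 13/16 on its own color dominating makes 13 of the 16 copies bet correctly, whichever way the coin fell.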

Yu’el nods. “I can see what you are saying, De’da. But I just can’t make myself believe it, at least not yet. If there were to be 3 of me waking up in red rooms, and a billion of me waking up in green rooms, I would quite strongly anticipate seeing a green room when I woke up—just the same way that I anticipate not winning the lottery. And if the proportion of three red to a billion green followed from a coin coming up heads, but the reverse proportion, of a billion red to three green, followed from tails, and I woke up and saw a red room—why, then, I would be nearly certain, on a quite personal level, that the coin had come up tails.”

“That stance exposes you to quite a bit of trouble,” notes De’da.

Yu’el nods. “I can even see some of the troubles myself. Suppose you split brains only a short distance apart from each other, so that they could, in principle, be fused back together again? What if there was an Ebborian with a brain thick enough to be split into a million parts, and the parts could then re-unite? Even if it’s not biologically possible, we could do it with a computer-based mind, someday. Now, suppose you split me into 500,000 brains who woke up in green rooms, and 3 much thicker brains who woke up in red rooms. I would surely anticipate seeing the green room. But most of me who see the green room will see nearly the same thing—different in tiny details, perhaps, enough to differentiate our experience, but such details are soon forgotten. So now suppose that my 500,000 green selves are reunited into one Ebborian, and my 3 red selves are reunited into one Ebborian. Have I just sent nearly all of my ‘subjective probability’ into the green future self, even though it is now only one of two? With only a little more work, you can see how a temporary expenditure of computing power, or a nicely refined brain-splitter and a dose of anesthesia, would let you have a high subjective probability of winning any lottery. At least any lottery that involved splitting you into pieces.”
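Yu’el’s lottery worry can be put in numbers. Under the naive copy-counting rule (an assumption the dialogue is probing, not settled bookkeeping), his split assigns almost all anticipation to green before the merge, yet counting the two merged successors afterward gives 1/2:

```python
from fractions import Fraction

# Before merging: 500,000 thin green copies, 3 thick red copies.
green_copies, red_copies = 500_000, 3
total = green_copies + red_copies

# Copy-counting anticipation: weight per copy, ignoring brain thickness.
p_green = Fraction(green_copies, total)
print(p_green)        # 500000/500003: near-certain anticipation of green

# After each group is fused into a single Ebborian, only 2 successors
# remain; copy-counting over successors now says 1/2. Where did the
# "subjective probability" go? That is Yu'el's bookkeeping problem.
p_green_after = Fraction(1, 2)
print(p_green_after)  # 1/2
```

The two lines disagree only because “count copies” gives different answers before and after the merge, which is exactly the inconsistency Yu’el is pointing at.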

De’da furrows his eyes. “So have you not just proved your own theory to be nonsense?”

“I’m not sure,” says Yu’el. “At this point, I’m not even sure the conclusion is wrong.”

“I didn’t suggest your conclusion was wrong,” says De’da. “I suggested it was nonsense. There’s a difference.”

“Perhaps,” says Yu’el. “Perhaps it will indeed turn out to be nonsense, when I know better. But if so, I don’t quite know better yet. I can’t quite see how to eliminate the notion of subjective anticipation from my view of the universe. I would need something to replace it, something to re-fill the role that anticipation currently plays in my worldview.”

De’da shrugs. “Why not just eliminate ‘subjective anticipation’ outright?”

“For one thing,” says Yu’el, “I would then have no way to express my surprise at the orderliness of the universe. Suppose you claimed that the universe was actually made up entirely of random experiences, brains temporarily coalescing from dust and experiencing all possible sensory data. Then if I don’t count individuals, or weigh their existence somehow, that chaotic hypothesis would predict my existence as strongly as does science. The realization of all possible chaotic experiences would predict my own experience with probability 1. I need to keep my surprise at having this particular orderly experience, to justify my anticipation of seeing an orderly future. If I throw away the notion of subjective anticipation, then how do I differentiate the chaotic universe from the orderly one? Presumably there are Yu’els, somewhere in time and space (for the universe is spatially infinite) who are about to have a really chaotic experience. I need some way of saying that these Yu’els are rare, or weigh little—some way of mostly anticipating that I won’t sprout wings and fly away. I’m not saying that my current way of doing this is good bookkeeping, or even coherent bookkeeping; but I can’t just delete the bookkeeping without a more solid understanding to put in its place. I need some way to say that there are versions of me who see one thing, and versions of me who see something else, but there’s some kind of different weight on them. Right now, what I try to do is count copies—but I don’t know exactly what constitutes a copy.”

Po’mi clears his throat, and speaks again. “So, Yu’el, you agree with me that there exists a definite and factual question as to exactly when there are two conscious experiences, instead of one.”

“That, I do not concede,” says Yu’el. “All that I have said may only be a recital of my own confusion. You are too quick to fix the language of your beliefs, when there are words in it that, by your own admission, you do not understand. No matter how fundamental your experience feels to you, it is not safe to trust that feeling, until experience is no longer something you are confused about. There is a black box here, a mystery. Anything could be inside that box—any sort of surprise—a shock that shatters everything you currently believe about consciousness. Including upsetting your belief that experience is fundamental. In fact, that strikes me as a surprise you should anticipate—though it will still come as a shock.”

“But then,” says Po’mi, “do you at least agree that if our physics does not specify which experiences are experienced, or how many of them, or how much they ‘weigh’, then our physics must be incomplete?”

“No,” says Yu’el, “I don’t concede that either. Because consider that, even if a physics is known—even if we construct a universe with very simple physics, much simpler than our own Unified Theory—I can still present the same split-brain dilemmas, and they will still seem just as puzzling. This suggests that the source of the confusion is not in our theories of fundamental physics. It is on a higher level of organization. We can’t compute exactly how proteins will fold up; but this is not a deficit in our theory of atomic dynamics, it is a deficit of computing power. We don’t know what makes sharkras bloom only in spring; but this is not a deficit in our Unified Theory, it is a deficit in our biology—we don’t possess the technology to take the sharkras apart on a molecular level to find out how they work. What you are pointing out is a gap in our science of consciousness, which would present us with just the same puzzles even if we knew all the fundamental physics. I see no work here for physicists, at all.”

Po’mi smiles faintly at this, and is about to reply, when a listening Ebborian shouts, “What, have you begun to believe in zombies? That when you specify all the physical facts about a universe, there are facts about consciousness left over?”

“No!” says Yu’el. “Of course not! You can know the fundamental physics of a universe, hold all the fundamental equations in your mind, and still not have all the physical facts. You may not know why sharkras bloom in the spring. But if you could actually hold the entire fundamental physical state of the sharkra in your mind, and understand all its levels of organization, then you would necessarily know why it blooms—there would be no fact left over, from outside physics. When I say, ‘Imagine running the split-brain experiment in a universe with simple known physics,’ you are not concretely imagining that universe, in every detail. You are not actually specifying the entire physical makeup of an Ebborian in your imagination. You are only imagining that you know it. But if you actually knew how to build an entire conscious being from scratch, out of paperclips and rubber bands, you would have a great deal of knowledge that you do not presently have. This is important information that you are missing! Imagining that you have it does not give you the insights that would follow from really knowing the full physical state of a conscious being.”

“So,” Yu’el continues, “we can imagine ourselves knowing the fundamental physics, and imagine an Ebborian brain splitting, and find that we don’t know exactly when the consciousness has split. Because we are not concretely imagining a complete and detailed description of a conscious being, with full comprehension of the implicit higher levels of organization. There are knowledge gaps here, but they are not gaps of physics. They are gaps in our understanding of consciousness. I see no reason to think that fundamental physics has anything to do with such questions.”

“Well then,” Po’mi says, “I have a puzzle I should like you to explain, Yu’el. As you know, it was discovered not many years ago that our universe has four spatial dimensions, rather than three dimensions, as it first appears.”

“Aye,” says Nharglane of Ebbore, “this was a key part in our working-out of the Unified Theory. Our models would be utterly at a loss to account for observed experimental results, if we could not model the fourth dimension, and differentiate the fourth-dimensional density of materials.”

“And we also discovered,” continues Po’mi, “that our very planet of Ebbore, including all the people on it, has a four-dimensional thickness, and is constantly fissioning along that thickness, just as our brains do. Only the fissioned sides of our planet do not remain in contact, as our new selves do; the sides separate into the fourth-dimensional void.”

Nharglane nods. “Yes, it was rather a surprise to realize that the whole world is duplicated over and over. I shall remember that realization for a long time indeed. It is a good thing we Ebborians had our experience with self-fissioning, to prepare us for the shock. Otherwise we might have been driven mad, and embraced absurd physical theories.”

“Well,” says Po’mi, “when the world splits down its four-dimensional thickness, it does not always split exactly evenly. Indeed, it is not uncommon to see nine-tenths of the four-dimensional thickness in one side.”

“Really?” says Yu’el. “My knowledge of physics is not so great as yours, but—”

“The statement is correct,” says the respected Nharglane of Ebbore.

“Now,” says Po’mi, “if fundamental physics has nothing to do with consciousness, can you tell me why the subjective probability of finding ourselves in a side of the split world should be exactly proportional to the square of the thickness of that side?”

There is a great terrible silence.

“WHAT?” says Yu’el.

“WHAT?” says De’da.

“WHAT?” says Nharglane.

“WHAT?” says the entire audience of Ebborians.

To be continued...

Next post: “Where Experience Confuses Physicists”

Previous post: “Which Basis Is More Fundamental?”

• This post just confirms that you should take your blooking efforts and turn them into money AND reach a much wider and more lasting audience by writing a book. I think you could do a book like GEB … a long, quirky, multidisciplinary intro to cognitive biases, Bayes’ theorem, Physics, the Singularity, and whatever else you might like, and people will buy it if you write like this.

• Great writing. It’s not hard to see where you took a lot of the debates about your writings and lifted them up into something greater—dare I say, art?

The great thing about novels and plays (this could be either) is that the form allows the writer to capture and express multiple points of view, without committing to any of them.

I think this post is superior to your past ones in that it reflects a deeper understanding of various viewpoints on this topic, and expresses the best versions of them, rather than caricaturing views different from a particular one you promote in the post.

So I think you’re moving from passion plays towards Shakespeare.

Like I said, great writing!

• As I wrote part of this post to explicitly discuss Mitchell Porter’s position, I think it only fair to post Mitchell Porter’s comment here, where it should be more at home than in “On Philosophers”. -- Eliezer Yudkowsky

---

Mitchell Porter commented:

Before I get lost in these semantic and epistemic complexities, I will say once again what the problem is.

We are endeavoring to interpret the wavefunctions or state vectors of quantum mechanics: to form a hypothesis about the reality they describe. The hypothesis is: before decoherence there is one “quantum world”, after decoherence there are many quantum worlds. As the difference between “one world” and “more than one world” is discontinuous, but the process of decoherence is continuous, with no sharp boundary between before and after, I asked exactly where the transition from one world to more than one world occurs. The reply was that that is not an issue, since the answer would make no difference to the argument in the papers. I conceded that it makes no difference to this particular argument, but the issue itself must be faced; the existence of these worlds, if they are to be taken seriously, must be an objective matter.

Somehow, having attempted to argue for that last proposition, I find myself being asked to define what I mean by “existence”, to accept that someone’s existence can be “vague”, and who knows what else is going to come up. I accept the desirability of trying to elucidate fundamental concepts as thoroughly as possible. But can I first ask: If a person said that according to their theory of the universe, at one time you have one of something, and later on you have many copies of that same thing, but there’s no particular moment in time when the one becomes the many, and that doesn’t matter because the something only has a vague, fuzzy existence… wouldn’t you think that the theory might have a few problems, or at least be missing a part?

Everything I have said about worlds, and observers in worlds, and about the certainty of one’s own existence as an individual observer, has been meant to drive that home. That chain of relationships is the detailed reason why it is unacceptable to have a blasé attitude towards the conditions of existence of quantum worlds. They must be regarded as existing or not existing, in a completely objective, absolute, non-relative way, or the concept becomes a nonsense, because worlds must play the role of hosting entities whose existence is definitely not vague or relative, namely, us.

Does no one understand or sympathize with this line of thought?

I will get on with the philosophy in a moment. But I ask that those of you who may find yourselves in a protracted debate with me over these tangential questions, please consider anew the foregoing argument and ask yourself whether it is desirable or even possible to settle for a vague notion of “world”, given the theoretical burden it has to bear.

Caledonian asks what I mean by “existence”. I confess that I am unable to define it without using a synonym, which is not much of a definition. There may be quite a few similar basic indefinables, which we nonetheless manage to talk about; “negation” may be another example. It seems that all I can do is talk about it, and hope some recognition dawns. I know I already have a disagreement with Caledonian in this matter, because earlier this month he wrote here that existence is relative and depends on the possibility of interaction, something I would never say, because it confuses existence per se with something like knowability—the epistemic grounds whereby one observer may assert of one thing that it does indeed exist. We who live now, our existence was not knowable to anyone who lived a thousand years ago. Nonetheless, we do exist, here and now, and it is a fallacy to relativize our existence, and say “we exist for each other, but we don’t exist for those people in the past”. It is a basic confusion of knowability with reality.

Now, Unknown, what am I to do with you? Your line is that existence is a vague concept because I cannot define it without being circular, or that I cannot define it in a way which offers a clear decision procedure for existence. My line would be that we all know perfectly well what “existence” refers to—the property of being there, the property of being a part of reality, the property of not being nonexistent—but that the metaphysical depths of its nature are not so obvious. Again, one is constantly making implicit judgments about what does and does not exist. Does al Qaeda exist? Does Xenu exist? Does the special discount on milk at the corner store still exist? Existential judgments are ubiquitous in human thought. We all possess a basic facility with the concept. Does the inability to crisply define it or place it in an ontological scheme mean that one only has a vague concept of existence? I don’t think so, because I think the criterion of vagueness in a concept is that its referent, the specific thing which it designates, is underdetermined (i.e. there are several different things it might be referring to), not that the nature of the referent remains incompletely specified. I think the particular referent of the human concept of existence is unambiguously known, but the nature of that referent may be obscure to the human mind. But this is a complicated matter.

And one more time: this metaphysics is an interesting and even vital topic, but it is somewhat of a tangent from the main issue, which is the meaning of “world” in many worlds.

• How do Ebborians get different names when they fission?

• Thanks for the vote of confidence, anonymous; I hope you’re right.

By the way, before anyone asks, this post is not intended to suggest that quantum physics takes place in the fourth dimension—I just wanted to present the Ebborians with a different but analogous bizarre puzzle.

• “To deny qualia today is like the behaviorists who denied cognitive processes yesterday. You are smarter than that!”

Without being able to offer functional criteria for judging whether ‘cognitive processes’ exist, it would certainly be inappropriate to talk about them.

Cognitive psychologists not only managed to produce an experiment that would distinguish between “having cognitive processes” and “not having cognitive processes”, they performed it and resolved the dispute.

What experimental results would be enough to convince you that ‘qualia’ did not exist?

• The more I think about consciousness, the more I think it’s ridiculous to have anything but the sparsest take on how to consider it. We should eschew anthropocentrism (mind-centrism? mind projection?) at every step. While it may have some nasty-sounding consequences in areas like ethics, I can’t convince myself that conscious minds have any special place in the universe, for any reason. It’s this century’s answer to having the Earth at the centre of the universe.

To relate my point to the parable: I would say that if you have all the information about a brain-split—if you know the exact position and momentum of every particle at every point—but you’re still asking ‘yes, but when does one consciousness become two?’ then you’re asking a wrong question. The consequence of this is removing that central name tag called ‘+/-consciousness’ in your neural ‘attributes of consciousness’ network.

When you think about it, drawing a line around all your neurons etc. and saying ‘this is me’ is ridiculous. What about the dead cells? Or the cells that have no observable effect? How many neurons can I remove before you stop being ‘you’? If you claim there’s a ‘self’ inside your head that emerges, from all the wetware, unified and irreducible, you’re setting yourself up for impossible questions just like the poor Ebborians. Difficult though it is, you have to drop the mind-centrism. Am I saying that consciousness is an illusion? That’s not how I’d put it—after all, illusions are things that minds perceive. But what is, is real. Consciousness is. It’s just not ‘special’. You have no more ‘weight’ than a copy of yourself that assembles itself at random for a fraction of a second in a distant galaxy. Sorry. (By this rationale, ‘zombies’ are a nonsense too.)

When you ask ‘Why is red red?’, for me you’ve already projected your mind onto the territory. Red isn’t red in the world. It’s red in your map as a result of how your brain entangles itself with photons at a certain wavelength—an artefact of evolution—and I don’t need to explain that any more than I need to explain why you want your eggs sunny side up. Don’t mistake confusion and gaps in our knowledge for mysticism.

Really good stuff, Eliezer. Why do I get the feeling that we won’t ever get to hear what the big wonderful theory was? I’m just on the last chapter of GEB, and I’ve really enjoyed it, but I’ve no doubt your final piece will be entirely your own—it certainly deserves to be. That said, I like the sound of Jaynes, Einstein, Bayes: A Rational Steel Katana....

• A be­lated meta-re­sponse to Cale­do­nian: this is your ear­lier re­mark to which I referred. We may have no more than a ter­minolog­i­cal differ­ence. As I said above, I would (hope to) never say “A ex­ists rel­a­tive to B”, only that A was de­tectable, ra­tio­nally in­fer­able, etc., rel­a­tive to B. It’s too con­fus­ing to use “ex­is­tence” as if it only means “epistem­i­cally as­sert­ible ex­is­tence”.

• If I throw away the no­tion of sub­jec­tive an­ti­ci­pa­tion, then how do I differ­en­ti­ate the chaotic uni­verse from the or­derly one?

One can differ­en­ti­ate with the rele­vancy ra­zor. The con­clu­sion given by the rele­vancy ra­zor in this case is that it is safe to as­sume that you live in the or­derly uni­verse be­cause if you lived in the chaotic one, then you have no hope of af­fect­ing re­al­ity—no hope of achiev­ing any goal or car­ry­ing through any plan.

The rele­vancy ra­zor is a prin­ci­ple of gen­eral ap­pli­ca­bil­ity much like Oc­cam’s Ra­zor is a prin­ci­ple of gen­eral ap­pli­ca­bil­ity.

Here is a state­ment of the rele­vancy ra­zor:

Think­ing en­tails the con­tem­pla­tion of “pos­si­ble wor­lds”. It may be that you are un­sure of the na­ture of the world you find your­self in or it may be that you are fac­ing a de­ci­sion, and how you de­cide will de­ter­mine which pos­si­ble world you will end up in. In ei­ther case, you have to think about pos­si­ble wor­lds. The rele­vancy ra­zor says that you do not have to con­tem­plate any pos­si­ble world in which you—the cur­rent you, not the you of the fu­ture—can­not af­fect re­al­ity. (More­over, the greater your abil­ity to af­fect re­al­ity in a pos­si­ble world, the more at­ten­tion you should pay to that pos­si­ble world—just as the greater the prob­a­bil­ity of a pos­si­ble world, the more at­ten­tion you should pay to it.)
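The razor as stated can be read as a filter-then-weight rule over possible worlds. Here is a minimal sketch in Python; the world list, probabilities, and “influence” scores are entirely made up for illustration and are not part of the original statement:

```python
# Illustrative sketch of the "relevancy razor": discard possible worlds in
# which the current you cannot affect reality, then weight the rest by
# probability times ability-to-affect. All numbers here are invented.
worlds = [
    {"name": "orderly universe", "probability": 0.9, "influence": 1.0},
    {"name": "chaotic universe", "probability": 0.1, "influence": 0.0},
]

# Step 1: drop worlds where you have no hope of affecting reality.
relevant = [w for w in worlds if w["influence"] > 0]

# Step 2: weight the remaining worlds by probability * influence,
# renormalising so the attention weights sum to 1.
total = sum(w["probability"] * w["influence"] for w in relevant)
attention = {w["name"]: w["probability"] * w["influence"] / total
             for w in relevant}

print(attention)  # the chaotic universe receives no attention at all
```

Under this toy weighting, the chaotic universe simply drops out, which is the razor’s conclusion in the exchange above.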

One other thing. Ask­ing, What is the differ­ence in an­ti­ci­pated ex­pe­rience be­tween X and Y? is a use­ful and pow­er­ful ques­tion. I ap­plaud Eliezer in en­courag­ing its use.

But there is an­other ques­tion that is just as use­ful and pow­er­ful, namely, What can I pre­dict or con­trol in the ob­jec­tive world if X that I can­not pre­dict or con­trol if Y? (John David Gar­cia has en­couraged the use of this ques­tion since the early 1980s.)

I pre­fer the sec­ond ques­tion be­cause it does not tend to pull one into view­ing sub­jec­tive ex­pe­rience as what ul­ti­mately mat­ters.

If you’re join­ing the con­ver­sa­tion late, then hi, I’m Richard Hol­ler­ith, and I want you to be­lieve that what mat­ters in the end about you is your effect on re­al­ity, not your sub­jec­tive ex­pe­rience.

(I do not say that sub­jec­tive ex­pe­rience should be com­pletely ig­nored: sub­jec­tive ex­pe­rience can yield valuable in­for­ma­tion that is im­prac­ti­cal or too ex­pen­sive to ac­quire any other way, and that valuable in­for­ma­tion can, in turn, be used to af­fect re­al­ity, which again, is the pur­pose of life.)

• That ‘zombie’ has been branded as the former thing, of little practical concern, rather than the latter thing (which I think would be of reasonable, and possibly near-term, practical concern to us) is very annoying to me, because I think it would be a great term for the latter thing.
Then go com­plain to Chalmers, be­cause he’s the one who es­tab­lished that term! We had noth­ing to do with it.

if wor­lds are equiprob­a­ble, 1) why we’re not in a near-max­i­mum-en­tropy universe
It is highly un­likely that a given per­son who pur­chased a lot­tery ticket will win. If a per­son re­ceives data in­di­cat­ing that they have won, though, the un­like­li­ness of that out­come is not grounds for dis­card­ing the in­for­ma­tion out of hand. Some­times, peo­ple win.

It seems to me un­likely that a func­tion­ing or­ganism such as our­selves would be able to per­sist for very long in a near-max­i­mum-en­tropy uni­verse. Even if we pre­sume the lo­cal space were rel­a­tively and anoma­lously or­dered, it is far more likely for that to hap­pen at a point in time where the uni­verse as a whole is rel­a­tively or­dered than at a point where ev­ery­thing else is a mess.

• Good writing, indeed! I also love what you’ve done with the Ebborian anzrf (spoiler rot13-encoded for the benefit of other readers, since it hasn’t been mentioned in the previous comments).

The split/​re­merge at­tack on en­tities that base their an­ti­ci­pa­tions of fu­ture in­put di­rectly on how many of their fu­ture selves they ex­pect to get spe­cific in­put is ex­tremely in­ter­est­ing to me. I origi­nally thought that this should be a fairly straight­for­ward prob­lem to solve, but it has turned out a lot harder (or my un­der­stand­ing a lot more lack­ing) than I ex­pected. I think the prob­lem might be in the group of 500,003 brains dou­ble-count­ing an­ti­ci­pated in­put af­ter the merge. They don’t stay ex­actly the same through the merge phase; in fact, for each of the 500,000 brains in green rooms, the re-in­te­grated pre­vi­ously-in-green-rooms brain only de­pends to a very small part on them in­di­vi­d­u­ally. In this par­tic­u­lar case, the re-in­te­grated brain will still be very similar to each of the pre-in­te­gra­tion brains; but that is just a re­sult of the pre-in­te­gra­tion brains all be­ing very similar to each other. Treat­ing the re-in­te­grated brain as a reg­u­lar fu­ture-self for the pur­poses of an­ti­ci­pat­ing fu­ture ex­pe­rience un­der these con­di­tions seems highly iffy to me.
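The suspected double-counting can be made concrete with a toy calculation. The numbers come from the 500,000-green / 3-red scenario quoted elsewhere in this thread; the naive copy-counting rule is my own gloss on the attack, not something the comment above commits to:

```python
# Toy illustration of the split/re-merge attack on copy-counting
# anticipation. Numbers from the 500,000 green / 3 red scenario.
from fractions import Fraction

greens, reds = 500_000, 3

# Before the merge: counting each brain equally, the anticipated
# probability of "I am in a green room" is overwhelming.
p_green_before = Fraction(greens, greens + reds)

# After the greens re-merge into one Ebborian and the reds into another,
# naive copy-counting over the two post-merge brains gives only:
p_green_after = Fraction(1, 2)

print(p_green_before)  # 500000/500003
print(p_green_after)   # 1/2
```

The tension is that nothing about the merge seems like it should move the anticipated probability from nearly 1 down to 1/2, which is why treating the re-integrated brain as a regular future self looks iffy.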

• Agree with anonymous; I just can’t wait to read the whole book and deepen my understanding. With the right editing it will certainly be a new and even more important, insightful and rich book in the tradition of GEB! I can’t wait to give it to all my friends and professors, and I can’t wait to hear the reception from academia; they/we have so much to learn. You will be on TED and on EDGE in no time! Your hard work and deep thought are so precious; I think it is rather wonderful that a social ape can penetrate so deep into reality. I just think this blog form is a bit of a strange choice for such high-quality material; a wiki would be much more appropriate and easy to use. A table of contents would be of great use at this stage, when you try to recommend this stuff to other people!

About the post: I guess you are not completely ruling out the role of physics in the final understanding of consciousness, because even though QM may not play an important role, a deeper understanding of spacetime may... But hopefully it all lies in mathematics, as Hofstadter and Goertzel suggest; that would be the most elegant and also practical reality. Sadly, reality does not obey hopes...

I also hope that there is not a general bias amongst scientists familiar with physics against having consciousness interfere with physics, because they are so used to having them in different domains, because so much confusion has arisen when this distinction has not been made clear, and because so many sloppy theories are riding on this hypothesis, which is so much more intuitive. It could still be the case that fundamental physics can explain some aspect of, say, the hard problem. There seem to be two competing aesthetics in the matter of consciousness and physics, and I am still not convinced it is more than aesthetics; therefore I am holding off on taking a stance myself.

But I am so grateful Eli convinced me of the impossibility of Zombies; I used to consider epiphenomenalism, but now I know better. That leads me to the inevitable question of the functional role of qualia: why chocolate tastes good, which seems to have a causal effect on me wanting to eat it, rather than me just having chocolate-eating behaviour without the intermediate of qualia. Does a dog have qualia or just eating behaviour when it finds something good? Qualia and consciousness seem to be an internal monitoring mechanism with very precise input from evolutionarily adaptive qualia (like emotions, strong tastes, sensations of the body parts of the opposite sex), but for what purpose, and why the qualia, and what are the effects on neurocomputation? Or if it IS the neurocomputation, then why am I experiencing such a nice whole and not a lot of different processes in my brain? Why does it seem that I am only experiencing some output of the work of my brain? Why do I need to experience anything anyway? What do the resource limitations in my consciousness depend on? Why can’t I experience all my memories at the same time? Is consciousness a separate brain system (like the thalamocortical loop, a bioelectromagnetic field, or even a quantum mind), or just a product of the whole brain working together? If it is a separable brain system, what are the measurable causal effects? If it is not, what distinguishes conscious from non-conscious processing? If consciousness is a mathematical phenomenon, then still, why are qualia so rich and strongly qualitative? Compare the feeling of hunger to the colour red or to the sound of a piano. Maybe it is just the way information feels in the universe.
Which leads to the question: if all information feels itself, does a book feel the structure of the words in it? If not, then what makes information feel itself? A strange loop? Well, what about a TV set and a camera pointing towards each other? But what is it about a universe that allows such strange phenomena? Well, anything is strange to us really, and nothing is really strange to the universe. We will probably always find everything rather absurd but wonderful if we think really deeply about it, even if we are no longer as confused in our understanding as we are now. But the universe remains neutral on strangeness. “Since the beginning not one unusual thing has happened.”

• Just so you all know, Clifford Alge­bra deriva­tions of quan­tized field the­ory show why the Born Prob­a­bil­ities are a squared pro­por­tion. I’m not sure there’s an in­tu­itively satis­fy­ing ex­pla­na­tion I can give you for why this is that uses words and not math, but here’s my best try.

In math­e­mat­i­cal sys­tems with max­i­mal alge­braic com­plex­ity for their given di­men­sion­al­ity, the mul­ti­pli­ca­tion of an ob­ject by its dual pro­vides an in­var­i­ant of the sys­tem, a quan­tity which can­not be al­tered. (And all phys­i­cal field the­o­ries (ex­cept grav­ity, at this time) can be de­rived in full from the as­sump­tion of max­i­mal alge­braic com­plex­ity for 1 pos­i­tive di­men­sion and 3 nega­tive di­men­sions). [Ob­ject refers to a math­e­mat­i­cal quan­tity, in the case of the field the­o­ries we’re con­cerned with, mostly bivec­tors].

The quan­tity de­scribing time evolu­tion then (com­plex phase am­pli­tudes) must have a cor­re­spond­ing in­var­i­ant quan­tity that is the mod squared of the com­plex phase. This mod squared quan­tity, be­ing the sys­tem in­var­i­ant whose sum de­scribes ‘bench­mark’ by which one judges rel­a­tive val­ues, is then the rele­vant value for eval­u­at­ing the phys­i­cal mean­ing of time evolu­tions. So the phys­i­cal re­al­ity one would ex­pect to ob­serve in prob­a­bil­ity dis­tri­bu­tions is then the mod squared of the un­der­ly­ing quan­tity (com­plex phase am­pli­tudes) rather than the quan­tity it­self.

To explain it a different way, because I suspect the first way is not adequate without an understanding of the math.

Clifford Algebra objects (i.e. the actual constructs the universe works with, as best we can tell) do not in and of themselves contain information. In fact, they contain no uniquely identifiable information. All objects can be modified with an arbitrary global phase factor, turning them into any one of an infinite set of objects. As such, actual measurement/observation of an object is impossible. You can’t distinguish between the object being A or Ae^ib, because those are literally indistinguishable quantities. The object which could be those quantities lacks sufficient unique information to actually be one quantity or the other. So you’re shit out of luck when it comes to measuring it. But though an object may not contain unique information, the object’s mod squared does (and if this violates your intuition of how information works, may I remind you that your classical-world intuition of information counts for absolutely nothing at the group theory level). This mod squared is the lowest level of reality which contains uniquely identifiable information.
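The “A versus Ae^ib” indistinguishability claim is easy to check numerically for a single amplitude. A sketch; the amplitude 0.6 + 0.8i and the random phase are arbitrary choices of mine:

```python
# Global phase invariance: an amplitude and its phase-rotated copy differ
# as complex numbers, but share exactly the same mod-squared value.
import cmath
import math
import random

a = 0.6 + 0.8j                      # a hypothetical amplitude
b = random.uniform(0, 2 * math.pi)  # an arbitrary global phase
a_rotated = a * cmath.exp(1j * b)   # Ae^{ib}

# The objects themselves are distinct complex numbers (for almost all b),
# but the mod-squared quantities agree to floating-point precision.
assert abs(abs(a) ** 2 - abs(a_rotated) ** 2) < 1e-9
print(abs(a) ** 2)  # 1.0 for this particular choice of a
```

This is the sense in which only the mod squared carries uniquely identifiable information: no measurement on the amplitude alone can recover b.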

So the low­est level of re­al­ity at which you can mean­ingfully iden­tify time evolu­tion prob­a­bil­ities is go­ing to be de­scribed as a square quan­tity.

Be­cause the math says so.

By the way, we’re re­ally, re­ally cer­tain about this math. Un­less the uni­verse has ad­di­tional spa­tial-tem­po­ral di­men­sions we don’t know about (and I kind of doubt that) and only con­tains par­tial alge­braic com­plex­ity in that space (and I re­ally, re­ally doubt that), this is it. There is no pos­si­ble ad­di­tional math­e­mat­i­cal struc­ture with which one could de­scribe our uni­verse that is not con­tained within the Cl_13 alge­bra. There is liter­ally no math­e­mat­i­cal way to de­scribe our uni­verse which ad­e­quately con­tains all of the struc­ture we have ob­served in elec­tro­mag­netism (and weak force and strong force and Higgs force) which does not im­ply this mod squared in­var­i­ant prop­erty as a con­se­quence.

Fur­ther­more, even be­fore this mod squared prop­erty was un­der­stood as a con­se­quence of full alge­braic com­plex­ity, Emmy Noether had de­scribed and rigor­ously proved this re­la­tion­ship as the epony­mous Noether’s the­o­rem, con­firmed its val­idity against known the­o­ries, and used it to pre­dict fu­ture re­sults in field the­ory. So this no­tion is pretty well backed up by a cen­tury of ex­per­i­men­tal ev­i­dence too.

TL;DR: We (physicists who work with both differential geometries and quantum field theory, and who take an interest in group theory fundamentals beyond what is needed to do conventional experimental or theory work) have known why the Born Probabilities are a squared proportion since, oh, probably the 1930s? Right after Dirac first published the Dirac Equation? It’s a pretty simple thing to conclude from the observation that quantum amplitudes are a bivector quantity. But you’ll still see physics textbooks describe it as a mystery and hear it pondered over philosophically, because propagation of the concept would require a base of people educated in Clifford Algebras to propagate through. And such a cohesive group of people just does not exist.

• I don’t know much about Clifford algebras. But do you really need them here? I thought the standard formulation of abstract quantum mechanics was that every system is described by a Hilbert space, the state of a system is described by a unit vector, and evolution of the system is given by unitary transformations. The Born probabilities are concerned with the question: if the state of the universe is $\sum_i c_iv_i$ where the $v_i$ are orthogonal unit vectors representing macroscopically distinct outcome states, then what is the subjective probability of making observations compatible with the state $v_i$? The only reasonable answer to this is $|c_i|^2$, because it is the only function of the $c_i$ that’s guaranteed to sum to $1$ based on the setup. (I don’t mean this as an absolute statement; you can construct counterexamples, but they are not natural.) By the way, for those who don’t know already, the reason that $|c_i|^2$ is guaranteed to sum to $1$ is that since the state vector $\sum_i c_iv_i$ is a unit vector,

$1=\left\|\sum_i c_iv_i\right\|^2=\sum_{i,j}\langle c_iv_i,c_jv_j\rangle=\sum_{i,j}c_i\overline{c_j}\langle v_i,v_j\rangle=\sum_{i,j}c_i\overline{c_j}\delta_{i,j}=\sum_i c_i\overline{c_i}=\sum_i|c_i|^2.$

Of course, most of the time when peo­ple worry about the Born prob­a­bil­ities they are wor­ried about philo­soph­i­cal is­sues rather than jus­tify­ing the nat­u­ral­ness of the squared mod­u­lus mea­sure.
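The normalization identity can also be sanity-checked numerically. A sketch in plain Python; the dimension 8 is an arbitrary choice of mine:

```python
# Numerical check that the Born weights |c_i|^2 of a unit state vector
# sum to 1, as the inner-product expansion above guarantees.
import math
import random

# Draw random complex amplitudes, then normalise to get a unit vector.
c = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(8)]
norm = math.sqrt(sum(abs(x) ** 2 for x in c))
c = [x / norm for x in c]

# The squared moduli then automatically form a probability distribution.
born = [abs(x) ** 2 for x in c]
print(sum(born))  # ~1.0, up to floating-point rounding
```

This is exactly the “guaranteed to sum to 1 based on the setup” property: normalisation of the state vector is the same fact as normalisation of the squared-modulus measure.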

• It’s a bit late, but the Eb­bo­ri­ans still need to work some kinks out of their quan­tum me­chan­ics equiv­a­lent. Am­pli­tude is not a mea­sure, and so it can’t be the analogue of thick­ness.

• So now sup­pose that my 500,000 green selves are re­united into one Eb­bo­rian, and my 3 red selves are re­united into one Eb­bo­rian. Have I just sent nearly all of my “sub­jec­tive prob­a­bil­ity” into the green fu­ture self, even though it is now only one of two?
With only a little more work, you can see how a temporary expenditure of computing power, or a nicely refined brain-splitter and a dose of anesthesia, would let you have a high subjective probability of winning any lottery. At least any lottery that involved splitting you into pieces.

I don’t un­der­stand this part; some­one ex­plain it to me, please!

• Think of the Monty Hall prob­lem. Ac­cord­ing to the lines just above those, those 500,000 selves have greater prob­a­bil­ity mass than the 3 red selves. But then com­bin­ing them, you have a sin­gle green self with a greater prob­a­bil­ity mass than the sin­gle red self.

For the other part, see The An­thropic Trilemma about the Quan­tum Lot­tery thought ex­per­i­ment.

• An­noy­ing ques­tion:

How does an Eb­bo­rian fis­sion into more than 2 parts? Surely there aren’t enough or­gans to go round! Un­less you al­low for un­con­scious rounds of re­growth and re­fis­sion­ing...

• Bravo, great post. It’s an ar­gu­ment I’ve been try­ing to make for­ever but never seem to be able to do so in a way that peo­ple un­der­stand. You seem to have man­aged what was be­yond me.

Hol­ler­ith: What sub­jec­tive ex­pe­riences will ex­ist in a par­tic­u­lar world is an ob­jec­tive ques­tion. Now that is in­deed differ­ent from what ex­pe­riences I should ex­pect to have but still some­thing we can’t solve.

• Oh crap: I thought I was be­ing care­ful, pre­view­ing be­fore post­ing, but my com­ment got jum­bled. I’ll up­load the un­jum­bled ver­sion now, and hope that a mod­er­a­tor will delete what I just up­loaded (and this short “oh crap” too).


• I know I already have a dis­agree­ment with Cale­do­nian in this mat­ter, be­cause ear­lier this month he wrote here that ex­is­tence is rel­a­tive and de­pends on the pos­si­bil­ity of in­ter­ac­tion, some­thing I would never say, be­cause it con­fuses ex­is­tence per se with some­thing like knowa­bil­ity

No. That is completely wrong. You are confusing what things ARE with what we THINK them to be. This is a common problem when dealing with models of reality (in which we exist), models in which we (as external observers) ‘know’ everything about the model; we tend to confuse the ‘we’ within the model, which is ignorant, with the ‘we’ looking at the model, which is essentially omniscient. (This is a simplification, as the full details are beyond the scope of this thread.)

Whether a thing ex­ists has noth­ing to do with what we know. Whether we can as­sert that a thing ex­ists has ev­ery­thing to do with what we know. It is en­tirely pos­si­ble for prop­er­ties of the sys­tem to be for­ever un­know­able for par­tic­u­lar en­tities within that sys­tem, or even en­tities in gen­eral within that sys­tem, and still ex­ist. But those en­tities are not en­ti­tled to make any claims about the ex­is­tence of said prop­er­ties.

• Tar­leton: Think­ing like re­al­ity would mean aban­don­ing “per­sonal con­ti­nu­ity” and just talk­ing about fre­quen­cies of ex­pe­riences, and you can still ex­press your sur­prise at find­ing such an or­derly ex­pe­rience.

Sure, if I had a well-defined way to talk about fre­quency (or weight or mea­sure) of ex­pe­rience, it would be a lot eas­ier to toss “per­sonal con­ti­nu­ity” out the win­dow. I want to save the no­tion of con­di­tional mea­sure if I can, but I sup­pose I could live with­out it.

• Un­known: sup­pose I say there’s a cen­tral sin­gu­lar­ity in each galaxy, so there’s one sin­gu­lar­ity to be­gin with, and two at the end, but there’s no ex­act mo­ment when one sin­gu­lar­ity be­comes two. That’s what you’re do­ing when you say there’s an ob­server in each world, and then vague out on the con­cept of “world”.

Does the truth of a sen­tence ex­ist? A proper dis­cus­sion of that might ex­plode the bound­aries of this blog again. But I’ll just say that I had os­ten­sive defi­ni­tions in mind, when I said that the refer­ent of a con­cept may be known even when its na­ture is not. If I point to a light in the sky and say, “that’s Venus”, you know what “Venus” refers to, even though you may not know much about it. And both “ex­is­tence” and “truth” similarly ad­mit of “defi­ni­tion”-by-ex­am­ple, i.e. by ex­hi­bi­tion of an in­stance.

• Why do you need an­ti­ci­pa­tion to ex­press sur­prise at the or­der­li­ness of the uni­verse? Think­ing like re­al­ity would mean aban­don­ing “per­sonal con­ti­nu­ity” and just talk­ing about fre­quen­cies of ex­pe­riences, and you can still ex­press your sur­prise at find­ing such an or­derly ex­pe­rience.

• Nick, you are right about the definite pro­por­tion, but this doesn’t re­quire definite quan­tities, since the pro­por­tion 50 to 100 is the same as the pro­por­tion 100 to 200. So an­thropic rea­son­ing only re­quires definite pro­por­tions, not definite quan­tities.

• I’m sym­pa­thetic to Mitchell’s po­si­tion, and would note that an­thropic rea­son­ing re­quires a definite an­swer to “what ob­servers ex­ist and in what pro­por­tion?”

• “But can I first ask: If a per­son said that ac­cord­ing to their the­ory of the uni­verse, at one time you have one of some­thing, and later on you have many copies of that same thing, but there’s no par­tic­u­lar mo­ment in time when the one be­comes the many, and that doesn’t mat­ter be­cause the some­thing only has a vague, fuzzy ex­is­tence… wouldn’t you think that the the­ory might have a few prob­lems, or at least be miss­ing a part?”

No. Robin im­plic­itly offered an ex­am­ple: if a galaxy were to di­vide into two galax­ies, it would be im­pos­si­ble to as­sign an ex­act mo­ment when the one be­came two. Nonethe­less, there clearly would be a time when it was one, and clearly a time when it was two.

As for the vague­ness of ex­is­tence, it is also vague in hav­ing an un­der­de­ter­mined refer­ent. For ex­am­ple, does the truth of a state­ment ex­ist? If so, then “ex­ists” is un­der­de­ter­mined, be­cause “truth” is un­der­de­ter­mined. The lat­ter is nec­es­sary be­cause if you at­tempt to give a com­plete defi­ni­tion of truth, you will fall into con­tra­dic­tions (e.g. “this state­ment is not true”.)

• Also, about the chocolate eating: you can get addicted, so that you no longer even need the qualia to keep on eating it. There seems to be a distinction between qualia-induced chocolate eating and addictive chocolate eating, where you continue eating although it does not taste so good anymore, which, if you notice the lameness of the qualia, may make you stop eating. Why is that? If qualia were a mere confusion, there should not be such distinctions. It seems not rational to spend energy on producing qualia if they are not useful in any sense. But useful for what? Still, qualia affecting our decisions seems rather impossible to me, but that has to be a fact about my own confusion, not about the territory.

• Hope­fully Anony­mous:

It means that I used to believe the experience of consciousness/qualia/the hard problem is just like the sound of the heart, i.e. without any functional role. I never thought zombies would really be possible... just in principle. And I had my doubts even then. Don’t laugh at me, because the functional role of qualia is not easy to understand.

poke:

I think you missed the point here. The question is why chocolate eating feels like anything; it seems that the qualia should be unnecessary for the brain function of chocolate-eating behaviour. The same goes for orgasm. They seem to be things that guide one part of the brain system with qualia input from another, in order to steer our behaviour towards things that statistically make us survive and reproduce. If qualia have no functional role, then the zombie argument is sound. Or this is how I have understood it, attending consciousness studies here at Skövde with professor Antti Revonsuo, who published his book Inner Presence with MIT Press. If qualia were just a confusion, it seems highly unlikely that evolution would have spent any time making qualia just for the fun of it; qualia have to be a real event taking place in the universe, a real event that needed some energy and information content to produce.

To deny qualia to­day is like the be­hav­iorists who de­nied cog­ni­tive pro­cesses yes­ter­day. You are smarter than that!

Or it may be that some of you actually don’t experience qualia; sometimes I encounter people who I really doubt experience qualia in the normal way, especially in the autism spectrum. So if you suffer from any disorder, please mention that if you are talking about qualia. But my guess is that everybody has qualia; it may just be easier to deny them if they are not connected to emotions.

• In to­day’s post I would like to see De’da and/​or Wa’da ask Ha’ro, if wor­lds are equiprob­a­ble, 1) why we’re not in a near-max­i­mum-en­tropy uni­verse, and 2) if we can win the lot­tery by burn­ing stuff af­ter­ward. (Maybe there are le­gi­t­i­mate an­swers, I don’t know.)

• Robin Brandt,

I think “chocolate eating behavior” already happens “without the intermediate of qualia.” We’ve just confused the issue by associating “the experience I have when eating chocolate” with things generally considered “good.” Your experience of eating chocolate is just the sum of cognitive and physiological changes associated with eating chocolate. If we performed an experiment where you were subjected to a pain stimulus but then we subtracted, one by one, the various physiological and cognitive aspects of pain, I think you would be convinced that there isn’t a pain qualia per se (that it is, rather, the sum of these aspects, and no one of them is more or less pain-proper than the other).

• “But I am so grate­ful Eli con­vinced me of the im­pos­si­bil­ity of Zom­bies”. What does that re­ally mean? That he con­vinced of the im­pos­si­bil­ity of some­thing phys­i­cally iden­ti­cal to you, that be­haves just like you, that claims to be con­scious? Be­cause that seems to be a silly con­struct well be­yond our dis­cern­ment tech­nol­ogy (whether or not some­thing is “phys­i­cally iden­ti­cal” to you), and con­nect­edly, not of much prac­ti­cal in­ter­est.

Or did he convince you of what some people seem to want to believe, even if the evidence doesn’t extend that far: that it is impossible, or unlikely to the point of near impossibility, that something which would convince the smartest of us today that it’s conscious may not actually have your (or, more to the point, my) subjective conscious experience, but may instead be the real-time equivalent of sleepwalking and sleep-talking, or of an alcohol blackout?

That “zombie” has been branded as the former thing, of little practical concern, rather than the latter thing, which I think would be of reasonable (and possibly near-term) practical concern to us, is very annoying to me, because I think it would be a great term for the latter thing.

• Can you tell me why the subjective probability of finding ourselves in a side of the split world should be exactly proportional to the square of the thickness of that side?

Po’mi runs a trillion experiments, each of which has a one-trillionth 4D-thickness of saying B but is otherwise A. In his “mainline probability,” he sees all trillion experiments coming up A. (If he ran a sextillion experiments he’d see about 1 come up B.)

Presumably an external four-dimensional observer sees it differently: he sees only one-trillionth of Po’mi coming up all-A, and the rest of Po’mi saw about 1 B and are huddled in a corner crying that the universe has no order. (Maybe the 4D observer would be unable to see Po’mi at all, because Po’mi and all other inhabitants of the lawful “mainline probability” that we’re talking about have almost infinitesimal thickness from the 4D observer’s point of view.)

If I were Po’mi, I would start looking for a fifth dimension.
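The arithmetic behind this comment can be restated as a toy calculation. The numbers below are scaled-down stand-ins (not the comment’s actual one-trillionth thickness or trillion experiments), but the contrast is the same: counting observer-moments linearly in 4D thickness predicts about one B outcome, while squared (Born-rule) weighting makes the mainline see essentially all A.

```python
import math

# Scaled-down stand-in numbers for Po'mi's experiments.
t = 1e-3   # 4D-thickness fraction of the B-branch in each experiment
n = 1000   # number of independent experiments

# Counting observer-measure in proportion to thickness (linear in t):
# expected number of B outcomes across all experiments.
expected_b_linear = n * t        # about 1: most of 4D-Po'mi sees ~1 B

# Weighting branches by squared thickness (Born rule) instead:
expected_b_born = n * t ** 2     # about 0.001: the "mainline" sees all A

print(expected_b_linear, expected_b_born)
```

The puzzle in the comment is exactly this mismatch: the linear count says a thickness-weighted external observer should expect roughly one B, while Po’mi’s Born-rule experience shows none.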

• You could write a fiction short story series (like this, but refined for print) with a summary at the end of what you’re trying to explain. I think it would be worth buying, and it could combine entertainment with education.

This story (and maybe others) could be a bit like the 10,000 year old man movie.

• Caledonian, ordinarily that would be true, but the point is that in the MWI the number of worlds increases exponentially as a function of entropy, so very soon high-entropy worlds outnumber low-entropy worlds by a factor of (if I’m not mistaken) ten-to-the-power-avogadrillions. As just one example, there should be a lot more Everett worlds where we used up all the fossil fuels than where we didn’t. AFAIK that’s still true if you consider only non-mangled worlds, but maybe I misunderstood the model and the number of non-mangled worlds actually stays constant under entropy increase.
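The size of that “ten-to-the-power-avogadrillions” factor can be sketched with Boltzmann’s relation: if world count tracks microstate multiplicity, then an entropy increase ΔS multiplies the count by exp(ΔS / k_B). The ΔS below is an assumed illustrative value, not a measured one, but even 1 J/K of macroscopic entropy gives a multiplicity ratio whose exponent is on the order of 10²².

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
delta_S = 1.0        # assumed illustrative entropy increase, J/K

# The multiplicity ratio is exp(delta_S / k_B); that number is far too
# large to compute directly, so take its base-10 logarithm instead.
log10_ratio = delta_S / (k_B * math.log(10))

print(f"world-count ratio ~ 10^{log10_ratio:.2e}")  # exponent ~3e22
```

So the ratio of high-entropy to low-entropy worlds is roughly 10^(3×10²²): an exponent itself of Avogadro-number scale, which is what the coinage in the comment is gesturing at.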