# Timeless Identity

People have asked me, “What practical good does it do to discuss quantum physics or consciousness or zombies or personal identity? I mean, what’s the application for me in real life?”

Before the end of today’s post, we shall see a real-world application with practical consequences, for you, yes, you in today’s world. It is built upon many prerequisites and deep foundations; you will not be able to tell others what you have seen, though you may (or may not) want desperately to tell them. (Short of having them read the last several months of OB.)

In No Individual Particles we saw that the intuitive conception of reality as little billiard balls bopping around is entirely and absolutely wrong; the basic ontological reality, to the best of anyone’s present knowledge, is a joint configuration space. These configurations have mathematical identities like “A particle here, a particle there”, rather than “particle 1 here, particle 2 there”, and the difference is experimentally testable. What might appear to be a little billiard ball, like an electron caught in a trap, is actually a multiplicative factor in a wavefunction that happens to approximately factor. The factorization of 18 includes two factors of 3, not one factor of 3, but this doesn’t mean the two 3s have separate individual identities—quantum mechanics is sort of like that. (If that didn’t make any sense to you, sorry; you need to have followed the series on quantum physics.)

In Identity Isn’t In Specific Atoms, we took this counterintuitive truth of physical ontology and proceeded to kick hell out of an intuitive concept of personal identity that depends on being made of the “same atoms”—the intuition that you are the same person if you are made out of the same pieces. But because the brain doesn’t repeat its exact state (let alone the whole universe), the joint configuration space which underlies you is nonoverlapping from one fraction of a second to the next. Or even from one Planck interval to the next. I.e., “you” of now and “you” of one second later do not have in common any ontologically basic elements with a shared persistent identity.

Just from standard quantum mechanics, we can see immediately that some of the standard thought experiments used to pump intuitions in philosophical discussions of identity are physical nonsense. For example, there is a thought experiment that runs like this:

“The Scanner here on Earth will destroy my brain and body, while recording the exact states of all my cells. It will then transmit this information by radio. Travelling at the speed of light, the message will take three minutes to reach the Replicator on Mars. This will then create, out of new matter, a brain and body exactly like mine. It will be in this body that I shall wake up.”

This is Derek Parfit in the excellent Reasons and Persons, p. 199—note that Parfit is describing thought experiments, not necessarily endorsing them.

There is an argument which Parfit describes (but does not himself endorse), and which I have seen many people spontaneously invent, which says (not a quote):

Ah, but suppose an improved Scanner were invented, which scanned you non-destructively, but still transmitted the same information to Mars. Now, clearly, in this case, you, the original, have simply stayed on Earth, and the person on Mars is only a copy. Therefore this teleporter is actually murder and birth, not travel at all—it destroys the original, and constructs a copy!

Well, but who says that if we build an exact copy of you, one version is the privileged original and the other is just a copy? Are you under the impression that one of these bodies is constructed out of the original atoms—that it has some kind of physical continuity the other does not possess? But there is no such thing as a particular atom, so the original-ness or new-ness of the person can’t depend on the original-ness or new-ness of the atoms.

(If you are now saying, “No, you can’t distinguish two electrons yet, but that doesn’t mean they’re the same entity—” then you have not been following the series on quantum mechanics, or you need to reread it. Physics does not work the way you think it does. There are no little billiard balls bouncing around down there.)

If you further realize that, as a matter of fact, you are splitting all the time due to ordinary decoherence, then you are much more likely to look at this thought experiment and say: “There is no copy; there are two originals.”

Intuitively, in your imagination, it might seem that one billiard ball stays in the same place on Earth, and another billiard ball has popped into place on Mars; so one is the “original”, and the other is the “copy”. But at a fundamental level, things are not made out of billiard balls.

A sentient brain constructed to atomic precision, and copied with atomic precision, could undergo a quantum evolution along with its “copy”, such that, afterward, there would exist no fact of the matter as to which of the two brains was the “original”. In some Feynman diagrams they would exchange places, in some Feynman diagrams not. The two entire brains would be, in aggregate, identical particles with no individual identities.

Parfit, having discussed the teleportation thought experiment, counters the intuitions of physical continuity with a different set of thought experiments:

“Consider another range of possible cases: the Physical Spectrum. These cases involve all of the different possible degrees of physical continuity...

“In a case close to the near end, scientists would replace 1% of the cells in my brain and body with exact duplicates. In the case in the middle of the spectrum, they would replace 50%. In a case near the far end, they would replace 99%, leaving only 1% of my original brain and body. At the far end, the ‘replacement’ would involve the complete destruction of my brain and body, and the creation out of new organic matter of a Replica of me.”

(Reasons and Persons, p. 234.)

Parfit uses this to argue against the intuition of physical continuity pumped by the first experiment: if your identity depends on physical continuity, where is the exact threshold at which you cease to be “you”?

By the way, although I’m criticizing Parfit’s reasoning here, I really liked Parfit’s discussion of personal identity. It really surprised me. I was expecting a rehash of the same arguments I’ve seen on transhumanist mailing lists over the last decade or more. Parfit gets much further than I’ve seen the mailing lists get. This is a sad verdict for the mailing lists. And as for Reasons and Persons, it well deserves its fame.

But although Parfit executed his arguments competently and with great philosophical skill, those two particular arguments (Parfit has lots more!) are doomed by physics.

There just is no such thing as “new organic matter” that has a persistent identity apart from “old organic matter”. No fact of the matter exists, as to which electron is which, in your body on Earth or your body on Mars. No fact of the matter exists, as to how many electrons in your body have been “replaced” or “left in the same place”. So both thought experiments are physical nonsense.

Parfit seems to be enunciating his own opinion here (not Devil’s advocating) when he says:

“There are two kinds of sameness, or identity. I and my Replica are qualitatively identical, or exactly alike. But we may not be numerically identical, one and the same person. Similarly, two white billiard balls are not numerically but may be qualitatively identical. If I paint one of these balls red, it will cease to be qualitatively identical with itself as it was. But the red ball that I later see and the white ball that I painted red are numerically identical. They are one and the same ball.” (p. 201.)

In the human imagination, the way we have evolved to imagine things, we can imagine two qualitatively identical billiard balls that have a further fact about them—their persistent identity—that makes them distinct.

But it seems to be a basic lesson of physics that “numerical identity” just does not exist. Where “qualitative identity” exists, you can set up quantum evolutions that refute the illusion of individuality—Feynman diagrams that sum over different permutations of the identicals.

We should always have been suspicious of “numerical identity”, since it was not experimentally detectable; but physics swoops in and drop-kicks the whole argument out the window.

Parfit, p. 241:

“Reductionists admit that there is a difference between numerical identity and exact similarity. In some cases, there would be a real difference between some person’s being me, and his being someone else who is merely exactly like me.”

This reductionist admits no such thing.

Parfit even describes a wise-seeming reductionist refusal to answer questions as to when one person becomes another, when you are “replacing” the atoms inside them. P. 235:

(The reductionist says:) “The resulting person will be psychologically continuous with me as I am now. This is all there is to know. I do not know whether the resulting person will be me, or will be someone else who is merely exactly like me. But this is not, here, a real question, which must have an answer. It does not describe two different possibilities, one of which must be true. It is here an empty question. There is not a real difference here between the resulting person’s being me, and his being someone else. This is why, even though I do not know whether I am about to die, I know everything.”

Almost but not quite reductionist enough! When you master quantum mechanics, you see that, in the thought experiment where your atoms are being “replaced” in various quantities by “different” atoms, nothing whatsoever is actually happening—the thought experiment itself is physically empty.

So this reductionist, at least, triumphantly says—not, “It is an empty question; I know everything that there is to know, even though I don’t know if I will live or die”—but simply, “I will live; nothing happened.”

This whole episode is one of the main reasons why I hope that when I really understand matters such as these, and they have ceased to be mysteries unto me, I will be able to give definite answers to questions that seem like they ought to have definite answers.

And it is a reason why I am suspicious of philosophies that too early—before the dispelling of mystery—say, “There is no answer to the question.” Sometimes there is no answer, but then the absence of the answer comes with a shock of understanding, a click like thunder, that makes the question vanish in a puff of smoke. As opposed to a dull empty sort of feeling, as of being told to shut up and stop asking questions.

And another lesson: Though the thought experiment of having atoms “replaced” seems easy to imagine in the abstract, anyone knowing a fully detailed physical visualization would have immediately seen that the thought experiment was physical nonsense. Let zombie theorists take note!

Additional physics can shift our view of identity even further:

In Timeless Physics, we looked at a speculative, but even more beautiful view of quantum mechanics: We don’t need to suppose the amplitude distribution over the configuration space is changing, since the universe never repeats itself. We never see any particular joint configuration (of the whole universe) change amplitude from one time to another; from one time to another, the universe will have expanded. There is just a timeless amplitude distribution (aka wavefunction) over a configuration space that includes compressed configurations of the universe (early times) and expanded configurations of the universe (later times).

Then we will need to discover people and their identities embodied within a timeless set of relations between configurations that never repeat themselves, and never change from one time to another.

As we saw in Timeless Beauty, timeless physics is beautiful because it would make everything that exists either perfectly global—like the uniform, exceptionless laws of physics that apply everywhere and everywhen—or perfectly local—like points in the configuration space that only affect or are affected by their immediate local neighborhood. Everything that exists fundamentally would be qualitatively unique: there would never be two fundamental entities that have the same properties but are not the same entity.

(Note: The you on Earth, and the you on Mars, are not ontologically basic. You are factors of a joint amplitude distribution that is ontologically basic. Suppose the integer 18 exists: the factorization of 18 will include two factors of 3, not one factor of 3. This does not mean that inside the Platonic integer 18 there are two little 3s hanging around with persistent identities, living in different houses.)

We also saw in Timeless Causality that the end of time is not necessarily the end of cause and effect; causality can be defined (and detected statistically!) without mentioning “time”. This is important because it preserves arguments about personal identity that rely on causal continuity rather than “physical continuity”.

Previously I drew this diagram of you in a timeless, branching universe:

To understand many-worlds: The gold head only remembers the green heads, creating the illusion of a unique line through time, and the intuitive question, “Where does the line go next?” But it goes to both possible futures, and both possible futures will look back and see a single line through time. In many-worlds, there is no fact of the matter as to which future you personally will end up in. There is no copy; there are two originals.

To understand timeless physics: The heads are not popping in and out of existence as some Global Now sweeps forward. They are all just there, each thinking that now is a different time.

In Timeless Causality I drew this diagram:

This was part of an illustration of how we could statistically distinguish left-flowing causality from right-flowing causality—an argument that cause and effect could be defined relationally, even in the absence of a changing global time. And I said that, because we could keep cause and effect as the glue that binds configurations together, we could go on trying to identify experiences with computations embodied in flows of amplitude, rather than having to identify experiences with individual configurations.

But both diagrams have a common flaw: they show discrete nodes, connected by discrete arrows. In reality, physics is continuous.

So if you want to know “Where is the computation? Where is the experience?” my best guess would be to point to something like a directional braid:

This is not a braid of moving particles. This is a braid of interactions within close neighborhoods of timeless configuration space.

Every point intersected by the red line is unique as a mathematical entity; the points are not moving from one time to another. However, the amplitude at different points is related by physical laws; and there is a direction of causality to the relations.

You could say that the amplitude is flowing, in a river that never changes, but has a direction.

Embodied in this timeless flow are computations; within the computations, experiences. The experiences’ computations’ configurations might even overlap each other:

In the causal relations covered by rectangle 1, there would be one moment of Now; in the causal relations covered by rectangle 2, another moment of Now. There is a causal direction between them: 1 is the cause of 2, not the other way around. The rectangles overlap—though I really am not sure if I should be drawing them with overlap or not—because the computations are embodied in some of the same configurations. Or if not, there is still causal continuity because the end state of one computation is the start state of another.

But on an ontologically fundamental level, nothing with a persistent identity moves through time.

Even the braid itself is not ontologically fundamental; a human brain is a factor of a larger wavefunction that happens to factorize.

Then what is preserved from one time to another? On an ontologically basic level, absolutely nothing.

But you will recall that I earlier argued that any perturbation which does not disturb your internal narrative almost certainly cannot disturb whatever is the true cause of your saying “I think therefore I am”—this is why you can’t leave a person physically unaltered and subtract their consciousness. When you look at a person on the level of organization of neurons firing, anything which does not disturb, or only infinitesimally disturbs, the pattern of neurons firing—such as flipping a switch from across the room—ought not to disturb your consciousness, or your personal identity.

If you were to describe the brain on the level of neurons and synapses, then this description of the factor of the wavefunction that is your brain would have a very great deal in common across different cross-sections of the braid. The pattern of synapses would be “almost the same”—that is, the description would come out almost the same—even though, on an ontologically basic level, nothing that exists fundamentally is held in common between them. The internal narrative goes on, and you can see it within the vastly higher-level view of the firing patterns in the connections of synapses. The computational pattern computes, “I think therefore I am”. The narrative says, today and tomorrow, “I am Eliezer Yudkowsky, I am a rationalist, and I have something to protect.” Even though, in the river that never flows, not a single drop of water is shared between one time and another.

If there’s any basis whatsoever to this notion of “continuity of consciousness”—I haven’t quite given up on it yet, because I don’t have anything better to cling to—then I would guess that this is how it works.

Oh… and I promised you a real-world application, didn’t I?

Well, here it is:

Many throughout time, tempted by the promise of immortality, have consumed strange and often fatal elixirs; they have tried to bargain with devils that failed to appear; and done many other silly things.

But like all superpowers, long-range life extension can only be acquired by seeing, with a shock, that some way of getting it is perfectly normal.

If you can see the moments of now braided into time, the causal dependencies of future states on past states, the high-level pattern of synapses and the internal narrative as a computation within it—if you can viscerally dispel the classical hallucination of a little billiard ball that is you, and see your nows strung out in the river that never flows—then you can see that signing up for cryonics, being vitrified in liquid nitrogen when you die, and having your brain nanotechnologically reconstructed fifty years later, is actually less of a change than going to sleep, dreaming, and forgetting your dreams when you wake up.

You should be able to see that, now, if you’ve followed through this whole series. You should be able to get it on a gut level—that being vitrified in liquid nitrogen for fifty years (around 3e52 Planck intervals) is not very different from waiting an average of 2e26 Planck intervals between neurons firing, on the generous assumption that there are a hundred trillion synapses firing a thousand times per second. You should be able to see that there is nothing preserved from one night’s sleep to the morning’s waking, which cryonic suspension does not preserve also. Assuming the vitrification technology is good enough for a sufficiently powerful Bayesian superintelligence to look at your frozen brain, and figure out “who you were” to the same resolution that your morning’s waking self resembles the person who went to sleep that night.
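(A quick back-of-the-envelope check of those two figures, as a minimal sketch—assuming the generous synapse count and firing rate stated above, and a Planck time of about 5.39e-44 seconds:)

```python
# Back-of-the-envelope check of the Planck-interval comparison above.
PLANCK_TIME = 5.39e-44               # seconds per Planck interval
SECONDS_PER_YEAR = 3.15e7

# Fifty years of cryonic suspension, measured in Planck intervals:
suspension_intervals = 50 * SECONDS_PER_YEAR / PLANCK_TIME
print(f"{suspension_intervals:.0e}")  # ~3e+52

# Average wait between successive firings anywhere in the brain,
# assuming 1e14 synapses each firing 1e3 times per second:
firings_per_second = 1e14 * 1e3
wait_intervals = (1 / firings_per_second) / PLANCK_TIME
print(f"{wait_intervals:.0e}")        # ~2e+26
```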

Do you know what it takes to securely erase a computer’s hard drive? Writing it over with all zeroes isn’t enough. Writing it over with all zeroes, then all ones, then a random pattern, isn’t enough. Someone with the right tools can still examine the final state of a section of magnetic memory, and distinguish the state, “This was a 1 written over by a 1, then a 0, then a 1” from “This was a 0 written over by a 1, then a 0, then a 1”. The best way to securely erase a computer’s hard drive is to destroy it with thermite.

I really don’t think that carefully vitrifying a brain to prevent ice crystal formation and then freezing it in liquid nitrogen is going to be a secure erase procedure, if you can examine atomic-level differences in the synapses.

Someone hears about cryonics and thinks for 10 seconds and says, “But if you’re frozen and then revived, are you really the same person?”

And if they happened to know all about quantum physics and could apply the abstract knowledge to real life, and they had followed the whole debate about zombies and resolved it against epiphenomenalism in general, then they would be able to visualize the braids in the river that never flows, and say, “Yes.”

But this knowledge is not common.

So they die.

There are numerous other reasons that people seize on, when they search for a rationalization for a negative initial flinch against cryonics. And numerous other knowledges that would be required to answer those objections. “But wouldn’t it be boring to live such a long time?” (Can be answered if you know hedonic psychology, and have developed a theory of fun, and can visualize accessible fun spaces that increase in volume with increasing intelligence.) “Why would future civilizations bother to revive me?” (Requires understanding either economic growth diminishing the cost, or knowledge of history and how societies have become kinder over time, or knowing about Friendly AI.) “Isn’t it wrong to live so long?” (Requires knowing about the “sour grapes” bias. See also transhumanism as simplified humanism and the meaning that immortality gives to life.) Then there’s the meta-knowledge of how to question all these deeply wise cached thoughts that pop into your head about the futility of life; and the ability to do things that might make people look at you weird, and so on...

Some of these are series of posts I haven’t done yet. But if you anticipate updating your probabilities when you read those future posts, then you should update them now. Or, if you prefer, trust me:

If you would rather live happily ever after, than die, and you are willing to spend between $300 and $2000 per year(*) to express this preference, then sign up for cryonics.

If you’ve been cryocrastinating, putting off signing up for cryonics “until later”, don’t think that you’ve “gotten away with it so far”. Many worlds, remember? There are branched versions of you that are dying of cancer, and not signed up for cryonics, and it’s too late for them to get life insurance.

It might encourage you to get around to getting health insurance, too, or wearing a helmet on your motorcycle, or whatever: don’t think you’ve gotten away with it so far.

And if you’re planning to play the lottery, don’t think you might win this time. A vanishingly small fraction of you wins, every time. So either learn to discount small fractions of the future by shutting up and multiplying, or spend all your money on lottery tickets—your call.
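(To make “shut up and multiply” concrete, here is a minimal sketch with a hypothetical ticket price, jackpot, and odds—none of these numbers come from the post, and any realistic choice gives the same moral:)

```python
# Expected value of one lottery ticket, under assumed, hypothetical numbers.
ticket_price = 2.00      # dollars (assumption)
jackpot = 100e6          # dollars (assumption)
p_win = 1 / 300e6        # probability of winning (assumption)

expected_winnings = p_win * jackpot
print(expected_winnings)                 # ~0.33 dollars
print(expected_winnings - ticket_price)  # ~-1.67 dollars per ticket
```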

It is a very important lesson in rationality, that at any time, the Environment may suddenly ask you almost any question, which requires you to draw on 7 different fields of knowledge. If you missed studying a single one of them, you may suffer arbitrarily large penalties up to and including capital punishment. You can die for an answer you gave in 10 seconds, without realizing that a field of knowledge existed of which you were ignorant.

This is why there is a virtue of scholarship.

150,000 people die every day. Some of those deaths are truly unavoidable, but most are the result of inadequate knowledge of cognitive biases, advanced futurism, and quantum mechanics.(**)

If you disagree with my premises or my conclusion, take a moment to consider nonetheless, that the very existence of an argument about life-or-death stakes, whatever position you take in that argument, constitutes a sufficient lesson on the sudden relevance of scholarship.

(*) The way cryonics works is that you get a life insurance policy, and the policy pays for your cryonic suspension. The Cryonics Institute is the cheapest provider, Alcor is the high-class one. Rudi Hoffman set up my own insurance policy, with CI. I have no affiliate agreements with any of these entities, nor, to my knowledge, do they have affiliate agreements with anyone. They’re trying to look respectable, and so they rely on altruism and word-of-mouth to grow, instead of paid salespeople. So there’s a vastly smaller worldwide market for immortality than lung-cancer-in-a-stick. Welcome to your Earth; it’s going to stay this way until you fix it.

(**) Most deaths? Yes: If cryonics were widely seen in the same terms as any other medical procedure, economies of scale would considerably diminish the cost; it would be applied routinely in hospitals; and foreign aid would enable it to be applied even in poor countries. So children in Africa are dying because citizens and politicians and philanthropists in the First World don’t have a gut-level understanding of quantum mechanics.

Added: For some of the questions that are being asked, see Alcor’s FAQ for scientists and Ben Best’s Cryonics FAQ (archived snapshot).

Next post: “Thou Art Physics”

Previous post: “Timeless Causality”

• Where can I sign up for cryonics if I live outside the United States and Europe?

• Eliezer, your account seems to give people two new excuses for not signing up for cryonics:

1) It seems to imply Quantum Immortality anyway.

2) Since there is nothing that persists on a fundamental level, the only reason new human beings in the future aren’t “me” is that they don’t remember me. But I also don’t remember being two years old, and the two-year-old who became me didn’t expect it. So the psychological continuity between my past self and my present self is no greater, in the case of my two-year-old self, than between myself and future human beings. This doesn’t bother me in the case of the two-year-old, so it seems like it might not bother me in my own case. In other words, why should I try to live forever? There will be other human beings anyway, and they will be just as good as me, and there will be just as much identity on a fundamental level.

You may think that these arguments don’t work, but that doesn’t matter. The point is that because cryonics is “strange” to people, they are looking for reasons not to do it. So given that these arguments are plausible, they will embrace them immediately.

• The argument that “there is no such thing as a particular atom, therefore neither duplicate has a preferred status as the original” looks sophistical, and it may even be possible to show that it is within your preferred quantum framework. Consider a benzene ring. That’s a ring of six carbon atoms. If it occurs as part of a larger molecule, there will be covalent bonds between particular atoms in the ring and atoms exterior to it. Now suppose I verify the presence of the benzene ring through some nondestructive procedure, and then create another benzene ring elsewhere, using other atoms. In fact, suppose I have a machine which will create that second benzene ring only if the investigative procedure verifies the existence of the first. I have created a copy, but are you really going to say there’s no fact of the matter about which is the original? There’s even a hint of how you can distinguish between the two given your ontological framework, when I stipulated that the original ring is bonded to something else; something not true of the duplicate. If you insist on thinking there is no continuity of identity of individual particles, at least you can say that one of the carbon atoms in the first ring is entangled with an outside atom in a way that none of the atoms in the duplicate ring is, and distinguish between them that way. You may be able to individuate atoms within structures by looking at their quantum correlations; you won’t be able to say ‘this atom has property X, that atom has property Y’ but you’ll be able to say ‘there’s an atom with property X, and there’s an atom with property Y’.

Assuming that this is on the right track, the deeper reality is going to be field configurations anyway, not particle configurations. Particle number is frame-dependent (see: Unruh effect), and a quantum particle is just a sort of wavefunction over field configurations—a blob of amplitude in field configuration space.

• A sentient brain constructed to atomic precision, and copied with atomic precision, could undergo a quantum evolution along with its “copy”, such that, afterward, there would exist no fact of the matter as to which of the two brains was the “original”.

On the other hand, an ordinary human brain could undergo 100 years’ worth of ordinary quantum evolution along with its “copy”, and probably 99 out of 100 naive human observers would still agree which one is the “original” and which is the “copy”. It seems there must be a fact of the matter in this case, or else how did they reach agreement? By magic?

Given that physical continuity is an obvious fact of daily life, in our EEA and now, why can’t “caring about physical continuity” be a part of our preferences/morality? In other words, if the above specially constructed sentient brain were to host a human mind, it doesn’t seem implausible that it would consider both post-evolution versions of itself to be less valuable “copies” (due to loss of clear physical continuity) and would choose to avoid undergoing such quantum evolution if it could. This “physical continuity” may not have a simple definition in terms of fundamental physics, but then nobody said our values had to be simple...

EDIT: I’ve expanded this criticism into a discussion post.

“Consider another range of possible cases: the Physical Spectrum. These cases involve all of the different possible degrees of physical continuity...

“In a case close to the near end, scientists would replace 1% of the cells in my brain and body with exact duplicates. In the case in the middle of the spectrum, they would replace 50%. In a case near the far end, they would replace 99%, leaving only 1% of my original brain and body. At the far end, the ‘replacement’ would involve the complete destruction of my brain and body, and the creation out of new organic matter of a Replica of me.”

(Reasons and Persons, p. 234.)

Parfit uses this to argue against the intuition of physical continuity pumped by the first experiment: if your identity depends on physical continuity, where is the exact threshold at which you cease to be “you”?

Isn’t this just a variant of the Sorites paradox? (I can use it to argue that identity can’t have anything to do with synapse connections: suppose I destroy your synapses one at a time; where is the exact threshold at which you cease to be “you”?) I’m surprised at Parfit’s high reputation if he made arguments like this one.

• Cryonicists have a saying: “Being cryonically suspended is the second worst thing that can happen to you.”

• Is there really anyone who would sign up for cryonics except that they are worried that their future revived self wouldn’t be made of the same atoms and thus would not be them? The case for cryonics (a case that persuades me) should be simpler than this.

• I agree. I’d be more worried about civilisation collapsing in the interim between being frozen and the point when people would have worked out how to revive me.

• Why would you worry about that? Wouldn’t you worry instead about the opportunity costs of signing up for cryonics?

• Do you know what it takes to securely erase a computer’s hard drive? Writing it over with all zeroes isn’t enough. Writing it over with all zeroes, then all ones, then a random pattern, isn’t enough. Someone with the right tools can still examine the final state of a section of magnetic memory, and distinguish the state,

Minor note: this claim is obsolete and should not be used to make the point you’re trying to make.

Peter Gutmann’s original list of steps to erase a hard drive is obsolete. Gutmann himself is particularly annoyed that it appears to have taken on the status of a voodoo ritual. As that Wikipedia article notes, “There is yet no published evidence as to intelligence agencies’ ability to recover files whose sectors have been overwritten, although published Government security procedures clearly consider an overwritten disk to still be sensitive. Companies specializing in recovery of damaged media (e.g., media damaged by fire, water or otherwise) cannot recover completely overwritten files. No private data recovery company currently claims that it can reconstruct completely overwritten data.” Overwriting with random data is enough in practice in 2011, and was in 2008 for that matter.

• Scientists have played with electron microscopes and established that in principle someone with the right tools could examine the final state of a section of magnetic memory and distinguish an earlier state. It’s just that nobody has said tools in practice, and the engineering needed to create tools that work reliably for the task is an absolute nightmare.

One could argue that the quoted claim is technically correct.

• Citation needed, one talking about hard disks as of 2008 at the earliest, or an equivalent magnetic problem.

A supporting claim needing to be stretched as far as “well, it’s not technically false!” still strikes me as not being a good example to try to persuade people with.

• Citation needed, one talking about hard disks as of 2008 at the earliest, or an equivalent magnetic problem.

I am reluctant to comply with demands for citations on something that is not particularly controversial and, more importantly, does not contradict the references you yourself provided. Apart from reading your own references (Gutmann and Wikipedia) you can look at the most substantial criticism of the idea that there are real-world agencies who could recover your overwritten data, that by Daniel Feenberg.

Gutmann mentions that after a simple setup of the MFM device, bits start flowing within minutes. This may be true, but the bits he refers to are not from disk files, but pixels in the pictures of the disk surface. Charles Sobey has posted an informative paper “Recovering Unrecoverable Data” with some quantitative information on this point. He suggests that it would take more than a year to scan a single platter with recent MFM technology, and tens of terabytes of image data would have to be processed.

His general point is that while there has been some limited success with playing with powerful microscopes, the current process is so ridiculously impractical and unreliable that there is no chance any existing intelligence agency would be able to pull it off.

A supporting claim needing to be stretched as far as “well, it’s not technically false!” still strikes me as not being a good example to try to persuade people with.

Not a position I have argued against, nor would I be inclined to.

• Fair enough!

• Kaj: No, more aren’t born every minute, they are all simply there, and if one cannot tolerate vanishingly small frequencies or probabilities then there will always be things other than your brain spontaneously configuring themselves into “your brain resolved to abandon those you had resolved to help” for every real or hypothetical “someone” you might resolve to help. For what it’s worth, though, if “you” is the classical computation approximated by your neurons then it isn’t “you” in the personal-continuity-relevant sense that does any given highly improbable thing. The causal relations that cause unlikely behaviors exist only in the configuration space of the universe. They differ from the causal relations that exist in the abstract deterministic computation that you probably experience being.

Frank: See Kaj.

Eliezer: What’s up with continuous physics from an infinite set atheist?

Unknown: 2 seems plausible, but it’s definitely not an argument that most people would accept.

Will Pearson: Shut up and multiply. 150K/day adds up to about 3B after 60 years, which is a conservatively high estimate for how long we need. Heads have a volume of a few liters, call it 3.33 for convenience, so that’s 10M cubic meters. Cooling involves massive economies of scale, as only surfaces matter. All we are talking about is, assuming a hemispherical facility, 168 meters of radius and 267,200 square meters of surface area. Not a lot to insulate. One small power plant could easily power the maintenance of such a facility at liquid nitrogen temperatures.
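(Checking that arithmetic, as a minimal sketch under the stated assumptions; the 267,200 m² figure evidently counts the hemisphere’s flat base as well as the dome:)

```python
import math

# Will Pearson's assumptions, taken at face value.
heads = 3e9                    # ~150K deaths/day for ~60 years, rounded
head_volume_m3 = 3.33e-3       # 3.33 liters per head
total_volume = heads * head_volume_m3        # ~1e7 m^3: "10M cubic meters"

# Hemisphere holding that volume: V = (2/3) * pi * r^3
r = (3 * total_volume / (2 * math.pi)) ** (1 / 3)
surface = 2 * math.pi * r**2 + math.pi * r**2   # dome plus flat base
print(round(r), round(surface))  # ~168 m and ~267,000 m^2, matching the figures above
```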

• So what’s timeless identity?

I read this article with the title “Timeless Identity”, and there was a bunch of statements of the form “identity isn’t this” and “identity isn’t that”, and at the end I didn’t see a positive statement about how timeless identity works. Does the article fail to solve the problem it set out to solve, or did I read too fast?

Personally, I think the notion of identity is muddled and should be discarded. There is a vague preference about which way the world should be moved; there’s presently one blob of protoplasm (wearing a badge with “Tim Freeman” written on it, as I write) that does a sloppy job of making that happen, and if cryonics or people-copying or an AI apocalypse or uploading happen, there will be a different number of blobs of something taking action to make it happen. The vague preference is more likely to be enacted if things exist in the world that are trying to make it happen, hence self-preservation is rational. No identity needed. The Buddhists are right—there is a transient collection of skandhas, not an indwelling essence, so there is no identity, timeless or otherwise.

So I’m not concerned about the possibility of there being no such thing as timeless identity, but I am slightly concerned that either the article has something good I missed, or groupthink is happening to the extent that none of the upvoted comments on this article are screaming “The Emperor has no clothes!”, and I don’t know which.

Thanks for the pointer to Parfit’s work. I’ve added it to my reading list. Upvoted the article because of the reference to Parfit and the idea that maybe the interminable debates on the various transhumanist mailing lists actually didn’t make significant progress on the issue.

Nitpick 1: if the odds of actual implementations of cryonics working are less than 50%, then maybe most of those 150K deaths actually are unavoidable, on the average. One failure mode is cryonics not working because we will lose an AI apocalypse, for example.

Nitpick 2: If the forces that prevent food and clean water from getting to the dying children in Africa would also prevent delivery of cryonics, then we can’t blame ignorant first-worlders for their deaths.

Nitpick 3: I think cryonics would still make just as much sense in a deterministic world, so IMO you don’t have to understand quantum mechanics to properly evaluate it.

I call these nitpicks because the essence of the argument is that there are many, many avoidable deaths happening every day on the average, and I agree with that.

• The Buddhists are right

I always cringe at statements like this. I’m quite familiar with the Buddhist notion of no self, but I don’t think for a second that study of Buddhist philosophy would convince anyone that a cryonically frozen person will wake up as themselves—in fact, given the huge stretch of time between freeze and unfreeze, there is a strong (but wrong) argument from Buddhist philosophy that cryonics wouldn’t work.

And so if it bears a superficial similarity but doesn’t output the same answers… it is about as right as a logic gate that looks like AND but performs ALWAYS RETURN FALSE.

• I’m quite familiar with the Buddhist notion of no self, but I don’t think for a second that study of Buddhist philosophy would convince anyone that a cryonically frozen person will wake up as themselves

If there is no self, then cryonics obviously neither works nor doesn’t work at making a person wake up as themselves, since they don’t have a self to wake up as. From this point of view, cryonics works if someone wakes up, and the person who originally signed up for cryonics would have preferred for that person to wake up over not having that person wake up, given the opportunity costs incurred when doing cryonics.

Cryonics is similar in kind to sleep or the passage of time in that way.

Whether most Buddhists are able to figure that out is another question. I agree that I’m not describing the Buddhist consensus on cryonics, and I agree that Buddhist philosophy does not motivate doing cryonics. My only points are that they’re consistent, and that Buddhist philosophy frees me from urgently trying to puzzle out what “Timeless Identity” is supposed to mean.

I’m slightly concerned that the OP apparently doesn’t say how timeless identity is supposed to work, and nobody seems to have noticed that.

• I’m slightly concerned that the OP apparently doesn’t say how timeless identity is supposed to work, and nobody seems to have noticed that.

The explanation of identity starts when he kicks off around the many-worlds heads diagram. Specifically, the part that makes timeless identity work (as long as you accept most reductionist physical descriptions of identity—configurations of neurons and synapses and such) is this:

We also saw in Timeless Causality that the end of time is not necessarily the end of cause and effect; causality can be defined (and detected statistically!) without mentioning “time”. This is important because it preserves arguments about personal identity that rely on causal continuity rather than “physical continuity”.

• Ah. The assumption that identity = consciousness was essential to recognizing that this was an attempt to answer the question of how timeless identity works. He only mentions identity = consciousness in passing once, and I missed it the first time around, so the problem was that I was reading too fast. Thanks.

If you need a notion of identity, I agree that identity = consciousness is a reasonable stand to take.

• I knew this was where we were headed when you started talking about zombies, and I knew exactly what the error would be.

Even if I accept your premises of many-worlds and timeless physics, the identity argument still has exactly the same form as it did before. Most people are aware that atomic-level identity is problematic even if they’re not aware of the implications of quantum physics. They know this because they consume and excrete material. Nobody who’s thought about this for more than a few seconds thinks their identity lies in the identity of the atoms that make up their bodies.

Your view of the world actually makes it easier to hold a position of physical identity. If you can say “this chunk of Platonia is overlapping computations that make up me”, I can equally say “this chunk of Platonia is overlapping biochemical processes that make up me.” Or I can talk about the cellular level or whatever. Your physics has given us freedom to choose an arbitrary level of description. So your argument reduces to the usual subjectivist argument for psychological identity (i.e., “no noticeable difference”) without the physics doing any work.

• I have two (unrelated) comments:

1) I very much enjoyed the concept of “timeless physics”, and in a MWI framework it sounds particularly elegant and intuitive. How does relativity fit into the picture? What I mean is, the speed of light, c, somehow gives an intrinsic measure of time to us. What does c translate into in timeless terms?

2) About your argument for cryonics, what about irreversible processes? Is quantum physics giving you a chance to beat entropy? When you die, a lot of irreversible processes happen in your brain (e.g. proteins and membranes break down). It is true that probably there are fewer changes in a cryonized brain than in a sleeping one, but it’s the nature of the changes that’s fundamentally different. Of course no information is really lost, but it’s irreversibly dispersed into the environment well before you have a chance to get it back. Cryonics to me looks like an attempt to unscramble an egg—only with the egg being frozen when it starts to scramble, but already a bit scrambled. I admit it is better than rotting in a grave (and I’d like to sign up for it) but has anyone tried to measure the hopes?

• Eliezer, why no mention of the no-cloning theorem?

Also, some thoughts this has triggered:

Distinguishability can be shown to exist for some types of objects in just the same way that it can be shown to not exist for electrons. Flip two coins. If the coins are indistinguishable, then the HT state is the same as the TH state, and you only have three possible states. But if the coins are distinguishable, then HT is not TH, and there are four possible states. You can experimentally verify that the probability obeys the latter situation, and not the former. And of course, you can experimentally verify that electron pairs obey the former situation, and not the latter. This is probably just because the coins are qualitatively distinct, while the electrons are not.
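(A minimal sketch of that counting argument—enumerating two-coin outcomes with and without treating HT and TH as the same state:)

```python
from itertools import product

# Distinguishable coins: ordered outcomes, so HT and TH are distinct states.
distinguishable = set(product("HT", repeat=2))
print(len(distinguishable), sorted(distinguishable))      # 4 states

# Indistinguishable objects: only the multiset of results matters,
# so HT and TH collapse into a single state.
indistinguishable = {tuple(sorted(pair)) for pair in distinguishable}
print(len(indistinguishable), sorted(indistinguishable))  # 3 states
```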

But it seems that if you did make a quantum copy (no-cloning theorem be damned!) then after a bit of interaction with the different environments, the two would become distinguishable (on the basis of developing different qualitative identities) and start behaving more like the coins than the electrons. In fact, if you’re actually using the lightspeed limit then the reconstructed you would be several years younger, and immediately distinguishable from what the scanned you has since evolved into. At the time of reconstruction, the two are already acting like coins and not electrons. Does this break the argument? I’m not really sure, because the reconstructed you at the time of reconstruction would still be indistinguishable from the you at the time of scanning, if you could somehow get them both around at the same time.

Bonus! The reconstructed you could be seen to have a very qualitatively different time-evolution. The scanned you evolves throughout its entire history via a Hamiltonian which itself changes continuously as scanned-you moves continuously through your environment. Reconstructed you, however, has a clear discontinuity in its Hamiltonian at the time of reconstruction (the state is effectively instantly moved from one environment into a completely different environment). The state of the reconstructed you would still evolve continuously; it would just have a discontinuous derivative. So I’m not really sure if reconstructed you would fail to pass the bar of having a “continuity of identity” that a lot of people talk about when dealing with the concept of self. My gut says no, but I’m not sure why.

• Eliezer, why no mention of the no-cloning theorem?

Indeed. It is disappointing to see this buried at the bottom of the page. I don’t think the no-cloning and no-teleportation theorems have any serious implications for Eliezer’s arguments for life extension (although it might have some implications for how he anticipates being recovered later). But it does have some implications for the ideas about identity presented here. Here is the relevant text:

Are you under the impression that one of these bodies is constructed out of the original atoms—that it has some kind of physical continuity the other does not possess? But there is no such thing as a particular atom, so the original-ness or new-ness of the person can’t depend on the original-ness or new-ness of the atoms.

In fact, having read the entire QM sequence, I am not under the impression that I am made out of atoms at all! I am an ever-decohering configuration of amplitude distributions. Furthermore, since I know my configuration can never be decomposed and transmitted via classical means, I also know that the scanner/teleporter so defined can’t possibly exist.

Now, if you want to talk about entangling my body at point A with some matter at point B, and via some additional information transmitted via normal channels, move me from point A to point B that way—now we have something to talk about. But the original proposition, of a teleporter which can move me from point A to point B, but can also, with some minor tweaking, be turned into a scanner which would “merely” create a copy of me at point A, is an absurdity. It is impossible to copy the configuration that makes up “me”. The original classical teleporter kills the people who use it, because the configuration of amplitude constructed at point B can’t possibly match, even in principle, the one destroyed at point A.

• Since you’re a computer guy (and I imagine many people you talk to are also computer-savvy), I’m surprised you don’t use file/process analogues for identity.

• If I move a file’s physical location on my hard drive, it’s obviously still the same file, because it has handle and data continuity. This is analogous to existing in different locations, being expressed with different atoms.

• If I change the content of the file, it’s obviously still the same file, because it has handle and location continuity. This is analogous to changing over not-technically-time-but-causal-effect-chains-that-we-may-as-well-call-time-for-convenience.

• If I delete the file (actually just removing its file handle in most modern systems) and use a utility to recover it, it’s obviously still the same file, because it has location and data continuity. This is analogous to cryonics.

Identity is thus describable with three components: handle, data, and location continuity, only two of which are required at any given point. As for having just one:

• If you have only handle continuity, you have two distinct objects with the same name.

• If you have only data continuity, then you have duplicate work.

• If you have only location continuity, you’ve reformatted.

All three break file identity.
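(A toy sketch of that two-out-of-three rule; the function and its booleans are purely illustrative, not any real filesystem API:)

```python
# Toy model of the commenter's rule: identity survives when at least
# two of {handle, data, location} continuity hold.
def same_file(handle: bool, data: bool, location: bool) -> bool:
    return handle + data + location >= 2

print(same_file(True, True, False))   # moved file: still the same file
print(same_file(True, False, True))   # edited file: still the same file
print(same_file(False, True, True))   # deleted then recovered: still the same file
print(same_file(True, False, False))  # name alone: identity broken
```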

• As for cryonics, I would sign up if I could be convinced that I would not become obsolete or even detrimental to a society that resurrects me. And looking at some of the problems in my country already being caused by merely having a normally aging population at the current social development rate, I don’t even think it’s a given that I could contribute meaningfully to society during the twilight of my at-present-natural life.

• I realize this is an old post and no one will read this comment… but I just wanted to say thank you. I myself signed up for cryonics just a month ago, but did, for example, wonder—will I be the same person? I still wonder that, but with slightly more perspective.

• The Ben Best Cryonics FAQ link is dead, or at least frozen.

• Added link to a snapshot on Internet Archive (last snapshot was 31 Dec 2009, so it has possibly not been available for some time now, but maybe not).

• Why does timeless physics require the absence of repeating? How would things change even if the universe repeated itself?

• How would things change even if the universe repeated itself?

Even then there would be no difference between a repeating and a non-repeating universe.

As an example, try to imagine 100 universes, each one exactly the same as ours, in every last detail. Is it somehow different from having only 1 universe? No. Even infinitely many universes, as long as they are exactly the same, don’t make any difference.

Now try to imagine one universe that somehow (despite the second law of thermodynamics) repeats. It follows the same laws, so it repeats exactly the same way, in every last detail. Is it somehow different from only repeating once? No.

• I think this sentence does not make sense. If a universe has some configuration, then it IS the UNIVERSE. It does not make sense that there are 100 of them.

I imagine it like a sequence of numbers. There is 0, then there is 1, etc. It does not make sense that if you have the sequence:

1, 2, 8, 5, 1

that there are two different occurrences of a thing “number one”. No matter how “many times” the number is used, it is still fundamentally the number.

I think Eliezer himself used a very good example of how things work. Imagine that everything you know about our universe can be coded into a sequence of numbers. Everything—all its history, etc. Now what meaning does it have, from inside this universe, that some powerful alien race with a lot of computing power can take this sequence and load it into the memory of some supercomputer? What if they load it and delete it twice or a million times? What if they load it on two computers simultaneously? It does not matter from within the universe. It just is.

• I’m inclined to think “yes”, actually. I think redundancy matters.

Before expounding on that, though, could you point me at any material that says it doesn’t?

• I have no ma­te­rial that di­rectly says that.

Indirectly, though: what experience do you expect if there are 100 universes exactly the same as ours in every detail, as opposed to if there is only 1 such universe?

• The same, if that was the en­tirety of ex­is­tence.

Since we’re pos­tu­lat­ing mul­ti­ple uni­verses, that’s prob­a­bly pretty un­likely though. I would ex­pect to have an in­creased prob­a­bil­ity of ex­ist­ing in a uni­verse with more copies, pro­por­tion­ally to the num­ber of copies.

• So, let’s sup­pose there is a uni­verse A which ex­ists only once, and a uni­verse B which ex­actly re­peats for­ever. Those uni­verses are differ­ent, but a situ­a­tion of “me, now” can hap­pen in both of them. (Both uni­verses hap­pen to con­tain the same con­figu­ra­tion of par­ti­cles in a limited space at some time.) Then, I should ex­pect to be in the uni­verse B, be­cause that is in­finitely more prob­a­ble.

Un­for­tu­nately, I don’t know whether I just wrote a non­sen­si­cal se­quence of words, or whether there is some real mean­ing in them.

• No, it sounds pretty mean­ingful to me.

I'm modeling this as if we have an (unbounded) computer executing all possible programs, some of which involve intelligences, usually embedded in a universe. The usual dovetailer model, that is.

In the case you described, there would be two programs involved. One computes my life once, and then halts. The second runs the same program as the first, but in an infinite loop. And yes, in this case I would expect to find myself in program B in most samples. (Although, as I wrote that, there's no way to tell the difference… make the obvious correction to fix that.)
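A toy sketch of that two-program case (my own construction: the dovetailer loop, step counts, and sampling rule are all made up for illustration). Sampling uniformly over the observer-steps the dovetailer actually executes lands in the looping program B almost always:

```python
LIFE_STEPS = 10  # made-up length of "my life" in computation steps

def program_A():
    for _ in range(LIFE_STEPS):   # compute the life once...
        yield "A"                 # ...then halt (generator exhausts)

def program_B():
    while True:                   # the same life, in an infinite loop
        for _ in range(LIFE_STEPS):
            yield "B"

def dovetail(programs, rounds):
    """Run each still-live program one step per round (a crude dovetailer)."""
    live = [p() for p in programs]
    steps = []
    for _ in range(rounds):
        for prog in list(live):
            try:
                steps.append(next(prog))
            except StopIteration:
                live.remove(prog)
    return steps

steps = dovetail([program_A, program_B], rounds=1000)
print(steps.count("B") / len(steps))   # ~0.99 here; tends to 1 as rounds grow
```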

I described it as all possible programs, though, which would certainly include things such as Boltzmann brains. The reason I don't see that as a problem is that the computational density of (my?) mind, strictly speaking, is what matters; not just the total number of instantiations, which over an infinite runtime is a nonsensical thing to ask about. Certainly there would be an infinite number of Boltzmann brains, but they're rare; much rarer than, say, a cyclic universe.

Well. That said, the ap­par­ent scarcity of life in this uni­verse, as op­posed to com­pu­ta­tion-hun­gry but bor­ing things like stars, seems to be a de­cent coun­ter­ar­gu­ment. I’m not sure how it’d work out, re­ally. :O

• Bump­ing an old com­ment be­cause I was won­der­ing this too.

• Re: “In re­al­ity, physics is con­tin­u­ous.”

That has yet to be es­tab­lished.

The uni­verse could turn out to be finite and dis­crete—e.g. see my site:

http://finitenature.com/

It is confusion to argue from the continuity of the wave equation to the continuity of the underlying physics—since there is no compelling reason to think that the wave equation is the final word on the issue—and discrete phenomena often look continuous if you observe them from a sufficiently great distance—e.g. see lattice gases.
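A toy illustration of that coarse-graining point (not a real lattice gas; walker counts and block size are made up): strictly discrete random walkers, binned at a coarse scale, reproduce the smooth Gaussian profile a continuous diffusion equation would predict.

```python
import random
from collections import Counter

N_WALKERS, TICKS, BLOCK = 10_000, 400, 10
positions = [0] * N_WALKERS                      # all walkers start at site 0

for _ in range(TICKS):
    for i in range(N_WALKERS):
        positions[i] += random.choice((-1, 1))   # discrete hop left or right

# Coarse-grain: bin sites into blocks of 10. Viewed "from a sufficiently
# great distance", the jagged microscopic state looks like a smooth bump.
blocks = Counter(p // BLOCK for p in positions)
for b in sorted(blocks):
    print(f"{b:4d} {'#' * (blocks[b] // 50)}")
```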

http://en.wikipedia.org/wiki/Loop_quantum_gravity is an example of a more modern discrete theory.

• I get the feel­ing a lot of pro­po­nents of cry­on­ics are a bit like those who crit­i­cize pre­dic­tion mar­kets, but re­fuse to bet on them. If you re­ally be­lieve that sign­ing up for cry­on­ics is so im­por­tant, why aren’t you be­ing frozen now? Surely there are large num­bers of branches in which your brain gets ir­re­triev­ably de­stroyed to­mor­row—if the re­ward for be­ing frozen is so big, why wait?

• Only if Many-Wor­lds isn’t true and the uni­verse is finite or re­peats with a finite pe­riod and Teg­mark’s ul­ti­mate en­sem­ble the­ory is false. Per­son­ally, I find that prospect more dis­turb­ing for some rea­son.

• @Kaj:

I find lit­tle com­fort in the prospect of the “be­trayal branches” be­ing van­ish­ingly few in fre­quency—in ab­solute num­bers, their amount is still uni­mag­in­ably large, and more are born ev­ery mo­ment.

Kaj, you have to learn to take com­fort in this. Not tak­ing com­fort in it is not a vi­able op­tion.

I’m se­ri­ous. Other­wise you’ll buy lot­tery tick­ets be­cause some ver­sion of you wins, make in­con­sis­tent choices on the Allais para­dox, choose SPECKS over TORTURE...

Shut up and mul­ti­ply. In a Big World there is no other choice.

• Eliezer,

I have to ask now, be­cause this is a topic that’s been both­er­ing me for months, and oc­ca­sion­ally been mak­ing it real hard for me to take plea­sure in any­thing.

How strongly does MWI imply that worlds will show up where I even do things that I consider immensely undesirable—for instance, stab somebody I love with a knife, and then when they lie there dying and look at me, I honestly can't tell them or myself why I did it—because what happened was caused by a very-low-probability event that momentarily caused my brain to give my arm that command? (I know I'm not using anywhere near the correct QM terminology, but you know what I mean.) Or that my brain would spontaneously reconfigure parts of itself so that I ended up coldly abandoning somebody who had trusted me and whom I'd promised to always be with, etc.

The thought of I—and yes, since there are no originals or copies, the very I writing this—having a guaranteed certainty of ending up doing that causes me so much anguish that I can't help thinking that, if true, humanity should be destroyed in order to minimize the number of branches where people end up in such situations. I find little comfort in the prospect of the “betrayal branches” being vanishingly few in frequency—in absolute numbers, their amount is still unimaginably large, and more are born every moment.

• I still don’t get the point of time­less physics. It seems to me like two differ­ent ways of look­ing at the same thing, like clas­si­cal con­figu­ra­tion space vs re­la­tional con­figu­ra­tion space. Sure, it may make more sense to for­mu­late the laws of physics with­out time, and it may make the equa­tions much sim­pler, but how ex­actly does it change your ex­pected ob­ser­va­tions? In what ways does a time­less uni­verse differ from a time­ful uni­verse?

Also, I don't think it's necessary to study quantum mechanics in order to understand personal identity. I've reached the same conclusions about identity without knowing anything about QM; I feel they're just simple deductions from materialism.

• Great summary! I have sent the link to all my friends! While we wait for some kind of TOC, this is the best link yet to send people concerning this series.

I would like to know your opinion on Max Tegmark's ultimate ensemble theory! Or if someone knows Eli's opinion on this wonderful theory, please tell me!

Are other bright scientists and philosophers aware of this blog? Do you send links to people when there is a topic that relates to them? Do you send links to the people you mention? Do Chalmers, Dennett, Pinker, Deutsch, Barbour, Pearl, Tegmark, Dawkins, Vinge, Egan, Hofstadter, McCarthy, Kurzweil, Smolin, Witten, Taleb, Shermer, Kahneman, Tooby, Cosmides, Aumann, Penrose, Hameroff, etc., know about all this?

They may all be wrong in one way or an­other, but they are cer­tainly not stupid blind peo­ple.

And I think your writ­ing would definitely in­ter­est all of these peo­ple and con­tribute to their work and jour­ney to­wards the truth. So it would both be al­tru­is­tic to send them the links, and ex­cit­ing if they would com­ment!

It would be especially nice if these people would comment on the posts where you show your disagreement!

• Eliezer, why are all of your posts so long? I un­der­stand how most of them would be—be­cause you’re try­ing to con­vey com­plex ideas—but how come none of the ideas you con­vey are con­cise? Some of them seem like at­tempts to pad with ex­ces­sive “back­ground” ma­te­rial when sim­ple tell-it-like-it-is brevity would suffice.

I thought this post was le­gi­t­i­mately long, but this just came to mind when re­flect­ing on past posts.

• Co­va­lent bonds with ex­ter­nal atoms are just one form of “cor­re­la­tion with the en­vi­ron­ment”.

I wish to pos­tu­late a perfect copy, in the sense that the in­ter­nal cor­re­la­tions are iden­ti­cal to the origi­nal, but in which the cor­re­la­tions to the rest of the uni­verse are differ­ent (e.g. “on Mars” rather than “on Earth”).

There is some con­fu­sion here in the switch­ing be­tween in­di­vi­d­ual con­figu­ra­tions, and con­figu­ra­tion space. An atom is already a blob in con­figu­ra­tion space (e.g. “one elec­tron in the ground-state or­bital”) rather than a sin­gle con­figu­ra­tion, with re­spect to a par­ti­cle ba­sis.

While we can­not in­di­vi­d­u­ate par­ti­cles in a rel­a­tive con­figu­ra­tion, we can in­di­vi­d­u­ate wave pack­ets trav­el­ing in rel­a­tive con­figu­ra­tion space. And since even an atom already ex­ists at that level, it is far from clear to me that the at­tempt to aban­don con­ti­nu­ity of iden­tity car­ries over to com­pli­cated struc­tures.

• Frank, it’s not log­i­cally nec­es­sary but it seems highly likely to be true—the spread in wor­lds in­clud­ing “you” seems like it ought to in­clude wor­lds where each com­bi­na­tion of lot­tery balls turns up. Pos­si­bly even wor­lds where your friend screams and runs out of the room, though that might be a van­ish­ingly small frac­tion un­less pre­dis­posed.

Roland, the Cryonics Institute seems to accept patients from anywhere that can be arranged to be shipped: http://www.cryonics.org/euro.html. Not sure about Alcor.

• [Eliezer says:] And if you’re plan­ning to play the lot­tery, don’t think you might win this time. A van­ish­ingly small frac­tion of you wins, ev­ery time.

I think this is, strictly speaking, not true. A more extreme example: while recently talking with a friend, he asserted, “In one of the future worlds, I might jump up in a minute and run out onto the street, screaming loudly!” I said: “Yes, maybe, but only if you are already strongly predisposed to do so. MWI means that every possible future exists, not every arbitrary imaginable future.” Although your assertion in the lottery case is much weaker, I don't believe it's strictly true.

Roland, I do not know. There is an organization in Russia. The Cryonics Institute accepts bodies shipped to them packed in ice. I'm not sure about Alcor, which tries to do on-scene suspension. Alcor lists a $25K surcharge (which would be paid out of life insurance) for suspension outside the US/UK/Canada, but I'm not sure how far abroad they'd go. Where are you?

Mitchell: You may be able to in­di­vi­d­u­ate atoms within struc­tures by look­ing at their quan­tum cor­re­la­tions; you won’t be able to say ‘this atom has prop­erty X, that atom has prop­erty Y’ but you’ll be able to say ‘there’s an atom with prop­erty X, and there’s an atom with prop­erty Y’.

Cer­tainly. That’s how we dis­t­in­guish Eliezer from Mitchell.

• Eliezer...the main is­sue that keeps me from cry­on­ics is not whether the “real me” wakes up on the other side.

The first ques­tion is about how ac­cu­rate the re­con­struc­tion will be. When you wipe a hard drive with a mag­net, you can re­cover some of the con­tent, but usu­ally not all of it. Re­cov­er­ing “some” of a hu­man, but not all of it, could eas­ily cre­ate a men­tally hand­i­capped, bro­ken con­scious­ness.

But let's set that aside, as it is a technical problem. There is a second issue. If and when immortality and AI are achieved, what value would my revived consciousness contribute to such a society?

You’ve thus far es­tab­lished that death isn’t a bad thing when a copy of the in­for­ma­tion is pre­served and later re­vived. You’ve ex­plained that you are will­ing to treat con­scious­ness much like you would a com­puter file—you’ve ex­plained that you would be will­ing to de­stroy one of two re­dun­dant du­pli­cates of your­self.

Tell me, why ex­actly is it okay to de­stroy a re­dun­dant du­pli­cate of your­self? You can’t say that it’s okay to de­stroy it sim­ply be­cause it is re­dun­dant, be­cause that also de­stroys the point of cry­on­ics. There will be countless hu­mans and AIs that will come into ex­is­tence, and each of those minds will re­quire re­sources to main­tain. Why is it so im­por­tant that your, or my, con­scious­ness be one among this swarm? Is that not similarly re­dun­dant?

For the same rea­sons that you would be will­ing to de­stroy one of two iden­ti­cal copies of your­self be­cause hav­ing two copies is re­dun­dant, I am won­der­ing just how much I care that my own con­scious­ness sur­vives for­ever. My mind is not ex­cep­tional among all the pos­si­ble con­scious­nesses that re­sources could be de­voted to. Keep­ing my mind pre­served through the ages seems to me just as re­dun­dant as mak­ing twenty copies of your­self and care­fully pre­serv­ing each one.

I'm not saying I don't want to live forever… I do want to. I'm saying that I feel one ought to have a reason for preserving one's consciousness that goes beyond the simple desire for at least one copy of one's consciousness to continue existing.

When we deconstruct the notion of consciousness as thoroughly as we are doing in this discussion, the concepts of “life” and “death” become meaningless over-approximations, much like “free will”. Once society reaches that point, we are going to have to deconstruct those ideas and ask ourselves why it is so important that certain information never be deleted. Otherwise, it's going to get a little silly… a “21st-century human brain maximizer” is not that much different from a paperclip maximizer, in the grand scheme of things.

• It seems you place less value on your life than I do on mine. I’m glad we’ve reached agree­ment.

• I agree; it's quite possible that someone might deconstruct “me” and “life” and “death” and “subjective experience” to the same extent that I have, and still value never deleting certain information that is computationally descended from themselves more than all the other things that could be done with the resources used to maintain it.

Hell, I might value it to that extent. This isn't something I'm certain about. I'm still exploring this. My default answer is to live forever—I just want to make sure that this is really what I want after consideration, and not just a kicking, screaming survival instinct (AKA a first-order preference).

• This seems to me like an or­thog­o­nal ques­tion. (A ques­tion that can be en­tirely ex­tri­cated and sep­a­rated from the cry­on­ics ques­tion).

You’re talk­ing about whether you are a valuable enough in­di­vi­d­ual that you can jus­tify re­sources be­ing spent on main­tain­ing your ex­is­tence. That’s a ques­tion that can be asked just as eas­ily even if you have no con­cept of cry­on­ics. For in­stance: if your life de­pends on get­ting med­i­cal treat­ment that costs a mil­lion dol­lars, is it worth it? Or should you pre­fer that the money be spent on sav­ing other lives more effi­ciently?

(Incidentally, I know that utilitarianism generally favours the second option. But I would never blame anyone for choosing the first option if the money was offered to them.)

I would ac­cept an end to my ex­is­tence if it al­lowed ev­ery­one else on earth to live for as long as they wished, and ex­pe­rience an ex­is­ten­tially fulfilling form of hap­piness. I wouldn’t ac­cept an end to my ex­is­tence if it al­lowed one stranger to en­joy an ice cream. There are sce­nar­ios where I would think it was worth us­ing re­sources to main­tain my ex­is­tence, and sce­nar­ios where I would ac­cept that the re­sources should be used differ­ently. I think this is true when we con­sider cry­on­ics, and equally true if we don’t.

The cry­on­ics ques­tion is quite differ­ent.

For the sake of argument, I'll assume that you're alive and that you intend to keep on living, for at least the next 5 years. I'll assume that if you experienced a life-threatening situation tomorrow, and someone was able to intervene medically and grant you (at least) 5 more years of life, then you would want them to.

There are many differ­ent life-threat­en­ing sce­nar­ios, and many differ­ent pos­si­ble in­ter­ven­tions. But for de­ci­sion mak­ing pur­poses, you could prob­a­bly group them into “in­ter­ven­tions which ex­tend my life in a mean­ingful way” and in­ter­ven­tions that don’t. For in­stance, an in­ter­ven­tion that kept your body al­ive but left you com­pletely brain-dead would prob­a­bly go in the sec­ond cat­e­gory. Coronary by­pass surgery would prob­a­bly go in the first.

The cryonics question here is simply: if a doctor offered to freeze you, then revive you 50 years later, would you put this in the same category as other “life-saving” interventions? Would you consider it an extension of your life, in the same way as a heart transplant would be? And would you value it similarly in your considerations?

And of course, we can ask the same ques­tion for a differ­ent in­ter­ven­tion, where you are frozen, then scanned, then recre­ated years later in one (or more) simu­la­tions.

• The main issue that keeps me from cryonics is not whether the “real me” wakes up on the other side.

How do you go to sleep at night, not know­ing if it is the “real you” that wakes up on the other side of con­scious­ness?

• Your com­ment would make more sense to me if I re­moved the word “not” from the sen­tence you quote. (Also, if I don’t read past that sen­tence of some­onewron­gonthenet’s com­ment.)

That said, I agree com­pletely that the kinds of vague iden­tity con­cerns about cry­on­ics that the quoted sen­tence with “not” re­moved would be rais­ing would also arise, were one con­sis­tent, about rou­tine con­tinu­a­tion of ex­is­tence over time.

• That said, I agree completely that the kinds of vague identity concerns about cryonics that the quoted sentence with “not” removed would be raising would also arise, were one consistent, about routine continuation of existence over time.

There are things that, when I go to bed and wake up eight hours later, are very nearly preserved, but that wouldn't be if I woke up sixty years later—e.g. other people's memories of me (see I Am a Strange Loop) or the culture of the place where I live (see Good Bye, Lenin!).

(I’m not say­ing whether this is one of the main rea­sons why I’m not signed up for cry­on­ics.)

• Point.

• Hrm… ambiguous semantics. I took it to imply acceptance of the idea but not elevation of its importance, but I see how it could be interpreted differently. And yes, the rest of the post addresses something completely different. But if I can continue for a moment on the tangent, expanding my comment above (even if it doesn't apply to the OP):

You actually continue functioning when you sleep; it's just that you don't remember details once you wake up. A more useful example for such discussion is general anesthesia, which shuts down the regions of the brain associated with consciousness. If personal identity is in fact derived from continuity of computation, then it is plausible that general anesthesia would result in a “different you” waking up after the operation. The application to cryonics depends greatly on the subtle distinction of whether vitrification (and more importantly, the recovery process) slows down or stops computation. This has been a source of philosophical angst for me personally, but I'm still a cryonics member.

More trou­bling is the ap­pli­ca­tion to up­load­ing. I haven’t done this yet, but I want my Al­cor con­tract to ex­plic­itly for­bid up­load­ing as a restora­tion pro­cess, be­cause I am un­con­vinced that a simu­la­tion of my de­struc­tively scanned frozen brain would re­ally be a con­tinu­a­tion of my per­sonal iden­tity. I was hop­ing that “Time­less Iden­tity” would ad­dress this point, but sadly it punts the is­sue.

• Well, if the idea is unim­por­tant to the OP, pre­sum­ably that also helps ex­plain how they can sleep at night.

WRT the tan­gent… my own po­si­tion wrt preser­va­tion of per­sonal iden­tity is that while it’s difficult to ar­tic­u­late pre­cisely what it is that I want to pre­serve, and I’m not en­tirely cer­tain there is any­thing co­gent I want to pre­serve that is uniquely as­so­ci­ated with me, I’m pretty sure that what­ever does fall in that cat­e­gory has noth­ing to do with ei­ther con­ti­nu­ity of com­pu­ta­tion or similar­ity of phys­i­cal sub­strate. I’m about as san­guine about con­tin­u­ing my ex­is­tence as a soft­ware up­load as I am about con­tin­u­ing it as this biolog­i­cal sys­tem or as an en­tirely differ­ent biolog­i­cal sys­tem, as long as my sub­jec­tive ex­pe­rience in each case is not trau­mat­i­cally differ­ent.

• I wrote up about a page-long re­ply, then re­al­ized it prob­a­bly de­serves its own post­ing. I’ll see if I can get to that in the next day or so. There’s a wide spec­trum of pos­si­ble solu­tions to the per­sonal iden­tity prob­lem, from phys­i­cal con­ti­nu­ity (falsified) to pat­tern con­ti­nu­ity and causal con­ti­nu­ity (de­scribed by Eliezer in the OP), to com­pu­ta­tional con­ti­nu­ity (my own view, I think). It’s not a minor point though, whichever view turns out to be cor­rect has im­mense ram­ifi­ca­tions for moral­ity and time­less de­ci­sion the­ory, among other things...

• What rele­vance does per­sonal iden­tity have to TDT? TDT doesn’t de­pend on whether the other in­stances of TDT are in copies of you, or in other peo­ple who merely use the same de­ci­sion the­ory as you.

• It has rele­vance for the basilisk sce­nario, which I’m not sure I should say any more about.

• When you write up the post, you might want to say a few words about what it means for one of these views to be “cor­rect” or “in­cor­rect.”

• Ok, I will, but that part is easy enough to state here: I mean correct in the reductionist sense. The simplest explanation which resolves the original question and/or associated confusion, while adding to our predictive capacity and not introducing new confusion.

• Mm. I’m not sure I un­der­stood that prop­erly; let me echo my un­der­stand­ing of your view back to you and see if I got it.

Sup­pose I get in some­thing that is billed as a trans­porter, but which does not pre­serve com­pu­ta­tional con­ti­nu­ity. Sup­pose, for ex­am­ple, that it de­struc­tively scans my body, sends the in­for­ma­tion to the des­ti­na­tion (a pro­cess which is not in­stan­ta­neous, and dur­ing which no com­pu­ta­tion can take place), and re­con­structs an iden­ti­cal body us­ing that in­for­ma­tion out of lo­cal raw ma­te­ri­als at my des­ti­na­tion.

If it turns out that com­pu­ta­tional or phys­i­cal con­ti­nu­ity is the cor­rect an­swer to what pre­serves per­sonal iden­tity, then I in fact never ar­rive at my des­ti­na­tion, al­though the thing that gets con­structed at the des­ti­na­tion (falsely) be­lieves that it’s me, knows what I know, etc. This is, as you say, an is­sue of great moral con­cern… I have been de­stroyed, this new per­son is un­fairly given credit for my ac­com­plish­ments and pe­nal­ized for my er­rors, and in gen­eral we’ve just screwed up big time.

Con­versely, if it turns out that pat­tern or causal con­ti­nu­ity is the cor­rect an­swer, then there’s no prob­lem.

There­fore it’s im­por­tant to dis­cover which of those facts is true of the world.

Yes? This fol­lows from your view? (If not, I apol­o­gize; I don’t mean to put up straw­men, I’m gen­uinely mi­s­un­der­stand­ing.)

If so, your view is also that if we want to know whether that’s the case or not, we should look for the sim­plest an­swer to the ques­tion “what does my per­sonal iden­tity com­prise?” that does not in­tro­duce new con­fu­sion and which adds to our pre­dic­tive ca­pac­ity. (What is there to pre­dict here?)

Yes?

EDIT: Ah, I just read this post where you say pretty much this. OK, cool; I un­der­stand your po­si­tion.

• Sup­pose I get in some­thing that is billed as a trans­porter, but which does not pre­serve com­pu­ta­tional con­ti­nu­ity. Sup­pose, for ex­am­ple, that it de­struc­tively scans my body, sends the in­for­ma­tion to the des­ti­na­tion (a pro­cess which is not in­stan­ta­neous, and dur­ing which no com­pu­ta­tion can take place), and re­con­structs an iden­ti­cal body us­ing that in­for­ma­tion out of lo­cal raw ma­te­ri­als at my des­ti­na­tion.

I don't know what “computation” or “computational continuity” means if it's considered to be separate from causal continuity, and I'm not sure other philosophers have any standard idea of this either. From the perspective of the Planck time, your brain is doing extremely slow ‘computations’ right now; it shall stand motionless a quintillion ticks and more before whatever arbitrary threshold you choose to call a neural firing. Or from a faster perspective, the 50 years of intervening time might as well be one clock tick. There can be no basic ontological distinction between fast and slow computation, and aside from that I have no idea what anyone in this thread could be talking about if it's distinct from causal continuity.
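For concreteness, a back-of-the-envelope check (the millisecond spike scale is my rough assumption, not a figure from the comment):

```python
PLANCK_TIME = 5.39e-44   # seconds, approximate
SPIKE_SCALE = 1e-3       # seconds; rough order of magnitude of a neural firing

print(SPIKE_SCALE / PLANCK_TIME)   # ~1.9e40 Planck ticks per firing
print(1e18)                        # a quintillion, for comparison
```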

• (shrug) It’s Mark’s term and I’m usu­ally will­ing to make good-faith efforts to use other peo­ple’s lan­guage when talk­ing to them. And, yes, he seems to be draw­ing a dis­tinc­tion be­tween com­pu­ta­tion that oc­curs with rapid enough up­dates that it seems con­tin­u­ous to a hu­man ob­server and com­pu­ta­tion that doesn’t. I have no idea why he con­sid­ers that dis­tinc­tion im­por­tant to per­sonal iden­tity, though… as far as I can tell, the whole thing de­pends on the im­plicit idea of iden­tity as some kind of ghost in the ma­chine that dis­si­pates into the ether if not ac­tively pre­served by a mea­surable state change ev­ery N microsec­onds. I haven’t con­firmed that, though.

• Hypothesis: consciousness is what a physical interaction feels like from the inside.

Im­por­tantly, it is a prop­erty of the in­ter­act­ing sys­tem, which can have var­i­ous de­grees of co­her­ence—a differ­ent con­cept than quan­tum co­her­ence, which I am still de­vel­op­ing: some­thing along the lines of nega­tive-en­tropic com­plex­ity. There is there­fore a deep cor­re­la­tion be­tween ne­gen­tropy and con­scious­ness. Ran­dom ther­mo­dy­namic mo­tion in a gas is about as min­i­mum-con­scious as you can get (lots of ran­dom in­ter­ac­tions, but all short lived and de­co­her­ent). A rock is slightly more con­scious due to its crys­tal­line struc­ture, but prob­a­bly leads a rather bor­ing ex­is­tence (by our stan­dards, at least). And so on, all the way up to the very ne­gen­tropic pri­mate brain which ex­pe­riences a high de­gree of co­her­ent ex­pe­rience that we call “con­scious­ness” or “self.”

I know this sounds like making thinking an ontologically basic concept. It's rather the reverse—I am building the experience of thinking up from physical phenomena: consciousness is the experience of organized physical interactions. But I'm not yet convinced of it either. If you throw out the concept of coherent interaction (what I have been calling computational continuity), then it does reduce to causal continuity. But causal continuity has its problems, which make me suspect it is not the final, ultimate answer…

• Hy­poth­e­sis: con­scious­ness is what a phys­i­cal in­ter­ac­tion feels like from the in­side.
...
con­scious­ness is the ex­pe­rience of or­ga­nized phys­i­cal in­ter­ac­tions.

How do you ex­plain the ex­is­tence of the phe­nomenon of “feel­ing like” and of “ex­pe­rience”?

• I agree that the grandparent has circumvented addressing the crux of the matter; however, I feel (heh) that the notion of “explain” often comes with unrealistic expectations. It bears remembering that we merely describe relationships as succinctly as possible; that description is then the “explanation”.

While we would, e.g., expect/hope for there to be some non-contradictory set of descriptions applying to both gravity and quantum phenomena (for which we'd eat a large complexity penalty, since complex but accurate descriptions always beat out simple but inaccurate descriptions; Occam's Razor applies only to choosing among fitting/not-yet-falsified descriptions), as soon as we've found some pinned-down description in some precise language, there's no guarantee—or strictly speaking, need—of an even simpler explanation.

A world run­ning ac­cord­ing to cur­rently en-vogue physics, plus a box which can­not be de­scribed as an ex­ten­sion of said physics, but only in some other way, could in fact be fully ex­plained, with no fur­ther ex­planans for the ex­planan­dum.

It seems pretty straightforward to note that there's no way to “derive” phenomena such as “feeling like” in the current physics framework, except of course to describe which states of matter/energy correspond to which qualia.

Such a de­scrip­tion could be the ex­pla­na­tion, with noth­ing fur­ther to be ex­plained:

If it empirically turned out that a specific kind of matter needs to be arranged in the specific pattern of a vertebrate brain to correlate to qualia, that would “explain” consciousness. If it turned out (as we all expect) that the pattern alone suffices, then certain classes of instantiated algorithms (regardless of the hardware/wetware) would be conscious. Regardless, either description (if it turned out to be empirically sound) would be the explanation.

I also wonder: what could any answer within the current physics framework possibly look like, other than an asterisk behind the equations with the addendum of “values n_1 … n_k for parameters p_1 … p_k correlate with qualia x”?

• How do you explain “feeling like” and “experience” in general? This is LW, so I assume you have a reductionist background and would offer an explanation based on information patterns, neuron firings, hormone levels, etc. But ultimately all of that reduces down to a big collection of quarks, each taking part in mostly random interactions on the scale of femtoseconds. The apparent organization of the brain is in the map, not the territory. So if subjective experience reduces down to neurons, and neurons reduce down to molecules, and molecules reduce to quarks and leptons, where then does the consciousness reside? “Information patterns” alone is an inadequate answer—that's at the level of the map, not the territory. Quarks and leptons combine into molecules, molecules into neural synapses, and the neurons connect into the 3 lb information-processing network that is my brain. Somewhere along the line, the subjective experience of “consciousness” arises. Where, exactly, would you propose that happens?

We know (from our own subjective experience) that something we call “consciousness” exists at the scale of the entire brain. If you assume that the workings of the brain are fully explained by its parts and their connections, and those parts explained by their sub-components and designs, etc., you eventually reach the ontologically basic level of quarks and leptons. Fundamentally the brain is nothing more than the interaction of a large number of quarks and leptons. So what precise interaction of fundamental particles is the basic unit of consciousness? What level of complexity is required before simple organic matter becomes a conscious mind?

It sounds ridiculous, but if you assume that quarks and leptons are “conscious”—or rather, that consciousness is the interaction of these various ontologically primitive, fundamental particles—a remarkably consistent theory emerges: one which dissolves the mystery of subjective consciousness by explaining it as the mere aggregation of interdependent interactions. Besides being simple, this is also predictive: it allows us to assert for a given situation (e.g. a teleporter or halted simulation) whether loss of personal identity occurs, which has implications for the morality of real situations encountered in the construction of an AI.

• The ap­par­ent or­ga­ni­za­tion of the brain is in the map, not the ter­ri­tory.

What do you mean by this? Are fMRIs a big con­spir­acy?

Fun­da­men­tally the brain is noth­ing more than the in­ter­ac­tion of a large num­ber of quarks and lep­tons.

This de­scrip­tion ap­plies equally to all ob­jects. When you de­scribe the brain this way, you leave out all its in­ter­est­ing char­ac­ter­is­tics, ev­ery­thing that makes it differ­ent from other blobs of in­ter­act­ing quarks and lep­tons.

• What I'm saying is that the high-level organization is not ontologically primitive. When we talk about organizational patterns of the brain, or the operation of neural synapses, we're talking about very high-level abstractions. Yes, they are useful abstractions, primarily because they ignore unnecessary detail. But that detail is how they are actually implemented. The brain is a soup of organic particles with very high rates of particle interaction due simply to thermodynamic noise. At the nanometer and femtosecond scale, there is very little signal to noise; however, at the micrometer and millisecond scale, general trends start to emerge—phenomena which form the substrate of our computation. But these high-level abstractions don't actually exist—they are just average approximations over time of lower-level, noisy interactions.

I assume you would agree that a normal adult human brain experiences a subjective feeling of consciousness that persists from moment to moment. I also think it's a fair bet that you would not think that a single electron bouncing around in some part of a synaptic pathway or electronic transistor has anything resembling a conscious experience. But somehow, a big aggregation of these random motions does add up to you or me. So at what point in the formation of a human brain, or construction of an AI, does it become conscious? At what point does mere dead matter transform into sentience? Is this a hard cutoff? Is it gradual?

Speaking of gradations, certain animals can't recognize themselves in a mirror. If you use self-awareness as a metric, as was argued elsewhere, does that mean they're not conscious? What about insects, which operate with a more distributed neural system? Dung beetles seem to accomplish most tasks by innate reflex response. Do they have at least a little, tiny subjective experience of consciousness? Or is their existence no more meaningful than that of a stapler?

Yes, this ob­jec­tion ap­plies equally to all ob­jects. That’s pre­cisely my point. Brains are not made of any kind of “mind stuff”—that’s sub­stance du­al­ism which I re­ject. Fur­ther­more, minds don’t have a sub­jec­tive ex­pe­rience sep­a­rate from what is phys­i­cally ex­plain­able—that’s epiphe­nom­e­nal­ism, similarly re­jected. “Minds ex­ist in in­for­ma­tion pat­terns” is a mys­te­ri­ous an­swer—in­for­ma­tion pat­terns are them­selves merely evolv­ing ex­pres­sions in the con­figu­ra­tion space of quarks & lep­tons. Any re­sult of the in­for­ma­tion pat­tern must be ex­plain­able in terms of the in­ter­ac­tions of its com­po­nent parts, or else we are no longer talk­ing about a re­duc­tion­ist uni­verse. If I am com­ing at this with a par­tic­u­lar bias, it is this: all as­pects of mind in­clud­ing con­scious­ness, sub­jec­tive ex­pe­rience, qualia, or what­ever you want to call it are fun­da­men­tally re­ducible to forces act­ing on el­e­men­tary par­ti­cles.

I see only two reductionist paths forward to take: (1) posit a new, fundamental law by which, at some aggregate level of complexity or organization, a computational substrate becomes conscious. How & why is not explained, and as far as I can tell there is no experimental way to determine where this cutoff is. But assume it is there. Or, (2) accept that, like everything else in the universe, consciousness reduces down to the properties of fundamental particles and their interactions (it is the interaction of particles). A quark and a lepton exchanging a photon is some minimal, quantum Planck-level unit of conscious experience. Yes, that means that even a rock and a stapler have some level of conscious experience—barely distinguishable from thermal noise, but nonzero—but the payoff is a more predictive reductionist model of the universe. In terms of biting bullets, I think accepting many-worlds took more gumption than this.

• I also think it's a fair bet that you would not think that a single electron bouncing around in some part of a synaptic pathway or electronic transistor has anything resembling a conscious experience. But somehow, a big aggregation of these random motions does add up to you or me. So at what point in the formation of a human brain, or construction of an AI, does it become conscious? At what point does mere dead matter transform into sentience? Is this a hard cutoff? Is it gradual?

This is a Wrong Question. Consciousness, whatever it is, is (P=.99) a result of a computation. My computer exhibits Microsoft Word behavior, but if I zoom in to the electrons and transistors in the CPU, I see no such Microsoft Word nature. It is silly to zoom in to quarks and leptons looking for the true essence of Microsoft Word. This is the way computations work—a small piece of the computation simply does not display behavior that is like the entire computation. The CPU is not the computation. It is not the atoms of the brain that are conscious; it is the algorithm that they run, and the atoms are not the algorithm. Consciousness is produced by non-conscious things.

“Minds ex­ist in in­for­ma­tion pat­terns” is a mys­te­ri­ous an­swer—in­for­ma­tion pat­terns are them­selves merely evolv­ing ex­pres­sions in the con­figu­ra­tion space of quarks & lep­tons. Any re­sult of the in­for­ma­tion pat­tern must be ex­plain­able in terms of the in­ter­ac­tions of its com­po­nent parts, or else we are no longer talk­ing about a re­duc­tion­ist uni­verse. If I am com­ing at this with a par­tic­u­lar bias, it is this: all as­pects of mind in­clud­ing con­scious­ness, sub­jec­tive ex­pe­rience, qualia, or what­ever you want to call it are fun­da­men­tally re­ducible to forces act­ing on el­e­men­tary par­ti­cles.

Minds exist in some algorithms (“information pattern” sounds too static for my taste). Your desire to reduce things to forces on elementary particles is misguided, I think, because you can do the same computation with many different substrates. The important thing, the thing we care about, is the computation, not the substrate. Sure, you can understand Microsoft Word at the level of quarks in a CPU executing assembly language, but it's much more useful to understand it in terms of functions and algorithms.

• You've completely missed / ignored my point, again. Microsoft Word can be functionally reduced to electrons in transistors. The brain can be functionally reduced to biochemistry. Unless you resort to some form of dualism, the mind (qualia) is also similarly reduced.

Just as computation can be brought down to the atomic scale (or smaller, with quantum computing), so too can conscious experiences be constructed out of such computational events. Indeed, they are one and the same thing, just viewed from different perspectives.

• The brain can be func­tion­ally re­duced to bio­chem­istry. Un­less you re­sort to some form of du­al­ism, the mind (qualia) is also similarly re­duced.

I thought dualism meant you thought that there was ontologically basic consciousness stuff separate from ordinary matter?

I think the mind should be re­duced to al­gorithms, and bio­chem­istry is an im­ple­men­ta­tion de­tail. This may make me a du­al­ist by your us­age of the word.

I think that it’s equally silly to ask, “where is the microsoft-word-ness” about a sub­set of tran­sis­tors in your CPU as it is to ask “where is the con­scious­ness” about a sub­set of neu­rons in your brain. I see this as de­scribing how non-on­tolog­i­cally-ba­sic con­scious­ness can be pro­duced by non-con­scious stuff.

You've completely missed / ignored my point, again.

• I'm arguing that if you think the mind can be reduced to algorithms implemented on a computational substrate, then it is a logical consequence of our understanding of the rules of physics and the nature of computation that what we call subjective experience must also scale down as you reduce a computational machine down to its parts. After all, the algorithms themselves are also reducible down to stepwise axiomatic logical operations, implemented as transistors or interpretable machine code.

The only way to pre­serve the com­mon in­tu­ition that “it takes (simu­la­tion of) a brain or equiv­a­lent to pro­duce a mind” is to posit some form of du­al­ism. I don’t think it is silly to ask “where is the microsoft-word-ness” about a sub­set of a com­puter—you can for ex­am­ple point to the re­gions of mem­ory and disk where the spel­lchecker is lo­cated, and say “this is the part that matches user in­put against ta­bles of lin­guis­tic data,” just like we point to re­gions of the brain and say “this is your lan­guage pro­cess­ing cen­ters.”

The experience of having a single, unified me directing my conscious experience is an illusion—it's what the integration process feels like from the inside, but it does not correspond to reality (we have psychological data to back this up!). I am in fact a society of agents, each simpler but also relying on an entire bureaucracy of other agents in an enormous distributed structure. Eventually, though, things reduce down to individual circuits, then ultimately to the level of individual cell receptors and chemical pathways. At no point along the way is there a clear division where it is obvious that conscious experience ends and what follows is merely mechanical, electrical, and chemical processes. In fact, as I've tried to point out, the divisions between higher-level abstractions and their messy implementations are in the map, not the territory.

To assert that “this level of algorithmic complexity is a mind, and below that is mere machines” is a retreat to dualism, though you may not yet see it that way. What you are asserting is that there is this ontologically basic mind-ness which spontaneously emerges when an algorithm has reached a certain level of complexity, but which is not the aggregation of smaller phenomena.

• I think we have re­ally differ­ent mod­els of how al­gorithms and their sub-com­po­nents work.

it is a logical consequence of our understanding of the rules of physics and the nature of computation that what we call subjective experience must also scale down as you reduce a computational machine down to its parts.

Suppose I have a computation that produces the digits of pi. It has subroutines which multiply and add. Is it an accurate description of these subroutines that they have a scaled-down property of computes-pi-ness? I think this is not a useful way to understand things. Subroutines do not have a scaled-down percentage of the properties of their containing algorithm; they do a discrete chunk of its work. It's just madness to say that, e.g., your language processing center is 57% conscious.
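A concrete sketch of that example (the helper names are mine): a program with computes-pi-ness built entirely out of subroutines that have none of it.

```python
def add(a, b):    # knows nothing about pi
    return a + b

def mul(a, b):    # knows nothing about pi
    return a * b

def leibniz_pi(n_terms):
    """pi = 4*(1 - 1/3 + 1/5 - 1/7 + ...); the pi-ness lives in the wiring."""
    total = 0.0
    for k in range(n_terms):
        total = add(total, mul((-1) ** k, 1 / (2 * k + 1)))
    return mul(4, total)

print(leibniz_pi(1_000_000))   # 3.14159... yet neither add() nor mul() is "57% pi"
```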

The ex­pe­rience of hav­ing a sin­gle, unified me di­rect­ing my con­scious ex­pe­rience is an illu­sion...

I agree with all this. Humans probably are not the minimal conscious system, and there are probably subsets of our component circuitry which maintain the property of consciousness. But yes, I maintain that eventually, you'll get to an algorithm that is conscious while none of its subroutines are.

If this makes me a du­al­ist then I’m a du­al­ist, but that doesn’t feel right. I mean, the only way you can re­ally ex­plain a thing is to show how it arises from some­thing that’s not like it in the first place, right?

• I think we have different models of what consciousness is. In your pi example, the multiplier has multiply-ness and the adder has add-ness properties, and when combined together in a certain way you get computes-pi-ness. Likewise, our minds have many, many, many different components which—somehow, someway—each have a small experiential quale; summed together, these yield the human condition.

Through brain damage studies, for example, we have descriptions of what it feels like to live without certain mental capabilities. I think you would agree with this, but for others reading, take this thought experiment: imagine that I were to systematically shut down portions of your brain, or, in simulation, delete regions of your memory space. For the purpose of the argument, I do it slowly over time, in relatively small amounts, cleaning up dangling references so the whole system doesn't shut down. Certainly, as time goes by, your mental functionality is reduced, and you stop being capable of having experiences you once took for granted. But at what point, precisely, do you stop experiencing qualia of any form at all? When you're down to just a billion neurons? A million? A thousand? When you're down to just one processing region? Is one tiny algorithm on a single circuit enough?

Hu­mans prob­a­bly are not the min­i­mal con­scious sys­tem, and there are prob­a­bly sub­sets of our com­po­nent cir­cuitry which main­tain the prop­erty of con­scious­ness. But yes, I main­tain that even­tu­ally, you’ll get to an al­gorithm that is con­scious while none of its sub­rou­tines are.

What is the minimal conscious system? It's easy and perhaps accurate to say “I don't know.” After all, neither one of us knows enough neural and cognitive science to make this call, I assume. But we should be able to answer this question: “If presented with criteria for a minimally conscious system, what would convince me of its validity?”

If this makes me a du­al­ist then I’m a du­al­ist, but that doesn’t feel right. I mean, the only way you can re­ally ex­plain a thing is to show how it arises from some­thing that’s not like it in the first place, right?

Eliezer's post on reductionism is relevant here. In a reductionist universe, anything and everything is fully defined by its constituent elements—no more, no less. There's a popular phrase that has no place in reductionist theories: “the whole is greater than the sum of its parts.” Typically, what this actually means is that you failed to count the “parts” correctly: a parts list should also include spatial configurations and initial conditions, which together imply the dynamic behaviors as well. For example, a pulley is more than a hunk of metal and some rope, but it is fully defined if you specify how the metal is shaped, how the rope is threaded through it and fixed to objects with knots, how the whole contraption is oriented with respect to gravity, and the procedure for applying rope-pulling force. Combined with the fundamental laws of physics, this is a fully reductive explanation of a rope-pulley system which is the sum of its fully-defined parts.

And so it goes with con­scious­ness. Un­less we are com­fortable with the mys­te­ri­ous an­swers pro­vided by du­al­ism—or em­piri­cal ev­i­dence like con­fir­ma­tion of psy­chic phe­nomenon com­pels us to go there—then we must de­mand that an ex­pla­na­tion be pro­vided that ex­plains con­scious­ness fully as the ag­gre­ga­tion of smaller pro­cesses.

When I look at explanations of the workings of the brain, starting with the highest-level psychological theories and neural structure, and working my way all the way down the abstraction hierarchy to individual neural synapses and biochemical pathways, nowhere along the way do I see an obvious place to stop and say “here is where consciousness begins!” Likewise, I can start from the level of mere atoms and work my way up to the full neural architecture without finding any step that adds something which could be consciousness, but which isn't fundamentally like the levels below it. But when you get to the highest level, you've described the full brain without finding consciousness anywhere along the way.

I can see how this leads oth­er­wise in­tel­li­gent philoso­phers like David Chalmers to epiphe­nom­e­nal­ism. But I’m not go­ing to go down that path, be­cause the whole situ­a­tion is the re­sult of men­tal con­fu­sion.

The Standard Rationalist Answer is that mental processes are information patterns, nothing more, and that consciousness is an illusion, end of story. But that still leaves me confused! It's not like free will, for example, where because of the mind projection fallacy I think I have free will due to how a deterministic decision-theory algorithm feels from the inside. I get that. No, the answer of “that subjective experience of consciousness isn't real, get over it” is unsatisfactory, because if I don't have consciousness, how am I experiencing thinking in the first place? Cogito ergo sum.

However, there is a way out. I went looking for a source of consciousness because I, like nearly every other philosopher, assumed that there was something special and unique which set brains aside as having minds, which other, more mundane objects—like rocks and staplers—do not possess. That seems so obviously true, but honestly I have no real justification for that belief. So let's try negating it. What is possible if we don't exclude mundane things from having minds too?

Well, what does it feel like to be a quark and a lepton exchanging a photon? I'm not really sure, but let's call that approximately the minimum possible “experience”, and say that for the duration of their continuous interaction over time, the two particles share a “mind”. Arrange a number of these objects together and you get an atom, which itself also has a shared/merged experience so long as the particles remain in bonded interaction. Arrange a lot of atoms together and you get an electrical transistor. Now we're finally starting to get to a level where I have some idea of what the “shared experience of being a transistor” would be (rather boring, by my standards), and more importantly, it's clear how that experience is aggregated together from its constituent parts. From here, computing theory takes over as more complex interdependent systems are constructed, each merging experiences together into a shared hive mind, until you reach the level of the human being or AI.

Are you at least fol­low­ing what I’m say­ing, even if you don’t agree?

• That was a very long com­ment (thank you for your effort) and I don’t think I have the en­ergy to ex­haus­tively go through it.

I be­lieve I fol­low what you’re say­ing. It doesn’t make much sense to me, so maybe that be­lief is false.

I think the fact that you can start with a brain, which is presumably conscious, and zoom in all the way looking for the consciousness boundary, and then start with a quark, which is presumably not conscious, and zoom all the way out to the entire brain, without in either case finding a consciousness barrier—I think this means that the best we can do at the moment is set upper and lower bounds.

A minimally conscious system—say, something that can convince me that it thinks it is conscious. “echo 'I'm conscious!'” doesn't quite cut it; things that recognize themselves in mirrors probably do, and I could go either way on the stuff in between.

I think your re­duc­tion­ism is a lit­tle mis­ap­plied. My pi-calcu­lat­ing pro­gram de­vel­ops a new prop­erty of pi-com­pu­ta­tion when you put the ad­ders and mul­ti­pli­ers to­gether right, but is com­pletely de­scribed in terms of ad­ders and mul­ti­pli­ers. I ex­pect con­scious­ness to be ex­actly the same; it’ll be com­pletely de­scribed in terms of qualia gen­er­at­ing al­gorithms (or some such), which won’t them­selves have the con­scious­ness prop­erty.

This is hard to see because the algorithms are written in spaghetti code, in the wiring between neurons. In computer terms, we have access to the I/O system and all the gates in the CPU, but we don't currently know how they're connected. Looking at more or fewer of the gates doesn't help, because the critical piece of information is how they're connected and what algorithm they implement.

IMO, my guess (P=.65) is that qualia are go­ing to turn out to be some­thing like vec­tors in a fea­ture space. Un­der this model, clearly sys­tems in­ca­pable of rep­re­sent­ing such a vec­tor can’t have any qualia at all. Rocks and sin­gle molecules, for ex­am­ple.
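A toy rendering of that guess (every feature axis, number, and name here is invented for illustration): if a quale is a vector in a feature space, similarity of experience is geometric closeness, and a system that cannot represent such vectors has no point in the space at all.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Hypothetical feature axes: (hue-axis, brightness, warmth-association)
red    = (0.95, 0.6, 0.9)
orange = (0.80, 0.7, 0.8)
blue   = (0.10, 0.5, 0.1)

print(cosine(red, orange))   # ~0.99: nearby qualia under this toy model
print(cosine(red, blue))     # ~0.65: more distant
```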

• How do you ex­plain “feel­ing like” and “ex­pe­rience” in gen­eral? This is LW so I as­sume you have a re­duc­tion­ist back­ground and would offer an ex­pla­na­tion based on in­for­ma­tion pat­terns, neu­ron firings, hor­mone lev­els, etc.

I indeed have a reductionist background, but I offer no explanation, because I have none. I do not even know what an explanation could possibly look like; but neither do I take that as proof that there cannot be one. The story you tell surrounds the central mystery with many physical details, but even in your own account of it the mystery remains unresolved:

Some­where along the line, the sub­jec­tive ex­pe­rience of “con­scious­ness” arises.

How­ever much you as­sert that there must be an ex­pla­na­tion, I see here no ad­vance to­wards ac­tu­ally hav­ing one. What does it mean to at­tribute con­scious­ness to sub­atomic par­ti­cles and rocks? Does it pre­dict any­thing, or does it only pre­dict that we could make pre­dic­tions about tele­porters and simu­la­tions if we had a phys­i­cal ex­pla­na­tion of con­scious­ness?

• Hy­poth­e­sis: con­scious­ness is what a phys­i­cal in­ter­ac­tion feels like from the in­side.

I would imag­ine that con­scious­ness (in a sense of self-aware­ness) is the abil­ity to in­tro­spect into your own al­gorithm. The more you un­der­stand what makes you tick, rather than mind­lessly fol­low­ing the in­ex­pli­ca­ble urges and in­stincts, the more con­scious you are.

• Yes, that is not only 100% ac­cu­rate, but de­scribes where I’m headed.

I am look­ing for the sim­plest ex­pla­na­tion of the sub­jec­tive con­ti­nu­ity of per­sonal iden­tity, which ei­ther an­swers or dis­solves the ques­tion. Fur­ther, the ex­pla­na­tion should ei­ther ex­plain which tele­por­ta­tion sce­nario is cor­rect (iden­tity trans­fer, or mur­der+birth), or satis­fac­to­rily ex­plain why it is a mean­ingless dis­tinc­tion.

What is there to pre­dict here?

Whether I, the person standing in front of the transporter door, will experience walking on Mars, or oblivion.

Yes, it is perhaps likely that this will never be experimentally observable. That may even be a tautology, since we are talking about subjective experience. But still, a reductionist theory of consciousness could provide a simple, easy-to-understand explanation for the origin of personal identity (e.g., what a computational machine feels like from the inside) which predicts either identity transfer or murder + birth. That would be enough for me, at least as long as there are no competing, equally simple theories.

• What is there to pre­dict here?
Whether I, the person standing in front of the transporter door, will experience walking on Mars, or oblivion.

Well, you cer­tainly won’t ex­pe­rience oblivion, more or less by defi­ni­tion. The ques­tion is whether you will ex­pe­rience walk­ing on Mars or not.

But there is no distinct observation to be made in these two cases. That is, we agree that either way there will be an entity having all the observable attributes (both subjective and objective; this is not about experimental proof, it’s about the presence or absence of anything differentially observable by anyone) that Mark Friedenbach has, walking on Mars.

So, let me rephrase the ques­tion: what ob­ser­va­tion is there to pre­dict here?

• So, let me rephrase the ques­tion: what ob­ser­va­tion is there to pre­dict here?

That’s not the direction I was going with this. It isn’t about empirical observation, but rather about aspects of morality which depend on subjective experience. The prediction is about the conditions under which subjective experience terminates. Even if not testable, that is still an important thing to find out, with moral implications.

Is it moral to use a tele­porter? From what I can tell, that de­pends on whether the per­son’s sub­jec­tive ex­pe­rience is ter­mi­nated in the pro­cess. From the util­ity point of view the out­comes are very nearly the same—you’ve mur­dered one per­son, but given “birth” to an iden­ti­cal copy in the pro­cess. How­ever if the origi­nal, now de­stroyed per­son didn’t want to die, or wouldn’t have wanted his clone to die, then it’s a net nega­tive.

As I said el­se­where, the tele­porter is the eas­iest way to think of this, but the re­sult has many other im­pli­ca­tions from gen­eral anes­the­sia, to cry­on­ics, to Pas­cal’s mug­ging and the basilisk.

• OK. I’m tap­ping out here. Thanks for your time.

• I want my Al­cor con­tract to ex­plic­itly for­bid up­load­ing as a restora­tion pro­cess, be­cause I am un­con­vinced that a simu­la­tion of my de­struc­tively scanned frozen brain would re­ally be a con­tinu­a­tion of my per­sonal iden­tity.

Like TheOtherDave (I presume), I consider my identity to be adequately described by whatever Turing machine can emulate my brain, or at least its prefrontal cortex + relevant memory storage. I suspect that a faithful simulation of just my Brodmann area 10, coupled with a large chunk of my memories, would restore enough of my self-awareness to be considered “me”. This sim-me would probably lose most of my emotions without the rest of the brain, but it is still infinitely better than none.

• a large chunk of my mem­o­ries

You’ll need the rest of the brain be­cause these other mem­o­ries would be dis­tributed through­out the rest of your cor­tex. The hip­pocam­pus only con­tains re­cent epi­sodic mem­o­ries.

If you lost your tem­po­ral lobe, for ex­am­ple, you’d lose all non-epi­sodic knowl­edge con­cern­ing what the names of things are, how they are cat­e­go­rized, and what the re­la­tion­ships be­tween them are.

• That said, I’m not sure why I should care much about hav­ing my non-epi­sodic knowl­edge re­placed with an off-the-shelf en­cy­clo­pe­dia mod­ule. I don’t iden­tify with it much.

• If you only kept the hip­pocam­pus, you’d lose your non-re­cent epi­sodic mem­o­ries too. But tech­ni­cal is­sues aside, let me defend the “en­cy­clo­pe­dia”:

Episodic memory is basically a cassette reel of your life, along with a few personalized associations and maybe memories of thoughts and emotions. Everything that we associate with the word knowledge is non-episodic. It’s not just verbal labels—that was just a handy example that I happened to know the brain region for. I’d actually care more about the non-episodic memories than the episodic stuff.

Things like “what is your wife’s name and what does her face look like” are non-episodic memory. You don’t have to think back to a time when you specifically saw your wife to remember what her name and face are, and that you love her—that information is treated as a fact independent of any specific memory, indelibly etched into your model of the world. Cognitively speaking, “I love my wife Stacy, she looks like this” is as much of a fact as “grass is a green plant”, and they are both non-episodic memories. Your episodic memory reel wouldn’t even make sense without that sort of information. I’d still identify someone with memory loss, but retaining my non-episodic memory, as me. I’d identify someone with only my episodic memories as someone else, looking at a reel of memory that does not belong to them and means nothing to them.

(Trigger Warning: link contains writing in a diary which is sad, horrifying, and nonfiction.) This is what complete episodic memory loss looks like. Patients like this can still remember the names and faces of people they love.

Ironically... area 10 might actually be replaceable. I’m not sure whether any personalized memories are kept there—I don’t know what that specific region does, but it’s in an area that mostly deals with executive function—which is important for personality, but not necessarily individuality.

• Ironically... area 10 might actually be replaceable. I’m not sure whether any personalized memories are kept there—I don’t know what that specific region does, but it’s in an area that mostly deals with executive function—which is important for personality, but not necessarily individuality.

What’s the differ­ence be­tween per­son­al­ity and in­di­vi­d­u­al­ity?

Personality is a set of dichotomous variables plotted on a bell curve. “Einstein was extroverted, charismatic, nonconforming, and prone to absent-mindedness” describes his personality. We all have these traits in various amounts. You can turn some of these personality knobs really easily with drugs. I can’t specify Einstein out of every person in the world using only his personality traits—I can only specify individuals similar to him.

Individuality is stuff that’s specific to the person. “Einstein’s second marriage was to his cousin and he had at least 6 affairs. He admired Spinoza, and was a contemporary of Tagore. He was a socialist and cared about civil rights. He had always thought there was something wrong about refrigerators.” Not all of these are dichotomous variables—you either spoke to Tagore or you didn’t. And it makes no sense to put people on a “satisfaction with refrigerators” spectrum, even though I suppose you could if you wanted to. And all this information together specifically points to Einstein, and no one else in the world. Everyone in the world has a set of unique traits, like fingerprints—and it doesn’t even make sense to ask what the “average” is, since most of the variables don’t exist on the same dimension.

And...well, when it comes to Area 10, just in­tu­itively, do you re­ally want to define your­self by a few vari­ables that in­fluence your ex­ec­u­tive func­tion? Per­son­ally I define my­self par­tially by my ideas, and par­tially by my val­ues...and the former is definitely in the “in­di­vi­d­u­al­ity” ter­ri­tory.

• OK, I un­der­stand what you mean by per­son­al­ity vs in­di­vi­d­u­al­ity. How­ever, I doubt that the func­tion­al­ity of BA10 can be de­scribed “by a few vari­ables that in­fluence your ex­ec­u­tive func­tion”. Then again, no one knows any­thing definite about it.

• I take it you’re as­sum­ing that in­for­ma­tion about my hus­band, and about my re­la­tion­ship to my hus­band, isn’t in the en­cy­clo­pe­dia mod­ule along with in­for­ma­tion about mice and omelettes and your re­la­tion­ship to your wife.

If that’s true, then sure, I’d pre­fer not to lose that in­for­ma­tion.

• I take it you’re as­sum­ing

Well...yeah, I was. I thought the whole idea of hav­ing an en­cy­clo­pe­dia was to elimi­nate re­dun­dancy through stan­dard­iza­tion of the parts of the brain that were not im­por­tant for in­di­vi­d­u­al­ity?

If your husband and my husband, your omelette and my omelette, are all stored in the encyclopedia, it wouldn’t be an “off-the-shelf encyclopedia module” anymore. It would be an index containing individual people’s non-episodic knowledge. At that point, it’s just an index of partial uploads. We can’t standardize that encyclopedia to everyone: if the thing that stores your omelette and your husband went around viewing my episodic reel and knowing all the personal stuff about my omelette and husband... that would be weird, and the resulting being would be very confused (let alone if the entire human race was in there—I’m not sure how that would even work).

(Also, going back into the technical stuff, there may or may not be a solid dividing line between very old episodic memory and non-episodic memory.)

• Sure, if your omelette and my omelette are so dis­tinct that there is no com­mon data struc­ture that can serve as a refer­ent for both, and ditto for all the other peo­ple in the world, then the whole idea of an en­cy­clo­pe­dia falls apart. But that doesn’t seem ter­ribly likely to me.

Your con­cept of an omelette prob­a­bly isn’t ex­actly iso­mor­phic to mine, but there’s prob­a­bly a parametriz­able omelette data struc­ture we can con­struct that, along with a hand­ful of pa­ram­e­ter set­tings for each in­di­vi­d­ual, can cap­ture ev­ery­one’s omelette. The pa­ram­e­ter set­tings go in the rep­re­sen­ta­tion of the in­di­vi­d­ual; the omelette data struc­ture goes in the en­cy­clo­pe­dia.

And, in ad­di­tion, there’s a bunch of in­di­vi­d­u­al­iz­ing epi­sodic mem­ory on top of that… mem­o­ries of cook­ing par­tic­u­lar omelettes, of learn­ing to cook an omelette, of learn­ing par­tic­u­lar recipes, of that time what ought to have been an omelette turned into a black smear on the pan, etc. And each of those epi­sodic mem­o­ries refers to the shared omelette data struc­ture, but is stored with and is unique to the up­loaded agent. (Maybe. It may turn out that our in­di­vi­d­ual epi­sodic mem­o­ries have a lot in com­mon as well, such that we can store a stan­dard life­time’s mem­o­ries in the shared en­cy­clo­pe­dia and just store a few mil­lion bits of pa­ram­e­ter set­tings in each in­di­vi­d­ual pro­file. I sus­pect we over­es­ti­mate how unique our per­sonal nar­ra­tives are, hon­estly.)

Similarly, it may be that our re­la­tion­ships with our hus­bands are so dis­tinct that there is no com­mon data struc­ture that can serve as a refer­ent for both. But that doesn’t seem ter­ribly likely to me. Your re­la­tion­ship with your hus­band isn’t ex­actly iso­mor­phic to mine, of course, but it can likely similarly be cap­tured by a com­mon pa­ram­e­ter­i­z­able re­la­tion­ship-to-hus­band data struc­ture.

As for the actual individual who happens to be my husband, well, the majority of the information about him is common to all kinds of relationships with any number of people. He is his father’s son and his stepmother’s stepson and my mom’s son-in-law and so on and so forth. And, sure, each of those people knows different things, but they know those things about the same person; there is a central core. That core goes in the encyclopedia, and pointers to whatever subset each person knows about him go in their individual profiles (along with their personal experiences and whatever idiosyncratic beliefs they have about him).

So, yes, I would say that your husband and my husband and your omelette and my omelette are all stored in the encyclopedia. You can call that an index of partial uploads if you like, but it fails to incorporate whatever additional computations create first-person experience. It’s just a passive data structure.

In­ci­den­tally and un­re­lat­edly, I’m not nearly as com­mit­ted as you sound to pre­serv­ing our cur­rent ig­no­rance of one an­other’s per­spec­tive in this new ar­chi­tec­ture.

• I’m really skeptical that parametric functions which vary on dimensions concerning omelettes (egg species? color? ingredients? how does this even work?) are a more efficient or more accurate way of preserving what our wetware encodes, compared to simulating the neural networks devoted to dealing with omelettes. I wouldn’t even know how to start working on the problem of mapping a conceptual representation of an omelette into parametric functions (unless we’re just using the parametric functions to model the properties of individual neurons—that’s fine).

Can you give an ex­am­ple con­cern­ing what sort of di­men­sion you would parametrize so I have a bet­ter idea of what you mean?

In­ci­den­tally and un­re­lat­edly, I’m not nearly as com­mit­ted as you sound to pre­serv­ing our cur­rent ig­no­rance of one an­other’s per­spec­tive in this new ar­chi­tec­ture.

I was more worried that it might break stuff (as in, the resulting beings would need to be built quite differently in order to function) if one another’s perspectives overlapped. Also, that brings us back to the original question I was raising about living forever—what exactly is it that we value and want to preserve?

• Can you give an ex­am­ple con­cern­ing what sort of di­men­sion you would parametrize so I have a bet­ter idea of what you mean?

Not re­ally. If I were se­ri­ous about im­ple­ment­ing this, I would start col­lect­ing dis­tinct in­stances of omelette-con­cepts and an­a­lyz­ing them for vari­a­tion, but I’m not go­ing to do that. My ex­pec­ta­tion is that if I did, the most use­ful di­men­sions of vari­abil­ity would not map to any at­tributes that we would or­di­nar­ily think of or have English words for.

Per­haps what I have in mind can be said more clearly this way: there’s a cer­tain amount of in­for­ma­tion that picks out the space of all hu­man omelette-con­cepts from the space of all pos­si­ble con­cepts… call that bit­string S1. There’s a cer­tain amount of in­for­ma­tion that picks out the space of my omelette-con­cept from the space of all hu­man omelette-con­cepts… call that bit­string S2.

S2 is much, much, shorter than S1.

It’s in­effi­cient to have 7 billion hu­man minds each of which is tak­ing up valuable bits stor­ing 7 billion copies of S1 along with their in­di­vi­d­ual S2s. Why in the world would we do that, posit­ing an ar­chi­tec­ture that didn’t phys­i­cally re­quire it? Run a bloody com­pres­sion al­gorithm, store S1 some­where, have each hu­man mind re­fer to it.

I have no idea what S1 or S2 are.

And I don’t ex­pect that they’re ex­press­ible in words, any more than I can ex­press which pieces of a movie are stored as in­dexed sub­strings… it’s not like MPEG com­pres­sion of a movie of an auto race cre­ates an in­dexed “car” data struc­ture with pa­ram­e­ters rep­re­sent­ing color, make, model, etc. It just iden­ti­fies re­peated sub­strings and in­dexes them, and takes ad­van­tage of the fact that se­quen­tial frames share many sub­strings in com­mon if prop­erly parsed.

But I’m com­mit­ted enough to a com­pu­ta­tional model of hu­man con­cept stor­age that I be­lieve they ex­ist. (Of course, it’s pos­si­ble that our con­cept-space of an omelette sim­ply can’t be picked out by a bit-string, but I can’t see why I should take that pos­si­bil­ity se­ri­ously.)
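A minimal sketch of that storage argument (Python; “encyclopedia”, the concept names, and the parameter format are all hypothetical): the large shared component S1 is stored once, and each mind keeps only its small private S2 plus a pointer.

```python
ENCYCLOPEDIA = {}  # concept_id -> large shared representation (S1), stored once

class Mind:
    def __init__(self, name):
        self.name = name
        self.deltas = {}  # concept_id -> small personal parameters (S2)

    def learn(self, concept_id, personal_params):
        self.deltas[concept_id] = personal_params

    def recall(self, concept_id):
        # A recollection = shared S1 plus this mind's private S2.
        return (ENCYCLOPEDIA[concept_id], self.deltas[concept_id])

ENCYCLOPEDIA["omelette"] = "eggs, butter, pan, folding technique..."  # S1
dave, alice = Mind("Dave"), Mind("Alice")
dave.learn("omelette", {"preferred_filling": "cheese"})
alice.learn("omelette", {"preferred_filling": "mushroom"})

# 7 billion minds share one copy of S1; each pays storage only for its S2.
print(dave.recall("omelette"))
```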

• Oh, and agreed that we would change if we were ca­pa­ble of shar­ing one an­other’s per­spec­tives.
I’m not par­tic­u­larly in­ter­ested in pre­serv­ing my cur­rent cog­ni­tive iso­la­tion from other hu­mans, though… I value it, but I value it less than I value the abil­ity to eas­ily share per­spec­tives, and they seem to be op­posed val­ues.

• I think I’ve got a good re­sponse for this one.

My non-episodic memory contains the “facts” that Buffy the Vampire Slayer was one of the best television shows ever made, and that Pink Floyd aren’t an interesting band. My boyfriend’s non-episodic memory contains the facts that Buffy was boring, unoriginal, and repetitive (and that Pink Floyd’s music is transcendentally good).

Ob­jec­tively, these are opinions, not facts. But we ex­pe­rience them as facts. If I want to pre­serve my sense of iden­tity, then I would need to re­tain the facts that were in my non-epi­sodic mem­ory. More than that, I would also lose my sense of self if I gained con­tra­dic­tory mem­o­ries. I would need to have my non-epi­sodic mem­o­ries and not have the facts from my boyfriend’s mem­ory.

That’s the rea­son why “off the shelf” doesn’t sound suit­able in this con­text.

• So, on one level, my response to this is similar to the one I gave a few years ago (http://lesswrong.com/lw/qx/timeless_identity/9trc)… I agree that there’s a personal relationship with BtVS, just like there’s a personal relationship with my husband, that we’d want to preserve if we wanted to perfectly preserve me.

I was merely arguing that the bitlength of that personal information is much less than the actual information content of my brain, and that there’s a great deal of compression leverage to be gained by taking the shared memories of BtVS out of both of your heads (and the heads of thousands of other viewers), replacing them with pointers to a common library representation of the show, and then having your personal relationship refer to the common library representation rather than your private copy.

The per­sonal re­la­tion­ship re­mains lo­cal and pri­vate, but it takes up way less space than your mind cur­rently does.

That said… com­ing back to this con­ver­sa­tion af­ter three years, I’m find­ing I just care less and less about pre­serv­ing what­ever sense of self de­pends on these sorts of idiosyn­cratic judg­ments.

I mean, when you try to recall a BtVS episode, your memory is imperfect… if you watch it again, you’ll uncover all sorts of information you either forgot or remembered wrong. If I offered to give you perfect eidetic recall of BtVS—no distortion of your current facts about the goodness of it, except insofar as those facts turn out to be incompatible with an actual perception (e.g., you’d have changed your mind if you watched it again on TV, too) -- would you take it?

I would. I mean, ul­ti­mately, what does it mat­ter if I re­place my cur­rent vague mem­ory of the soap opera Spike was ob­ses­sively watch­ing with a more spe­cific mem­ory of its name and what­ever else we learned about it? Yes, that vague mem­ory is part of my unique iden­tity, I guess, in that no­body else has quite ex­actly that vague mem­ory… but so what? That’s not enough to make it worth pre­serv­ing.

And for all I know, maybe you agree with me… maybe you don’t want to pre­serve your pri­vate “facts” about what kind of tie Giles was wear­ing when An­gel tor­tured him, etc., but you draw the line at los­ing your pri­vate “facts” about how good the show was. Which is fine, you care about what you care about.

But if you told me right now that I’m actually an upload with reconstructed memories, and that there was a glitch such that my current “fact” about BtVS being a good show for its time was mis-reconstructed, and Dave before he died thought it was mediocre… well, so what?

I mean, be­fore my stroke, I re­ally dis­liked pep­pers. After my stroke, pep­pers tasted pretty good. This was startling, but it posed no sort of challenge to my sense of self.

Ap­par­ently (Me + likes pep­pers) ~= (Me + dis­likes pep­pers) as far as I’m con­cerned.

I sus­pect there’s a mil­lion other things like that.

• Like TheOtherDave (I presume), I consider my identity to be adequately described by whatever Turing machine can emulate my brain, or at least its prefrontal cortex + relevant memory storage.

There’s a very wide range of pos­si­ble minds I con­sider to pre­serve my iden­tity; I’m not sure the ma­jor­ity of those em­u­late my pre­frontal cor­tex sig­nifi­cantly more closely than they em­u­late yours, and the ma­jor­ity of my mem­o­ries are not shared by the ma­jor­ity of those minds.

• Interesting. I wonder what you would consider a mind that preserves your identity. For example, I assume that the total of your posts online, plus whatever other information is available without some hypothetical future brain scanner, all running as a process on some simulator, is probably not enough.

• At one ex­treme, if I as­sume those posts are be­ing used to cre­ate a me-simu­la­tion by me-simu­la­tion-cre­ator that liter­ally knows noth­ing else about hu­mans, then I’m pretty con­fi­dent that the re­sult is noth­ing I would iden­tify with. (I’m also pretty sure this sce­nario is in­ter­nally in­con­sis­tent.)

At an­other ex­treme, if I as­sume the me-simu­la­tion-cre­ator has ac­cess to a stan­dard tem­plate for my gen­eral de­mo­graphic and is just look­ing to cus­tomize that tem­plate suffi­ciently to pick out some sub­set of the vol­ume of mindspace my suffi­ciently pre­served iden­tity defines… then maybe. I’d have to think a lot harder about what in­for­ma­tion is in my on­line posts and what in­for­ma­tion would plau­si­bly be in such a tem­plate to even ex­press a con­fi­dence in­ter­val about that.

That said, I’m cer­tainly not com­fortable treat­ing the re­sult of that pro­cess as pre­serv­ing “me.”

Then again I’m also not com­fortable treat­ing the re­sult of liv­ing a thou­sand years as pre­serv­ing “me.”

• Because the notion of “me” is not an ontologically basic category, and the question of whether the “real me” wakes up is a question that ought to be un-asked.

I’m a bit con­fused at the ques­tion...you ar­tic­u­lated my in­tent with that sen­tence perfectly in your other post.

Hrm.. am­bigu­ous se­man­tics. I took it to im­ply ac­cep­tance of the idea but not ele­va­tion of its im­por­tance, but I see how it could be in­ter­preted differ­ently.

and, as TheOtherDave said,

pre­sum­ably that also helps ex­plain how they can sleep at night.

EDIT: Nev­er­mind, I now un­der­stand which part of my state­ment you mi­s­un­der­stood.

I’m not accepting-but-not-elevating the idea that the “real me” doesn’t wake up on the other side. Rather, I’m saying that questions of personal identity over time do not make sense in the first place. It’s like asking “which color is the most moist?”

You actually continue functioning when you sleep; it’s just that you don’t remember details once you wake up. A more useful example for such discussion is general anesthesia, which shuts down the regions of the brain associated with consciousness. If personal identity is in fact derived from continuity of computation, then it is plausible that general anesthesia would result in a “different you” waking up after the operation. The application to cryonics depends greatly on the subtle distinction of whether vitrification (and, more importantly, the recovery process) slows down or stops computation. This has been a source of philosophical angst for me personally, but I’m still a cryonics member.

More trou­bling is the ap­pli­ca­tion to up­load­ing. I haven’t done this yet, but I want my Al­cor con­tract to ex­plic­itly for­bid up­load­ing as a restora­tion pro­cess, be­cause I am un­con­vinced that a simu­la­tion of my de­struc­tively scanned frozen brain would re­ally be a con­tinu­a­tion of my per­sonal iden­tity. I was hop­ing that “Time­less Iden­tity” would ad­dress this point, but sadly it punts the is­sue.

The root of your philo­soph­i­cal dilemma is that “per­sonal iden­tity” is a con­cep­tual sub­sti­tu­tion for soul—a sub­jec­tive thread that con­nects you over space and time.

No such thing exists. There is no specific location in your brain which is you. There is no specific time point which is you. Subjective experience exists only in the fleeting present. The only “thread” connecting you to your past experiences is your current subjective experience of remembering them. That’s all.

• I always won­der how I should treat my fu­ture self if I re­ject the con­ti­nu­ity of self. Should I think of him like a son? A spouse? A stranger? Should I let him get fat? Not get him a de­gree? In­vest in stock for him? Give him an­other child?

• I think it matters insofar as it assists your present trajectory. Otherwise it might as well be an unfeeling entity.

• The root of your philo­soph­i­cal dilemma is that “per­sonal iden­tity” is a con­cep­tual sub­sti­tu­tion for soul—a sub­jec­tive thread that con­nects you over space and time.

No such thing exists. There is no specific location in your brain which is you. There is no specific time point which is you. Subjective experience exists only in the fleeting present. The only “thread” connecting you to your past experiences is your current subjective experience of remembering them. That’s all.

I have a strong sub­jec­tive ex­pe­rience of mo­ment-to-mo­ment con­ti­nu­ity, even if only in the fleet­ing pre­sent. Sim­ply say­ing “no such thing ex­ists” doesn’t do any­thing to re­solve the un­der­ly­ing con­fu­sion. If no such thing as per­sonal iden­tity ex­ists, then why do I ex­pe­rience it? What is the un­der­ly­ing in­sight that elimi­nates the ques­tion?

This is not an ab­stract ques­tion ei­ther. It has huge im­pli­ca­tions for the con­struc­tion of time­less de­ci­sion the­ory and util­i­tar­ian meta­moral­ity.

• “a strong sub­jec­tive ex­pe­rience of mo­ment-to-mo­ment con­ti­nu­ity” is an ar­ti­fact of the al­gorithm your brain im­ple­ments. It cer­tainly ex­ists in as much as the al­gorithm it­self ex­ists. So does your per­sonal iden­tity. If in the fu­ture it be­comes pos­si­ble to run the same al­gorithm on a differ­ent hard­ware, it will still pro­duce this sense of per­sonal iden­tity and will feel like “you” from the in­side.

• Yes, I’m not ques­tion­ing whether a fu­ture simu­la­tion /​ em­u­la­tion of me would have an iden­ti­cal sub­jec­tive ex­pe­rience. To re­ject that would be a re­treat to epiphe­nom­e­nal­ism.

Let me rephrase the ques­tion, so as to ex­pose the prob­lem: if I were to use ad­vanced tech­nol­ogy to have my brain scanned to­day, then got hit by a bus and cre­mated, and then 50 years from now that brain scan is used to em­u­late me, what would my sub­jec­tive ex­pe­rience be to­day? Do I ex­pe­rience “HONK Screeeech, bam” then wake up in a com­puter, or is it “HONK Screeeech, bam” and oblivion?

Yes, I realize that both cases result in a computer simulation of Mark in 2063 claiming to have just woken up in the brain scanner, with a subjective feeling of continuity. But is that belief true? In the two situations there’s a very different outcome for the Mark of 2013. If you can’t see that, then I think we are talking about different things, and maybe we should taboo the phrase “personal/subjective identity”.

• if I were to use ad­vanced tech­nol­ogy to have my brain scanned to­day, then got hit by a bus and cre­mated, and then 50 years from now that brain scan is used to em­u­late me, what would my sub­jec­tive ex­pe­rience be to­day? Do I ex­pe­rience “HONK Screeeech, bam” then wake up in a com­puter, or is it “HONK Screeeech, bam” and oblivion?

• Ah, hopefully I’m slowly getting what you mean. So, there was the original you, Mark 2013, whose algorithm was terminated soon after it processed the inputs “HONK Screeeech, bam”, and the new you, Mark 2063, whose experience is “HONK Screeeech, bam” then “wake up in a computer”. You are concerned with… I’m having trouble articulating what exactly… something about the lack of experiences of Mark 2013? But, say, if Mark 2013 was restored to life in mostly the same physical body after a 50-year “oblivion”, you wouldn’t be?

• Ah, hope­fully I’m slowly get­ting what you mean. So, there was the origi­nal you, Mark 2013, whose al­gorithm was ter­mi­nated soon af­ter it pro­cessed the in­puts “HONK Screeeech, bam”, and the new you, Mark 2063, whose ex­pe­rience is “HONK Screeeech, bam” then “wake up in a com­puter”.

Pretty much cor­rect. To be spe­cific, if com­pu­ta­tional con­ti­nu­ity is what mat­ters, then Mark!2063 has my mem­o­ries, but was in fact “born” the mo­ment the simu­la­tion started, 50 years in the fu­ture. That’s when his iden­tity be­gan, whereas mine ended when I died in 2013.

This seems a little more intuitive when you consider switching on 100 different emulations of me at the same time. Did I somehow split into 100 different persons? Or were there in fact 101 separate subjective identities, 1 of which terminated in 2013 and 100 new ones created for the simulations? The latter is a more straightforward explanation, IMHO.

You are concerned with… I’m having trouble articulating what exactly… something about the lack of experiences of Mark 2013? But, say, if Mark 2013 was restored to life in mostly the same physical body after a 50-year “oblivion”, you wouldn’t be?

No, that would make lit­tle differ­ence as it’s pretty clear that phys­i­cal con­ti­nu­ity is an illu­sion. If pat­tern or causal con­ti­nu­ity were cor­rect, then it’d be fine, but both the­o­ries in­tro­duce other prob­lems. If com­pu­ta­tional con­ti­nu­ity is cor­rect, then a re­con­structed brain wouldn’t be me any more than a simu­la­tion would. How­ever it’s pos­si­ble that my cryo­geni­cally vit­rified brain would pre­serve iden­tity, if it were slowly brought back on­line with­out in­ter­rup­tion.

I’d have to learn more about how general anesthesia works to decide if personal identity would be preserved across the operating table (until then, it scares the crap out of me). Likewise, an AI or emulation running on a computer that is powered off and then later resumed would also break identity; but depending on the underlying nature of computation & subjective experience, task switching and online suspend/resume may or may not result in cycling identity.

I’ll stop there be­cause I’m try­ing to for­mu­late all these thoughts into a longer post, or maybe a se­quence of posts.

• Can you taboo “per­sonal iden­tity”? I don’t un­der­stand what im­por­tant thing you could lose by go­ing un­der gen­eral anes­the­sia.

• It’s eas­ier to ex­plain in the case of mul­ti­ple copies of your­self. Imag­ine the trans­porter were turned into a repli­ca­tor—it gets stuck in a loop re­con­struct­ing the last thing that went through it, namely you. You step off and turn around to find an­other ver­sion of you just com­ing out. And then an­other, and an­other, etc. Each one of you shares the same mem­o­ries, but from that mo­ment on you have di­verged. Each clone con­tinues life with their own sub­jec­tive ex­pe­rience un­til that ex­pe­rience is ter­mi­nated by that clone’s death.

That sense of sub­jec­tive ex­pe­rience sep­a­rate from mem­o­ries or shared his­tory is what I have been call­ing “per­sonal iden­tity.” It is what gives me the be­lief, real or illu­sory, that I am the same per­son from mo­ment to mo­ment, day to day, and what sep­a­rates me from my clones. You are wel­come to sug­gest a bet­ter term.

The repli­ca­tor /​ clone thought ex­per­i­ment shows that “sub­jec­tive ex­pe­rience of iden­tity” is some­thing differ­ent from the in­for­ma­tion pat­tern that rep­re­sents your mind. There is some­thing, al­though at this mo­ment that some­thing is not well defined, which makes you the same “you” that will ex­ist five min­utes in the fu­ture, but which sep­a­rates you from the “you”s that walked out of the repli­ca­tor, or ex­ist in simu­la­tion, for ex­am­ple.

The first step is rec­og­niz­ing this dis­tinc­tion. Then turn around and ap­ply it to less fan­tas­ti­cal situ­a­tions. If the clone is “you” but not you (mean­ing no shared iden­tity, and my apolo­gies for the weak ter­minol­ogy), then what’s to say that a fu­ture simu­la­tion of “you” would also be you? What about cry­on­ics, will your un­frozen brain still be you? That might de­pend on what they do to re­pair dam­age from vit­rifi­ca­tion. What about gen­eral anes­the­sia? Again, I need to learn more about how gen­eral anes­the­sia works, but if they shut down your pro­cess­ing cen­ters and then restart you later, how is that differ­ent from the tele­por­ta­tion or simu­la­tion sce­nario? After all we’ve already es­tab­lished that what­ever pro­vides per­sonal iden­tity, it’s not phys­i­cal con­ti­nu­ity.

• That sense of sub­jec­tive ex­pe­rience sep­a­rate from mem­o­ries or shared his­tory is what I have been call­ing “per­sonal iden­tity.” It is what gives me the be­lief, real or illu­sory, that I am the same per­son from mo­ment to mo­ment, day to day, and what sep­a­rates me from my clones.

Well, OK. So sup­pose that, af­ter I go through that trans­porter/​repli­ca­tor, you ask the en­tity that comes out whether it has the be­lief, real or illu­sory, that it is the same per­son in this mo­ment that it was at the mo­ment it walked into the ma­chine, and it says “yes”.

If per­sonal iden­tity is what cre­ates that be­lief, and that en­tity has that be­lief, it fol­lows that that en­tity shares my per­sonal iden­tity… doesn’t it?

• Well, OK. So sup­pose that, af­ter I go through that trans­porter/​repli­ca­tor, you ask the en­tity that comes out whether it has the be­lief, real or illu­sory, that it is the same per­son in this mo­ment that it was at the mo­ment it walked into the ma­chine, and it says “yes”.

If per­sonal iden­tity is what cre­ates that be­lief, and that en­tity has that be­lief, it fol­lows that that en­tity shares my per­sonal iden­tity… doesn’t it?

Not quite. If You!Mars gave it thought before answering, his thinking probably went like this: “I have memories of going into the transporter, just a moment ago. I have a continuous sequence of memories, from then until now. Nowhere in those memories does my sense of self change. Right now I am experiencing the same sense of self I always remember experiencing, and laying down new memories. Ergo, by backwards induction, I am the same person that walked into the teleporter.” However, for that—or any—line of meta reasoning to hold, (1) your memories need to accurately correspond with the true and full history of reality, and (2) you need to trust that what occurs in the present also occurred in the past. In other words, it’s kinda like saying “my memory wasn’t altered, because I would have remembered that.” It’s not a circular argument per se, but it is a meta loop.

The map is not the ter­ri­tory. What hap­pened to You!Earth’s sub­jec­tive ex­pe­rience is an ob­jec­tive, if per­haps not em­piri­cally ob­serv­able fact. You!Mars’ be­lief about what hap­pened may or may not cor­re­spond with re­al­ity.

• What if me!Mars, af­ter giv­ing it thought, shakes his head and says “no, that’s not right. I say I’m the same per­son be­cause I still have a sense of sub­jec­tive ex­pe­rience, which is sep­a­rate from mem­o­ries or shared his­tory, which gives me the be­lief, real or illu­sory, that I am the same per­son from mo­ment to mo­ment, day to day, and which sep­a­rates me from my clones”?

Do you take his word for it?
Do you as­sume he’s mis­taken?
Do you as­sume he’s ly­ing?

• Assuming that he acknowledges that clones have a separate identity, or in other words that he admits there can be instances of himself that are not him, then by asserting the same identity as the person that walked into the teleporter he is making an extrapolation into the past. He is expressing a belief that, by whatever definition he is using, the person walking into the teleporter meets a standard of me-ness that the clones do not. Unless the definition under consideration explicitly references You!Mars’ mental state (e.g. “by definition” he has shared identity with people he remembers having shared identity with), the validity of that belief is external: it is either true or false. The map is not the territory.

Un­der an as­sump­tion of pat­tern or causal con­ti­nu­ity, for ex­am­ple, it would be ex­plic­itly true. For com­pu­ta­tional con­ti­nu­ity it would be false.

• If I un­der­stood you cor­rectly, then on your ac­count, his claim is sim­ply false, but he isn’t nec­es­sar­ily ly­ing.

Yes?

It seems to fol­low that he might ac­tu­ally have a sense of sub­jec­tive ex­pe­rience, which is sep­a­rate from mem­o­ries or shared his­tory, which gives him the be­lief, real or illu­sory (in this case illu­sory), that he is the same per­son from mo­ment to mo­ment, day to day, and the same per­son who walked into the tele­porter, and which sep­a­rates him from his clones.

Yes?

• If I un­der­stood you cor­rectly, then on your ac­count, his claim is sim­ply false, but he isn’t nec­es­sar­ily ly­ing.

Yes, in the sense that it is a belief about his own history which is either true or false, like any historical fact. Whether it is actually false depends on the nature of “personal identity”. If I understand the original post correctly, I think Eliezer would argue that his claim is true. I think Eliezer’s argument lacks sufficient justification, and there’s a good chance the claim is false.

It seems to fol­low that he might ac­tu­ally have a sense of sub­jec­tive ex­pe­rience, which is sep­a­rate from mem­o­ries or shared his­tory, which gives him the be­lief, real or illu­sory (in this case illu­sory), that he is the same per­son from mo­ment to mo­ment, day to day, and the same per­son who walked into the tele­porter, and which sep­a­rates him from his clones.

Yes. My ques­tion is: is that be­lief jus­tified?

If your memory were altered to make you think you had won the lottery, that wouldn’t make you any richer. Likewise, You!Mars’ memory was constructed by the transporter machine, following the transmitted design, in such a way as to make him remember stepping into the transporter on Earth as you did, and walking out of it on Mars in seamless continuity. But just because he doesn’t remember the deconstruction, information transmission, and reconstruction steps doesn’t mean they didn’t happen. Once he learns what actually happened during his transport, his decision about whether he remains the same person that entered the machine on Earth depends greatly on his model of consciousness and personal identity/continuity.

• It seems to fol­low that he might ac­tu­ally have a sense of sub­jec­tive ex­pe­rience, which is sep­a­rate from mem­o­ries or shared his­tory, which gives him the be­lief, real or illu­sory (in this case illu­sory), that he is the same per­son from mo­ment to mo­ment, day to day, and the same per­son who walked into the tele­porter, and which sep­a­rates him from his clones.
Yes. My ques­tion is: is that be­lief jus­tified?

OK, un­der­stood.

Here’s my con­fu­sion: a while back, you said:

That sense of sub­jec­tive ex­pe­rience sep­a­rate from mem­o­ries or shared his­tory is what I have been call­ing “per­sonal iden­tity.” It is what gives me the be­lief, real or illu­sory, that I am the same per­son from mo­ment to mo­ment, day to day, and what sep­a­rates me from my clones.

And yet, here’s Dave!Mars, who has a sense of sub­jec­tive ex­pe­rience sep­a­rate from mem­o­ries or shared his­tory which gives him the be­lief, real or illu­sory (in this case illu­sory), that he is the same per­son from mo­ment to mo­ment, day to day, and the same per­son who walked into the tele­porter, and which sep­a­rates him from his clones.

But on your ac­count, he might not have Dave’s per­sonal iden­tity.

So, where is this sense of sub­jec­tive ex­pe­rience com­ing from, on your ac­count? Is it causally con­nected to per­sonal iden­tity, or not?

Once he learns what ac­tu­ally hap­pened dur­ing his trans­port, his de­ci­sion about whether he re­mains the same per­son that en­tered the ma­chine on Earth de­pends greatly on his model of con­scious­ness and per­sonal iden­tity/​con­ti­nu­ity.

Yes, that’s cer­tainly true. By the same to­ken, if I con­vince you that I placed you in sta­sis last night for… um… long enough to dis­rupt your per­sonal iden­tity (a minute? an hour? a mil­lisec­ond? a nanosec­ond? how long a pe­riod of “com­pu­ta­tional dis­con­ti­nu­ity” does it take for per­sonal iden­tity to evap­o­rate on your ac­count, any­way?), you would pre­sum­ably con­clude that you aren’t the same per­son who went to bed last night. OTOH, if I placed you in sta­sis last night and didn’t tell you, you’d con­clude that you’re the same per­son, and live out the rest of your life none the wiser.

• That ex­per­i­ment shows that “per­sonal iden­tity”, what­ever that means, fol­lows a time-tree, not a time-line. That con­clu­sion also must hold if MWI is true.

So I get that there’s a tricky (?) la­bel­ing prob­lem here, where it’s some­what con­tro­ver­sial which copy of you should be la­beled as hav­ing your “per­sonal iden­tity”. The thing that isn’t clear to me is why the la­bel­ing prob­lem is im­por­tant. What ob­serv­able fea­ture of re­al­ity de­pends on the out­come of this la­bel­ing prob­lem? We all agree on how those copies of you will act and what be­liefs they’ll have. What else is there to know here?

• Would you step through the trans­porter? If you an­swered no, would it be moral to force you through the trans­porter? What if I didn’t know your wishes, but had to ex­trap­o­late? Un­der what con­di­tions would it be okay?

Also, take the more vile forms of Pas­cal’s mug­ging and acausal trades. If some­thing threat­ens tor­ture to a simu­la­tion of you, should you be con­cerned about ac­tu­ally ex­pe­rienc­ing the tor­ture, thereby sub­vert­ing your ra­tio­nal­ist im­pulse to shut up and mul­ti­ply util­ity?

• Would you step through the trans­porter? If you an­swered no, would it be moral to force you through the trans­porter? What if I didn’t know your wishes, but had to ex­trap­o­late? Un­der what con­di­tions would it be okay?

I don’t see how any of that de­pends on the ques­tion of which com­pu­ta­tions (copies of me) get la­beled with “per­sonal iden­tity” and which don’t.

Also, take the more vile forms of Pas­cal’s mug­ging and acausal trades. If some­thing threat­ens tor­ture to a simu­la­tion of you, should you be con­cerned about ac­tu­ally ex­pe­rienc­ing the tor­ture, thereby sub­vert­ing your ra­tio­nal­ist im­pulse to shut up and mul­ti­ply util­ity?

Depend­ing on speci­fics, yes. But I don’t see how this de­pends on the la­bel­ing ques­tion. This just boils down to “what do I ex­pect to ex­pe­rience in the fu­ture?” which I don’t see as be­ing re­lated to “per­sonal iden­tity”.

• This just boils down to “what do I ex­pect to ex­pe­rience in the fu­ture?” which I don’t see as be­ing re­lated to “per­sonal iden­tity”.

For­get the phrase “per­sonal iden­tity”. If I am a pow­er­ful AI from the fu­ture and I come back to tell you that I will run a simu­la­tion of you so we can go bowl­ing to­gether, do you or do you not ex­pect to ex­pe­rience bowl­ing with me in the fu­ture, and why?

• Yes, with prob­a­bil­ity P(simu­la­tion), or no, with prob­a­bil­ity P(not simu­la­tion), de­pend­ing.

• Sup­pose that my hus­band and I be­lieve that while we’re sleep­ing, some­one will paint a blue dot on ei­ther my fore­head, or my hus­band’s, de­ter­mined ran­domly. We ex­pect to see a blue dot when we wake up… and we also ex­pect not to see a blue dot when we wake up. This is a perfectly rea­son­able state for two peo­ple to be in, and not at all prob­le­matic.

Sup­pose I be­lieve that while I’m sleep­ing, a pow­er­ful AI will du­pli­cate me (if you like, in such a way that both du­pli­cates ex­pe­rience com­pu­ta­tional con­ti­nu­ity with the origi­nal) and paint a blue dot on one du­pli­cate’s fore­head. When I wake up, I ex­pect to see a blue dot when I wake up… and I also ex­pect not to see a blue dot when I wake up. This is a perfectly rea­son­able state for a du­pli­cated per­son to be in, and not at all prob­le­matic.

Similarly, I both ex­pect to ex­pe­rience bowl­ing with you, and ex­pect to not ex­pe­rience bowl­ing with you (sup­pos­ing that the origi­nal con­tinues to op­er­ate while the simu­la­tion goes bowl­ing).

• The situation isn’t analogous, however. Let’s posit that you’re still alive when the simulation is run. In fact, aside from technology, there’s no reason to put it in the future or involve an AI. I’m a brain-scanning researcher who shows up at your house tomorrow, with all the equipment to do a non-destructive mind upload and whole-brain simulation. I tell you that I am going to scan your brain, start the simulation, then don VR goggles and go virtual-bowling with “you”. Once the scanning is done, you and your husband are free to go to the beach or whatever, while I go bowling with TheVirtualDave.

What prob­a­bil­ity would you put on you end­ing up bowl­ing in­stead of at the beach?

• Well, let’s call P1 my prob­a­bil­ity of ac­tu­ally go­ing to the beach, even if you never show up. That is, (1-P1) is the prob­a­bil­ity that traf­fic keeps me from get­ting there, or my car breaks down, or what­ever. And let’s call P2 my prob­a­bil­ity of your VR/​simu­la­tion rig work­ing. That is, (1-P2) is the prob­a­bil­ity that the scan­ner fails, etc. etc.

In your sce­nario, I put a P1 prob­a­bil­ity of end­ing up at the beach, and a P2 prob­a­bil­ity of end­ing up bowl­ing. If both are high, then I’m con­fi­dent that I will do both.

There is no “in­stead of”. Go­ing to the beach does not pre­vent me from bowl­ing. Go­ing bowl­ing does not pre­vent me from go­ing to the beach. Some­one will go to the beach, and some­one will go bowl­ing, and both of those some­ones will be me.
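A sketch of that bookkeeping (the numbers here are invented): with duplication there are two successors, so the two probabilities are about two different future selves and need not sum to 1.

```python
p_beach   = 0.95  # P1: the original makes it to the beach
p_bowling = 0.90  # P2: the scanner and simulation rig work

# Expected number of future selves remembering each outing:
print(p_beach + p_bowling)  # ~1.85, with no paradox

# Normalizing to 1 would be right only if exactly one successor
# existed, which is precisely what duplication denies.
```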

• As I al­luded to in an­other re­ply, as­sum­ing perfectly re­li­able scan­ning, and as­sum­ing that you hate los­ing in bowl­ing to MarkAI, how do you de­cide whether to go prac­tice bowl­ing or to do some­thing else you like more?

• If it’s im­por­tant to me not to lose in bowl­ing, I prac­tice bowl­ing, since I ex­pect to go bowl­ing. (As­sum­ing un­in­ter­est­ing scan­ning tech.)
If it’s also im­por­tant to me to show off my rock­ing abs at the beach, I do sit-ups, since I ex­pect to go to the beach.
If I don’t have the time to do both, I make a trade­off, and I’m not sure ex­actly how I make that trade­off, but it doesn’t in­clude as­sum­ing that the go­ing to the beach some­how hap­pens more or hap­pens less or any­thing like that than the go­ing bowl­ing.

Admittedly, this presumes that the bowling-me will go on to live a normal lifetime. If I know the simulation will be turned off right after the bowling match, I might not care so much about winning the bowling match. (Then again, I might care a lot more.) By the same token, if I know the original will be shot tomorrow morning, I might not care so much about my abs. (Then again, I might care more. I’m really not confident about how the prospect of upcoming death affects my choices; still less how it does so when I expect to keep surviving as well.)

• Of course they do. Why shouldn’t they?

What is your prob­a­bil­ity that you will wake up to­mor­row morn­ing?
What is your prob­a­bil­ity that you will wake up Fri­day morn­ing?
I ex­pect to do both, so my prob­a­bil­ities of those two things add up to ~2.

In Mark’s sce­nario, I ex­pect to go bowl­ing and I ex­pect to go to the beach.
My prob­a­bil­ities of those two things similarly add up to ~2.

• I think we have the same model of the situ­a­tion, but I feel com­pel­led to nor­mal­ize my prob­a­bil­ity. A guess as to why:

I can rephrase Mark’s ques­tion as, “In 10 hours, will you re­mem­ber hav­ing gone to the beach or hav­ing bowled?” (As­sume the simu­la­tion will con­tinue run­ning!) There’ll be a you that went bowl­ing and a you that went to the beach, but no sin­gle you that did both of those things. Your suc­ces­sive wak­ings ex­am­ple doesn’t have this prop­erty.

I sup­pose I an­swer 50% to in­di­cate my un­cer­tainty about which fu­ture self we’re talk­ing about, since there are two pos­si­ble refer­ents. Maybe this is un­helpful.

• Yes, that seems to be what’s go­ing on.

That said, nor­mal­iz­ing my prob­a­bil­ity as though there were only go­ing to be one of me at the end of the pro­cess doesn’t seem at all com­pel­ling to me. I don’t have any un­cer­tainty about which fu­ture self we’re talk­ing about—we’re talk­ing about both of them.

Suppose that you and your husband are planning to take the day off tomorrow, and he is planning to go bowling, and you are planning to go to the beach, and I ask the two of you “what’s y’all’s probability that one of y’all will go bowling, and what’s y’all’s probability that one of y’all will go to the beach?” It seems the correct answers to those questions will add up to more than 1, even though no one person will experience bowling AND going to the beach. In 10 hours, one of you will remember having gone to the beach, and one will remember having bowled.

This is ut­terly un­prob­le­matic when we’re talk­ing about two peo­ple.

In the du­pli­ca­tion case, we’re still talk­ing about two peo­ple, it’s just that right now they are both me, so I get to an­swer for both of them. So, in 10 hours, I (aka “one of me”) will re­mem­ber hav­ing gone to the beach. I will also re­mem­ber hav­ing bowled. I will not re­mem­ber hav­ing gone to the beach and hav­ing bowled. And my prob­a­bil­ities add up to more than 1.

I rec­og­nize that it doesn’t seem that way to you, but it re­ally does seem like the ob­vi­ous way to think about it to me.

• I rec­og­nize that it doesn’t seem that way to you, but it re­ally does seem like the ob­vi­ous way to think about it to me.

I think your de­scrip­tion is co­her­ent and de­scribes the same model of re­al­ity I have. :)

• I can rephrase Mark’s question as, “In 10 hours, will you remember having gone to the beach or having bowled?”

Yes. Prob­a­bil­ities aside, this is what I was ask­ing.

I sup­pose I an­swer 50% to in­di­cate my un­cer­tainty about which fu­ture self we’re talk­ing about, since there are two pos­si­ble refer­ents.

I was ask­ing a dis­guised ques­tion. I re­ally wanted to know: “which of the two fu­ture selfs do you iden­tify with, and why?”

• I was ask­ing a dis­guised ques­tion. I re­ally wanted to know: “which of the two fu­ture selfs do you iden­tify with, and why?”

Oh, that’s easy. Both of them, equally. As­sum­ing ac­cu­rate enough simu­la­tions etc., of course.

ETA: Why? Well, they’ll both think that they’re me, and I can’t think of a way to dis­prove the claim of one with­out also dis­prov­ing the claim of the other.

• ETA: Why? Well, they’ll both think that they’re me, and I can’t think of a way to disprove the claim of one without also disproving the claim of the other.

Any of the mod­els of con­scious­ness-as-con­ti­nu­ity would offer a defini­tive pre­dic­tion.

• Any of the mod­els of con­scious­ness-as-con­ti­nu­ity would offer a defini­tive pre­dic­tion.

IMO, there literally is no fact of the matter here, so I will bite the bullet and say that any model that supposes there is one is wrong. :) I’ll reconsider if you can point to an objective feature of reality that changes depending on the answer to this. (“So-and-so will think it to be immoral” doesn’t count!)

• I won’t be­cause that’s not what I’m ar­gu­ing. My po­si­tion is that sub­jec­tive ex­pe­rience has moral con­se­quences, and there­fore mat­ters.

PS: The up/​down karma vote isn’t a record of what you agree with, but whether a post has been rea­son­ably ar­gued.

• PS: The up/​down karma vote isn’t a record of what you agree with, but whether a post has been rea­son­ably ar­gued.

It is nei­ther of those things. This isn’t de­bate club. We don’t have to give peo­ple credit for find­ing the most clever ar­gu­ments for a wrong po­si­tion.

I make no comment about what the subject of debate is in this context (I don’t know or care which party is saying crazy things about ‘consciousness’). I downvoted the parent specifically because it made a normative assertion about how people should use the karma mechanism which is neither something I support nor an accurate description of an accepted cultural norm. This is an example of voting being used legitimately in a way that has nothing to do with whether the post has been reasonably argued.

• I did use the term “rea­son­ably ar­gued” but I didn’t mean clever. Maybe “ra­tio­nally ar­gued”? By my own al­gorithm a clev­erly ar­gued but clearly wrong ar­gu­ment would not gar­ner an up vote.

I gave you an up­vote for ex­plain­ing your down vote.

• I did use the term “rea­son­ably ar­gued” but I didn’t mean clever. Maybe “ra­tio­nally ar­gued”? By my own al­gorithm a clev­erly ar­gued but clearly wrong ar­gu­ment would not gar­ner an up vote.

You are right; ‘clever’ contains connotations that you wouldn’t intend. I myself have used ‘clever’ as a term of disdain and I don’t want to apply that to what you are talking about. Let’s stick with either of the terms you used and agree that we are talking about arguments that are sound, cogent, and reasonable, rather than artful rhetoric that exploits known biases in human social behaviour to score persuasion points. I maintain that even then down-votes are sometimes appropriate. Allow me to illustrate.

There are two outwardly indistinguishable boxes with buttons that display heads or tails when pressed. You know that one of the boxes returns heads 70% of the time and the other returns heads 40% of the time. A third party, Joe, has experimented with the first box three times and tells you that each time it returned heads. This represents an argument that the first box is the “70%” box. Now, assume that I have observed the internals of the boxes and know that the first box is, in fact, the 40% box.
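For concreteness, a worked version of the update Joe’s audience would make (assuming a 50/50 prior over which box Joe happened to test): three heads in a row is genuine evidence for the 70% box, even though in this story it is in fact the 40% box.

```python
prior_70 = 0.5
likelihood_70 = 0.7 ** 3  # P(3 heads | 70% box) = 0.343
likelihood_40 = 0.4 ** 3  # P(3 heads | 40% box) = 0.064

posterior_70 = (likelihood_70 * prior_70) / (
    likelihood_70 * prior_70 + likelihood_40 * (1 - prior_70))
print(posterior_70)  # ~0.84: Joe's argument is reasonable, yet wrong
```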

Whether I downvote Joe’s comment depends on many things. Obviously, tone matters a lot, as does my impression of whether Joe’s bias stems from disingenuousness or more innocent ignorance. But even in the case where Joe is arguing in good faith, there are some cases where a policy attempting to improve the community will advocate downvoting the contribution. For example, if there is a significant selection bias in what kind of evidence people like Joe have exposed themselves to, then popular perception after such people share their opinions will tend to be even more biased than the individuals’ alone. In that case, downvoting Joe’s comment improves the discussion. The ideal outcome would be for Joe to learn to stfu until he learns more.

More sim­ply I ob­serve that even the most ‘ra­tio­nal’ of ar­gu­ments can be harm­ful if the se­lec­tion pro­cess for the cre­ation and rep­e­ti­tion of those ar­gu­ments is at all bi­ased.

• For many peo­ple, the up/​down karma vote is a record of what we want more/​less of.

• I won’t be­cause that’s not what I’m ar­gu­ing. My po­si­tion is that sub­jec­tive ex­pe­rience has moral con­se­quences, and there­fore mat­ters.

OK, that’s fine, but I’m not con­vinced—I’m hav­ing trou­ble think­ing of some­thing that I con­sider to be a moral is­sue that doesn’t have a cor­re­spond­ing con­se­quence in the ter­ri­tory.

PS: That down­vote wasn’t me. I’m aware of how votes work around here. :)

• Example: is it moral to power-cycle (hibernate, turn off, power on, restore) a computer running a self-aware AI? Will future machine intelligences view any less-than-necessary AGI experiments I run the same way we do Josef Mengele's work in Auschwitz? Is it a possible failure mode that an unfriendly/not-provably-friendly AI that experiences routine power cycling might uncover this line of reasoning and decide it doesn't want to "die" every night when the lights go off? What would it do then?

• OK, in a hy­po­thet­i­cal world where some­how paus­ing a con­scious com­pu­ta­tion—main­tain­ing all data such that it could be restarted losslessly—is mur­der, those are con­cerns. Agreed. I’m not ar­gu­ing against that.

My po­si­tion is that paus­ing a com­pu­ta­tion as above hap­pens to not be mur­der/​death, and that those who be­lieve it is mur­der/​death are mis­taken. The ex­am­ple I’m look­ing for is some­thing ob­jec­tive that would demon­strate this sort of paus­ing is mur­der/​death. (In my view, the bad thing about death is its per­ma­nence, that’s most of why we care about mur­der and what makes it a moral is­sue.)

• As Eliezer mentioned in his reply (in different words), if power cycling is death, what's the shortest suspension time that isn't? Currently most computers run synchronously off a common clock. The computation is completely suspended between clock cycles. Does this mean that an AI running on such a computer is murdered billions of times every second? If so, then a morality leading to this absurd conclusion is not a useful one.

Edit: it’s ac­tu­ally worse than that: digi­tal com­pu­ta­tion hap­pens mostly within a short time of the clock level switch. The rest of the time be­tween tran­si­tions is just to en­sure that the elec­tri­cal sig­nals re­lax to within their tol­er­ance lev­els. Which means that the AI in ques­tion is likely dead 90% of the time.

• What Eliezer and you describe is more analogous to task switching on a timesharing system, and yes, my understanding of computational continuity theory is that such a machine would not be sent to oblivion 120 times a second. Rather, such a computer would be strangely schizophrenic, but also completely self-consistent at any moment in time.

But com­pu­ta­tional con­ti­nu­ity does have a differ­ent an­swer in the case of in­ter­me­di­ate non-com­pu­ta­tional states. For ex­am­ple, sav­ing the state of a whole brain em­u­la­tion to mag­netic disk, shut­ting off the ma­chine, and restart­ing it some­time later. In the mean time, shut­ting off the ma­chine re­sulted in de­cou­pling/​de­co­her­ence of state be­tween the com­pu­ta­tional el­e­ments of the ma­chine, and gen­eral re­ver­sion back to a state of ther­mal noise. This does equal death-of-iden­tity, and is similar to the trans­porter thought ex­per­i­ment. The rele­vance may be more ob­vi­ous when you think about tak­ing the drive out and load­ing it in an­other ma­chine, copy­ing the con­tents of the disk, or run­ning mul­ti­ple simu­la­tions from a sin­gle check­point (none of these change the facts, how­ever).

• In the mean time, shut­ting off the ma­chine re­sulted in de­cou­pling/​de­co­her­ence of state be­tween the com­pu­ta­tional el­e­ments of the ma­chine, and gen­eral re­ver­sion back to a state of ther­mal noise.

It is probably best for you to stay away from the physics/QM point of view on this, since you will lose: the states "between the computational elements", whatever you may mean by that, decohere and relax to "thermal noise" much more quickly than the time between clock transitions, so there is no difference between a nanosecond and an hour.

Maybe what you mean is more logic-re­lated? For ex­am­ple, when a self-aware al­gorithm (in­clud­ing a hu­man) ex­pects one sec­ond to pass and in­stead mea­sures a full hour (be­cause it was sus­pended), it in­ter­prets that dis­crep­ancy of in­puts as death? If so, shouldn’t any un­ex­pected dis­crep­ancy, like sleep­ing past your alarm clock, or day-dream­ing in class, be treated the same way?

This does equal death-of-iden­tity, and is similar to the trans­porter thought ex­per­i­ment.

I agree that fork­ing a con­scious­ness is not a morally triv­ial is­sue, but that’s differ­ent from tem­po­rary sus­pen­sion and restart­ing, which hap­pens all the time to peo­ple and ma­chines. I don’t think that con­flat­ing the two is helpful.

• It is probably best for you to stay away from the physics/QM point of view on this, since you will lose: the states "between the computational elements", whatever you may mean by that, decohere and relax to "thermal noise" much more quickly than the time between clock transitions, so there is no difference between a nanosecond and an hour.

Maybe what you mean is more logic-re­lated?...

No, I meant the phys­i­cal ex­pla­na­tion (I am a physi­cist, btw). It is pos­si­ble for a sys­tem to ex­hibit fea­tures at cer­tain fre­quen­cies, whilst only show­ing noise at oth­ers. Think stand­ing waves, for ex­am­ple.

I agree that fork­ing a con­scious­ness is not a morally triv­ial is­sue, but that’s differ­ent from tem­po­rary sus­pen­sion and restart­ing, which hap­pens all the time to peo­ple and ma­chines. I don’t think that con­flat­ing the two is helpful.

When does it ever happen to people? When does your brain, or even just regions of it, ever stop functioning entirely? You do not remember deep sleep because you are not forming memories, not because your brain has stopped functioning. What else could you be talking about?

• Hmm, I get a feel­ing that none of these are your true ob­jec­tions and that, for some rea­son, you want to equate sus­pen­sion to death. I should have stayed dis­en­gaged from this con­ver­sa­tion. I’ll try to do so now. Hope you get your doubts re­solved to your satis­fac­tion even­tu­ally.

• I don't want to, I just think that the alternatives lead to absurd outcomes that can't possibly be correct (see my analysis of the teleporter scenario).

• I really have a hard time imagining a universe where there exists a thing that is preserved when 10^-9 seconds pass between computational steps, but not when 10^3 pass between steps (while I move the hard drive to another box).

• Prediction: TheOtherDave will say 50%; Beach!Dave and Bowling!Dave would both consider both to be the "original". Assuming sufficiently accurate scanning & simulating.

• Here’s what TheOtherDave ac­tu­ally said.

• Yes, looks like that pre­dic­tion is falsified. At least the first sen­tence. :)

• I’ll give a 50% chance that I’ll ex­pe­rience that. (One copy of me con­tinues in the “real” world, an­other copy of me ap­pears in a simu­la­tion and goes bowl­ing.)

(If you ask this question as "the AI is going to run N copies of the bowling simulation", then I'm not sure how to answer—I'm not sure how to weight N copies of the exact same experience. My intuition is that I should still give a 50% chance, unless the simulations are going to differ in some respect, in which case I'd give an N/(N+1) chance.)
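
(A minimal sketch of those two weighting rules as I read them; the labels are mine, nothing established:)

```python
# Equal weight per distinct experience-stream: N identical simulations
# collapse into one stream, N distinct simulations count separately.
def p_in_simulation(n_sims, identical):
    streams = 1 if identical else n_sims  # simulated streams
    return streams / (streams + 1)        # +1 for the "real world" stream

print(p_in_simulation(100, identical=True))   # 0.5
print(p_in_simulation(100, identical=False))  # ~0.99, i.e. N/(N+1)
```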

• I need to think about your an­swer, as right now it doesn’t make any sense to me. I sus­pect that what­ever in­tu­ition un­der­lies it is the source of our dis­agree­ment/​con­fu­sion.

@linkhyrule5 had an an­swer bet­ter than the one I had in mind. The prob­a­bil­ity of us go­ing bowl­ing to­gether is ap­prox­i­mately equal to the prob­a­bil­ity that you are already in said simu­la­tion, if com­pu­ta­tional con­ti­nu­ity is what mat­ters.

If there were a 6th Day-like service I could sign up for, where if anything were to happen to me a clone/simulation with my memories would be created, I'd sign up for it in a heartbeat. Because if something were to happen to me I wouldn't want to deprive my wife of her husband, or my daughters of their father. But that is purely altruistic: I would have P(~0) expectation that I would actually experience that resurrection. Rather, some doppelganger twin that in every outward way behaves like me will take up my life where I left off. And that's fine, but let's be clear about the difference.

If you are not the simu­la­tion the AI was refer­ring to, then you and it will not go bowl­ing to­gether, pe­riod. Be­cause when said bowl­ing oc­curs, you’ll be dead. Or maybe you’ll be al­ive and well and off do­ing other things while the simu­la­tion is go­ing on. But un­der no cir­cum­stances should you ex­pect to wake up as the simu­la­tion, as we are as­sum­ing them to be causally sep­a­rate.

At least from my way of think­ing. I’m not sure I un­der­stand yet where you are com­ing from well enough to pre­dict what you’d ex­pect to ex­pe­rience.

• @linkhyrule5 had an an­swer bet­ter than the one I had in mind. The prob­a­bil­ity of us go­ing bowl­ing to­gether is ap­prox­i­mately equal to the prob­a­bil­ity that you are already in said simu­la­tion, if com­pu­ta­tional con­ti­nu­ity is what mat­ters.

You could un­der­stand my 50% an­swer to be ex­press­ing my un­cer­tainty as to whether I’m in the simu­la­tion or not. It’s the same thing.

I don’t un­der­stand what “com­pu­ta­tional con­ti­nu­ity” means. Can you ex­plain it us­ing a pro­gram that com­putes the digits of pi as an ex­am­ple?
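
(Not the parent's example, but here is a minimal sketch of the kind of program the question asks about: an unbounded spigot for the digits of pi, the Lambert-series variant from Gibbons' 2006 paper, whose entire state is four integers, so "suspending" it is just saving a tuple:)

```python
def pi_digits(state=(1, 180, 60, 2)):
    # Unbounded spigot for the digits of pi. The whole computation is
    # captured by four integers, so it can be paused, saved to disk,
    # and resumed any amount of time later.
    q, r, t, j = state
    while True:
        y = (q * (27 * j - 12) + 5 * r) // (5 * t)
        u = 3 * (3 * j + 1) * (3 * j + 2)
        q, r, t, j = 10 * q * j * (2 * j - 1), 10 * u * (q * (5 * j - 2) + r - y * t), t * u, j + 1
        yield y, (q, r, t, j)

gen = pi_digits()
digits, checkpoints = zip(*(next(gen) for _ in range(5)))
resumed = pi_digits(checkpoints[-1])       # "power off", then restore
more = [d for d, _ in (next(resumed) for _ in range(5))]
print(digits, more)  # (3, 1, 4, 1, 5) [9, 2, 6, 5, 3] -- no digit lost
```

On a computational-continuity view, the question is whether anything morally relevant is destroyed in the gap between the two runs, given that the resumed stream is bit-identical to an uninterrupted one.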

Rather, some dop­pel­ganger twin that in ev­ery out­ward way be­haves like me will take up my life where I left off. And that’s fine, but let’s be clear about the differ­ence.

I think you’re mak­ing a dis­tinc­tion that ex­ists only in the map, not in the ter­ri­tory. Can you point to some­thing in the ter­ri­tory that this mat­ters for?

• I come back to tell you that I will run a simu­la­tion of you so we can go bowl­ing together

Pre­sum­ably you cre­ate a sim-me which in­cludes the ex­pe­rience of hav­ing this con­ver­sa­tion with you (the AI).

do you or do you not ex­pect to ex­pe­rience bowl­ing with me in the fu­ture, and why?

Let me interpret the term "expect" concretely as "I'd better go practice bowling now, so that sim-me can do well against you later" (assuming I hate losing). If I don't particularly enjoy bowling and would rather do something else, how much effort is warranted vs. doing something I like?

The an­swer is not un­am­bigu­ous and de­pends on how much I (meat-me) care about fu­ture sim-me hav­ing fun and not em­bar­rass­ing sim-self. If sim-me con­tinues on af­ter meat-me passes away, I care very much about sim-me’s well be­ing. On the other hand, if the sim-me pro­gram is halted af­ter the bowl­ing game, then I (meat-me) don’t care much about that sim-loser. After all, meat-me (who will not go bowl­ing) will con­tinue to ex­ist, at least for a while. You might feel differ­ently about sim-you, of course. There is a whole range of pos­si­ble sce­nar­ios here. Feel free to spec­ify one in more de­tail.

TL;DR: If the simu­la­tion will be the only copy of “me” in ex­is­tence, I act as if I ex­pect to ex­pe­rience bowl­ing.

• I’d have to learn more about how gen­eral anes­the­sia works to de­cide if per­sonal iden­tity would be pre­served across on the op­er­at­ing table

Hmm, what about across dream­less sleep? Or faint­ing? Or fal­ling and hit­ting your head and los­ing con­scious­ness for an in­stant? Would these count as kil­ling one per­son and cre­at­ing an­other? And so be morally net-nega­tive?

• If com­pu­ta­tional con­ti­nu­ity is what mat­ters, then no. Just be­cause you have no mem­ory doesn’t mean you didn’t ex­pe­rience it. There is in fact a con­tin­u­ous ex­pe­rience through­out all of the ex­am­ples you gave, just no new mem­o­ries be­ing formed. But from the last point you re­mem­ber (go­ing to sleep, faint­ing, hit­ting your head) to when you wake up, you did ex­ist and were run­ning a com­pu­ta­tional pro­cess. From our un­der­stand­ing of neu­rol­ogy you can be cer­tain that there was no in­ter­rup­tion of sub­jec­tive ex­pe­rience of iden­tity, even if you can’t re­mem­ber what ac­tu­ally hap­pened.

Whether this is also true of gen­eral anes­the­sia de­pends very much on the bio­chem­istry go­ing on. I ad­mit ig­no­rance here.

• OK, I guess I should give up, too. I am ut­terly un­able to re­late to what­ever it is you mean by “be­cause you have no mem­ory doesn’t mean you didn’t ex­pe­rience it” or “sub­jec­tive ex­pe­rience of iden­tity, even if you can’t re­mem­ber what ac­tu­ally hap­pened”.

• Did I somehow split into 100 different persons? Or were there in fact 101 separate subjective identities, 1 of which terminated in 2013 and 100 new ones created for the simulations? The latter is a more straightforward explanation, IMHO.

I would say that yes, at T1 there’s one of me, and at T2 there’s 100 of me.
I don’t see what makes “there’s 101 of me, one of which ter­mi­nated at T1” more straight­for­ward than that.

• I don’t see what makes “there’s 101 of me, one of which ter­mi­nated at T1” more straight­for­ward than that.

It’s wrapped up in the ques­tion over what hap­pened to that origi­nal copy that (maybe?) ter­mi­nated at T1. Did that origi­nal ver­sion of you ter­mi­nate com­pletely and for­ever? Then I wouldn’t count it among the 100 copies that were cre­ated later.

• Sure, ob­vi­ously if it ter­mi­nated then it isn’t around af­ter­wards.
Equally ob­vi­ously, if it’s around af­ter­wards, it didn’t ter­mi­nate.

You said your metric for determining which description is accurate was (among other things) simplicity, and you claimed that the "101-1" answer is more straightforward (simpler?) than the "100" answer.
You can't now turn around and say that the reason it's simpler is because the "101-1" answer is accurate.

Either it’s ac­cu­rate be­cause it’s sim­pler, or it’s sim­pler be­cause it’s ac­cu­rate, but to as­sert both at once is ille­gi­t­i­mate.

• I’ll ad­dress this in my se­quence, which hope­fully I will have time to write. The short an­swer is that what mat­ters isn’t which ex­pla­na­tion of this situ­a­tion is sim­pler, re­quires fewer words, a smaller num­ber, or what­ever. What mat­ters is: which gen­eral rule is sim­pler?

Pattern or causal continuity leads to all sorts of weird edge cases, some of which I've tried to explain in my examples here, and in other cases fails (a mysterious answer) to provide a definitive prediction of subjective experience. There may be other solutions, but computational continuity at the very least provides a simpler model, even if it results in the more "complex" 101-1 answer.

It’s sorta like wave col­lapse vs many-wor­lds. Wave col­lapse is sim­pler (sin­gle world), right? No. Many wor­lds is the sim­pler the­ory be­cause it re­quires fewer rules, even though it re­sults in a mind-bog­glingly more com­plex and varied mul­ti­verse. In this case I think com­pu­ta­tional con­ti­nu­ity in the way I for­mu­lated it re­duces con­scious­ness down to sim­ple gen­eral ex­pla­na­tion that dis­solves the ques­tion with no resi­d­ual prob­lems.

Kinda like how free will is what a decision algorithm feels like from the inside, consciousness / subjective experience is what any computational process feels like from the inside. And therefore, when the computational process terminates, so too does the subjective experience.

• Do I ex­pe­rience “HONK Screeeech, bam” then wake up in a com­puter, or is it “HONK Screeeech, bam” and oblivion?

Non-run­ning al­gorithms have no ex­pe­riences, so the lat­ter is not a pos­si­ble out­come. I think this is per­haps an un­spo­ken ax­iom here.

• Non-run­ning al­gorithms have no ex­pe­riences, so the lat­ter is not a pos­si­ble out­come. I think this is per­haps an un­spo­ken ax­iom here.

No dis­agree­ment here—that’s what I meant by oblivion.

• OK, cool, but now I’m con­fused. If we’re mean­ing the same thing, I don’t un­der­stand how it can be a ques­tion—“not run­ning” isn’t a thing an al­gorithm can ex­pe­rience; it’s a log­i­cal im­pos­si­bil­ity.

• Clearly, your sub­jec­tive ex­pe­rience to­day is HONK-screech-bam-oblivion, since all the sub­jec­tive ex­pe­riences that come af­ter that don’t hap­pen to­day in this ex­am­ple… they hap­pen 50 years later.

It is not in the least bit clear to me that this means those sub­jec­tive ex­pe­riences aren’t your sub­jec­tive ex­pe­riences. You aren’t some epiphe­nom­e­nal en­tity that dis­si­pates in the course of those 50 years and there­fore isn’t around to ex­pe­rience those ex­pe­riences when they hap­pen… what­ever is hav­ing those sub­jec­tive ex­pe­riences, when­ever it is hav­ing them, that’s you.

maybe we should taboo the phrase “per­sonal/​sub­jec­tive iden­tity”.

Sounds like a fine plan, albeit a difficult one. Want to take a shot at it?

EDIT: Ah, you did so el­sethread. Cool. Replied there.

• Eliezer...the main is­sue that keeps me from cry­on­ics is not whether the “real me” wakes up on the other side. Most smart peo­ple would agree that this is a non-is­sue, a silly ques­tion aris­ing from the illu­sion of mind-body du­al­ity.

The first ques­tion is about how ac­cu­rate the re­con­struc­tion will be. When you wipe a hard drive with a mag­net, you can re­cover some of the con­tent, but usu­ally not all of it. Re­cov­er­ing “some” of a hu­man, but not all of it, could eas­ily cre­ate a men­tally hand­i­capped, bro­ken con­scious­ness.

Setting that aside, there is a second problem. If and when immortality and AI are achieved, what value would my revived consciousness contribute to such a society?

You've thus far understood that death isn't a bad thing when a copy of the information is preserved and later revived. You've explained that you are willing to treat consciousness much like you would a computer file: you would be willing to destroy one of two redundant duplicates of yourself.

Tell me, why ex­actly is it okay to de­stroy a re­dun­dant du­pli­cate of your­self? You can’t say that it’s okay to de­stroy it sim­ply be­cause it is re­dun­dant, be­cause that also de­stroys the point of cry­on­ics. There will be countless hu­mans and AIs that will come into ex­is­tence, and each of those minds will re­quire re­sources to main­tain. Why is it so im­por­tant that your, or my, con­scious­ness be one among this swarm? Isn’t that...well...re­dun­dant?

For the same rea­sons that you would be will­ing to de­stroy one of two iden­ti­cal copies, you should be will­ing to de­stroy all the copies given that the soft­ware—the con­scious­ness—that runs within is not ex­cep­tional among all the pos­si­ble con­scious­nesses that those re­sources could be de­voted to.

• I don't think you've proven what you claim to have proven in this post, but it might work as propaganda to increase cryonics enrollment, which should be good for both of us.
Specifically, I don't think it's clear that (1) current cryonics technology prevents information-theoretic death; (2) that if I'm "revived" from cryonics such that it fools discernment technology of that era, I'm actually having a subjective conscious experience of being alive and conscious (perhaps discernment technology 30 years later will tragically demonstrate why not, and what could've been done differently to preserve me as a subjective conscious entity); or (3) that future societies with the technology to revive us will choose to.

Separate from pro­pa­ganda, I think 1-3 are im­por­tant ar­eas to fo­cus on in terms of re­search and in­no­va­tion. We don’t want to be fooled by our own pro­pa­ganda and thus fail to ra­tio­nally max­i­mize our per­sis­tence odds. We don’t want to be pris­on­ers of our own myths.

• I think the en­tire post makes sense, but what if...

Brian flips a coin ten times, and in quantum branches where he gets all tails he signs up for cryonics. Each surviving Brian makes a few thousand copies of himself.

Carol takes $1000 and plays 50/50 bets on the stock market till she crashes or makes a billion. Winning Carols donate and invest wisely to make a positive singularity more likely and a negative singularity less likely, and sign up for cryonics. Surviving Carols run off around a million copies each, but adjusted upwards or downwards based on how nice a place to live they ended up in.

As­sum­ing Brian and Carol aren’t in love (most of her won’t get to meet any of him at the Sin­gu­lar­ity Re­u­nion), who’s bet­ter off here?

• Wiseman: Yes, that's a possibility. But even if I only gave MWI a, say, 30% probability of being true, the thought of it being even that likely would continue to bother me. In order to avoid feeling the anguish through that route, I'd need to make myself believe the chance of MWI being true was far lower than what's rational. In addition to that being against my principles, I'm not sure it would be ethical, either: if MWI really is true, or even if there's a chance of it being true, then that should influence my behavior somehow, e.g. by avoiding having offspring so there'd at least be fewer sentients around to experience the horror of MWI (not that I'd probably be having kids pre-Singularity anyway, but that was the first example that came to mind; avoiding situations where I'm in a position to harm somebody else would probably also be good).

Thanks for try­ing to help, though.

• Dave,

Well, if you re­solve not to sign up for cry­on­ics and if the think­ing on Quan­tum Im­mor­tal­ity is cor­rect, you might ex­pect a se­ries of weird (and prob­a­bly painful) events to pre­vent you in­definitely from dy­ing; while if you’re signed up for it, the vast ma­jor­ity of the wor­lds con­tain­ing a later “you” will be the ones re­vived af­ter a peace­ful death. So there’s a big differ­ence in the sort of ex­pe­rience you might an­ti­ci­pate, de­pend­ing on whether you’ve signed up.

• I wrote a com­ment this morn­ing on the monthly open thread which ad­dresses some of the ques­tions that have been raised above, but I will copy it here since that is a stale thread.

A cou­ple of peo­ple asked about the re­la­tion­ship be­tween quan­tum ran­dom­ness and the macro­scopic world.

Eliezer wrote a long essay here, http://www.sl4.org/wiki/KnowabilityOfFAI, about (among other things) the difference between unpredictability of intelligent decisions, and randomness. Decisions we or someone else make may be unpredictable beforehand, but that doesn't mean they are random. It may well be that even for a close and difficult decision where it felt like we could have gone either way, in the vast majority of the MWI branches we would have decided the same way.

At the same time, it is clear that there would be at least some branches where we would have de­cided differ­ently. The brain ul­ti­mately de­pends on chem­i­cal pro­cesses like diffu­sion that have a ran­dom com­po­nent, and this ran­dom­ness will be in­fluenced by quan­tum effects as molecules in­ter­act. So there would be some quan­tum fluc­tu­a­tions that could cause neu­rons to be­have differ­ently, and ul­ti­mately lead to differ­ent brain ac­tivi­ties. This means that at the philo­soph­i­cal level, we do face the fact that ev­ery de­ci­sion we make goes “both ways” in differ­ent branches. Our de­ci­sion mak­ing is then a mat­ter of what frac­tion of the branches go which way, and our men­tal efforts can be thought of as max­i­miz­ing the frac­tion of good out­comes.

It would be interesting to try to figure out the degree to which quantum effects influence other macroscopic sources of randomness. Clearly, due to the butterfly effect, storms will be highly influenced by quantum randomness. If we reset the world to 5 years ago and put every molecule on the same track, then in almost all branches New Orleans would not have been destroyed. How about a coin flip? If it comes up heads, what fraction of the branches would have seen tails? My guess is that the major variable will be the strength with which the coin is thrown by the thumb and arm. At the molecular level this will have two influences: the actin and myosin fibers in the muscles, activated by neurotransmitter packets; and the friction between the thumbnail and the forefinger, which determines the exact point at which the coin is released. The muscle activity will have considerable quantum variation in individual fiber steps, but there would be a huge number of fibers involved, so I'd guess that will average out and be pretty stable. The friction, on the other hand, would probably be nonlinear and chaotic, an avalanche effect where a small change in stickiness leads to a big change in overall motion. I can't come up with a firm answer on this basis, but my guess would be that there is a substantial but not overwhelming quantum effect, so that we would see close to a 50-50 split among the branches. I wonder if anyone has attempted a more quantitative analysis.

One thing I will add, I imag­ine that ping-pong ball based lot­tery ma­chines would be sub­stan­tially af­fected by quan­tum ran­dom­ness. The many bounces will lead to chaotic be­hav­ior, sen­si­tive de­pen­dence on ini­tial con­di­tions, and even very small ran­dom­ness due to quan­tum effects dur­ing col­li­sions will al­most cer­tainly IMO be am­plified to pro­duce macro­scop­i­cally differ­ent cir­cum­stances af­ter sev­eral sec­onds.

• I have been seriously considering cryonics; if the MWI is correct, I figure that even if there is a vanishingly small chance of it working, "I" will still wake up in one of the worlds where it does work. Then again, even if I do not sign up, there are plenty of worlds out there where I do. So signing up is less an attempt to live forever than an attempt to line up my current existence with the memory of the person who is revived, if that makes any sense. To put it another way, if there is a world where I procrastinate signing up until right before I die, the person who is revived will have 99.9% of the same memories as someone who did not sign up at all, so if I don't end up signing up I do not lose much.

FWIW, I sent an email to Al­cor a while ago that was never re­sponded to, which makes me won­der if they have their act to­gether enough to pre­serve me for the long haul.

On a re­lated note, is there much agree­ment on what is “pos­si­ble” as far as MWI goes? For ex­am­ple, in a clas­si­cal uni­verse if I know the po­si­tion/​mo­men­tum of ev­ery par­ti­cle, I can pre­dict the out­come of a coin flip with 1.0 prob­a­bil­ity. If we throw quan­tum events in the mix, how much does this change? I figure the an­swer should be in the range of (1.0 - tiny num­ber) and (0.5 + tiny num­ber).

• Kaj didn’t sug­gest that there is any other vi­able op­tion. He sug­gested kil­ling off the hu­man race.

This strat­egy would fail too, how­ever, since it would not suc­ceed on ev­ery branch.

• Err, how can two copies of a person be exactly the same when the gravitational forces on each will be different? Isn't the very idea that you can transfer actual atoms in the universe to a new location, while somehow ensuring that this transfer doesn't deterministically guarantee being able to determine which person "caused" the copy to exist (i.e. the original), physical nonsense?

While molecules may not have in­visi­ble “unique ID” num­bers at­tached to them, they are unique in the sense of quan­tum evolu­tion, pre­serv­ing the “im­por­tance” of one atom dis­t­in­guished from an­other.

• Ran­dom ques­tion for Eliezer:

If, today, you had the cryopreserved body of Genghis Khan, and had the capacity to revive it, would you? (Remember, this is the guy who, according to legend, said that the best thing in life was to crush your enemies, see them driven before you, and hear the lamentation of the women.)

(As Un­known sug­gests, I’d rather have a “bet­ter” per­son ex­ist in the fu­ture than have “me” ex­ist in the fu­ture. What I’d do in a post-Sin­gu­lar­ity fu­ture is sign up to be a wire­head. The fu­ture doesn’t need more wire­heads.)

• If, today, you had the cryopreserved body of Genghis Khan, and had the capacity to revive it, would you? (Remember, this is the guy who, according to legend, said that the best thing in life was to crush your enemies, see them driven before you, and hear the lamentation of the women.)

Ab­solutely. You could raise mil­lions just from the his­to­ri­ans of Europe & Asia who would kill to talk to Genghis Khan; and then there’s all the ge­netic and med­i­cal re­search one could do on an au­then­tic liv­ing/​breath­ing per­son of a mil­len­nium ago. (More tests of Sapir-Whorf, any­one?)

• This ar­gu­ment makes no sense to me:

If you’ve been cry­ocras­ti­nat­ing, putting off sign­ing up for cry­on­ics “un­til later”, don’t think that you’ve “got­ten away with it so far”. Many wor­lds, re­mem­ber? There are branched ver­sions of you that are dy­ing of can­cer, and not signed up for cry­on­ics, and it’s too late for them to get life in­surance.

This is only happening in the scenarios where I didn't sign up for cryonics. In the ones where I did sign up, I'm safe and cozy in my very cold bed. These universes don't exist contingent on my behavior in this one; what possible impact could my choice here to sign up for cryonics have on my alternate-universe Doppelgängers?

• Same here. This does not strike me as a good argument at all… We can reverse it to argue against signing up for cryonics:

"Even if I sign up for cryonics, there will still be some other worlds in which I didn't, and in which "I" am dying of cancer."

Or

"Even if I don't sign up, there are still other worlds in which I did."

Maybe there is some­thing about me ac­tu­ally mak­ing the choice to sign up in this world al­ter­ing/​con­strain­ing the over­all prob­a­bil­ity dis­tri­bu­tion and mak­ing some out­comes less and less prob­a­ble in the over­all dis­tri­bu­tion...

I am new to this site and I still have to search through it more thoroughly, but I really don't think I can let that argument fly by without reaction. I apologize in advance if I make some really dumb mistake here.

Edit:

Okay, I thought this over a little bit and I can see a point: the earlier I sign up, the more there will be of future "me"s getting cryonised. I do not see how much it matters in the grand scheme of things (I am just choosing a branch, I am not destroying the branch in which I choose not to sign up), but I guess there can be something along the lines of "I can not do much about the past but my decisions can influence the 'future'" or "my responsibility is about my future 'me's; I should not worry about the worlds I can not 'reach'".

The ar­gu­ment still sounds rather weak to me (and the many-world view a bit nihilis­tic, not that it makes it wrong but I find it rather weird that you man­age to get some sort of pos­i­tive drive from it.)

• I am just choosing a branch, I am not destroying the branch in which I choose not to sign up.

Ac­tu­ally… you are. The phys­i­cal im­ple­men­ta­tion of mak­ing the choice in­volves shift­ing weight from not-signed-up branches to signed-up branches (note, the ‘not-signed-up-yet’ branch is defined in a way that lets it leak am­pli­tude). That im­ple­men­ta­tion is con­tained within you, and it in­volves pro­cesses we de­scribe as ap­ply­ing op­er­a­tors on that branch which re­duce its am­pli­tude. This to­tally counts as de­stroy­ing the branch.

If you sign up for cry­on­ics at time T1, then the not-signed-up branch has lower am­pli­tude af­ter T1 than it had be­fore T1. But this is very differ­ent from say­ing that the not-signed up branch has lower am­pli­tude af­ter T1 than it would have had af­ter T1 if you had not signed up for cry­on­ics at T1. In fact, the lat­ter state­ment is nec­es­sar­ily false if physics re­ally is time­less.

I think this lat­ter point is what the other posters are driv­ing at. It is true that if there is a branch at T1 where some yous go down a path where they sign up and oth­ers don’t, then the am­pli­tude for not-signed-up is lower af­ter T1. But this hap­pens even if this par­tic­u­lar you doesn’t go down the signed-up branch. What mat­ters is that the branch point oc­curs, not which one any spe­cific you takes.

In other words, am­pli­tude is always be­ing seeped from the not-signed-up branch, even if some par­tic­u­lar you keeps not leav­ing that branch.
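
(A toy model of that last point, entirely my construction and not anyone's physics: track the total weight of the not-signed-up branches when, at each step, some fixed fraction of them signs up.)

```python
# Weight of "not signed up" branches over time. The decay happens at
# the branch points themselves, regardless of which single path any
# particular copy of you follows.
p_signup = 0.05   # assumed per-step probability of signing up
weight = 1.0      # weight of not-signed-up branches at the start

history = []
for step in range(10):
    weight *= 1 - p_signup
    history.append(round(weight, 3))
print(history)    # strictly decreasing: [0.95, 0.902, ..., 0.599]
```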

• There is no situation where two identical objects can be observed in the same place at the same time.

If we ignore their physical location and look at the flow of events, time will split them the moment one is copied: their first experiences will differ, creating two different identities.

If we ignore the location and observe them both at a single moment of time, this would be similar to looking at two identical photos of the same person; we would not be able to spot a difference in their identity unless we press the "Play" button again.

I assume there is no identity without time. And where there is time, there are no exact copies.

• “If cry­on­ics were widely seen in the same terms as any other med­i­cal pro­ce­dure, economies of scale would con­sid­er­ably diminish the cost”

To what de­gree are these economies of scale as­sumed? Is it re­ally vi­able, both prac­ti­cally and fi­nan­cially, to cryo­geni­cally pre­serve 150,000 peo­ple a day?

Is there any par­tic­u­lar rea­son to sus­pect that in­vest­ing this sort of fund­ing in to cry­on­ics re­search is the best so­cial policy? What about other efforts to “cure death” by keep­ing peo­ple from dy­ing in the first place (for in­stance, those tech­nolo­gies that would be the nec­es­sary foun­da­tions for restor­ing peo­ple from cry­on­ics in the first place)?

I see cryonics hyped a lot here, and in rationalist / transhumanist communities at large, and it seems like an "applause light", a social signal of "I'm a rationalist; see, I even have the Mandatory Transhumanist Cryogenics Policy!"

• Liquid nitrogen is cheap, and heat loss scales as the 2/3 power of volume. Cryonically preserving 150,000 people per day would, I fully expect, be vastly cheaper than anything else we could do to combat death.
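
(To see what the 2/3-power scaling buys you, here is a minimal sketch, my own construction, assuming a hemispherical vault and roughly 3.3 litres per head as estimated elsethread: the boil-off surface per preserved head falls as the cube root of the population.)

```python
import math

def surface_per_head(n_heads, litres_per_head=3.33):
    v = n_heads * litres_per_head / 1000.0           # volume in m^3
    r = (3.0 * v / (2.0 * math.pi)) ** (1.0 / 3.0)   # hemisphere radius
    return 2.0 * math.pi * r * r / n_heads           # dome area per head, m^2

for n in (10**3, 10**6, 3 * 10**9):
    print(f"{n:>13,} heads: {surface_per_head(n):.2e} m^2/head")
# 1,000 heads:         ~8.6e-03 m^2/head
# 1,000,000 heads:     ~8.6e-04 m^2/head
# 3,000,000,000 heads: ~5.9e-05 m^2/head -- ~145x less loss per head
```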

• Could you tell us what you see in the way that cry­on­ics is “hyped” that you would be less likely to see if peo­ple praised it sim­ply be­cause it was a good idea?

• I would ex­pect to see a ra­tio­nal dis­cus­sion of the benefits and trade-offs in­volved, in such a way as to let me eval­u­ate, based on my util­ity func­tion, whether this is a good in­vest­ment for me.

In­stead, I pri­mar­ily see al­most a “re­versed stu­pidity” dis­cus­sion, com­bined with what seems like in-group sig­nal­ling: “See all these ar­gu­ments against cry­on­ics? They are all ir­ra­tional, as I have now demon­strated. QED cry­on­ics is ra­tio­nal, and you should sig­nal your con­for­mity to the Ra­tion­al­ity Tribe by sign­ing up to­day!”

I can to­tally un­der­stand why it’s pre­sented this way, but it reads off as “hype” be­cause I al­most never en­counter any­thing else. It all seems to just naively as­sume that “pre­serv­ing my in­di­vi­d­ual life at any cost is a perfectly ra­tio­nal de­ci­sion.” Maybe that re­ally is all the thought that goes in to it; if your util­ity func­tion places a suit­ably high value on self-preser­va­tion, then there’s not re­ally a lot of fur­ther dis­cus­sion re­quired.

But I get the sense that there are deeper thoughts that just never get discussed, because everyone is busy fighting against the nay-sayers. There's a deep absence of arguments for cryonics, especially ones that actually take into consideration social policy, and what else could be accomplished for $200K.

(Eliezer hinted at it, with his com­ments about economies of scale, but it was a mere foot­note, and quite pos­si­bly the first time I’ve seen any­one dis­cuss the is­sue from that per­spec­tive even briefly)

• Looks like you’ve just found an­other way of say­ing “you’re all ir­ra­tional!” with­out pro­vid­ing ev­i­dence.

• It’s more that all the ar­gu­ments I see are aimed at a differ­ent au­di­ence (cry­on­ics skep­tics). I do not take this as very strong ev­i­dence of ir­ra­tional­ity. On the other hand, any­one who posts here, I take that as de­cent ev­i­dence of ra­tio­nal­ity, es­pe­cially peo­ple like Eliezer. So I as­sume with a high prob­a­bil­ity that ei­ther the peo­ple es­pous­ing it have a differ­ent util­ity func­tion than I, or are sim­ply not talk­ing about the other half of the ar­gu­ment. I’m as­sum­ing that there is a ra­tio­nal rea­son, but ob­ject­ing be­cause I don’t feel any­one is try­ing to ra­tio­nally ex­plain it to me :)

Loosely, in my head, there’s the idea of a “nega­tive” ar­gu­ment, which is just re­but­ting your op­po­nent, or a “pos­i­tive” ar­gu­ment which ac­tu­ally looks at the ad­van­tages of your po­si­tion. I see hype, in-group sig­nal­ling, and “nega­tive” ar­gu­ments. I’m in­ter­ested in see­ing some “pos­i­tive” ones.

As far as ev­i­dence, I did ac­tu­ally just put up a post dis­cussing speci­fi­cally the “economies of scale” ar­gu­ment. It is thus far the only “pos­i­tive” ar­gu­ment I’ve heard for it, aside from the (IMO) very weak ar­gu­ment of “who doesn’t want im­mor­tal­ity?” (I find it weak speci­fi­cally be­cause it ig­nores both availa­bil­ity and price, and glosses over how re­li­a­bil­ity is af­fected by those two fac­tors as well)

Hope­fully that was clearer!

• Manda­tory link on cry­on­ics scal­ing that ba­si­cally agrees with Eliezer:

http://lesswrong.com/lw/2f5/cryonics_wants_to_be_big/

• Un­less mod­ern figures have drifted dra­mat­i­cally, free stor­age would give you a whop­ping 25% off coupon.

This is based on the 1990 rates I found for Al­cor. And based on Al­cor’s com­men­tary on those prices, this is an op­ti­mistic es­ti­mate.

Cost of cryogenic suspension (neuro-suspension only): $18,908.76

Cost of fund to cover all maintenance costs: $6,600

Proportional cost of maintenance: 25.87% (6,600 / 25,508.76)

I’d also echo ci­pher­goth’s re­quest for any sort of ac­tual cita­tion on the num­bers in that post; the en­tire post strikes me as mak­ing some ab­surdly op­ti­mistic as­sump­tions (or some ut­terly triv­ial ones, if the au­thor was talk­ing about neuro-sus­pen­sion in­stead of whole-body...)

• Ques­tion for Eliezer and ev­ery­one else :

Would you re­ally not care about dy­ing if you knew you had a full backup body (with an up-to-date ver­sion of your brain) just wait­ing to be wo­ken up ?

• Eliezer: “my own in­surance policy, with CI.”? I thought you had said you were signed up as a neuro rather than full body. As far as I know, CI only does full body rather than neuro.

(Isn’t neuro sup­posed to be bet­ter, any­ways? That is, bet­ter chance of “clean” brain vit­rifi­ca­tion?)

• Court, that pa­per ad­dresses the gen­eral ques­tion of what we can know about the out­come of the Sin­gu­lar­ity.

Some­thing’s been bug­ging me about MWI and sce­nar­ios like this: am I perform­ing some sort of act of quan­tum al­tru­ism by not get­ting frozen since that means that “I” will be ex­pe­rienc­ing not get­ting frozen while some other me, or rather set of world-branches of me, will ex­pe­rience get­ting frozen?

Not re­ally, since your de­ci­sion de­ter­mines the rel­a­tive sizes of the sets of branches.

• Hmm, assuming Dewar levels of insulation* and guesstimates for a few other numbers, like the energy required to produce a litre of N2 (2 kWh/l), I got 7 litres lost per second and a 50 kW supply for energy.

* I'm not sure this is a safe assumption. A Dewar is fully sealed; we are putting 495,000 litres of material in per day.

It looks like the cost to freeze the heads would dwarf this as well: 137 litres per second of denser material with higher specific heat capacity to cool to liquid nitrogen temperatures. Probably up to the megawatt range, if not more. Not taking into account travel energy costs and freezing costs while travelling.

It would be in­ter­est­ing to see whether it is bet­ter to have 1 gi­ant store or many smaller ones. Any­one up for brain­storm­ing a de­sign? I am not too in­ter­ested in per­sonal sur­vival but if it can be done for min­i­mal-ish cost it would be very worth­while from an archival of hu­man­ity point of view.

• I think Kaj’s con­cerns are silly and I’m all for shut­ting up and mul­ti­ply­ing, but is there a strong ar­gu­ment why the ex­pected util­ity of bet­ter-than-death out­comes out­weighs the ex­pected nega­tive util­ity of worse-than-death out­comes (boots stamp­ing on hu­man faces for­ever and the like)?

• @Kaj: There are more cheer­ful prospects. I think you are still too much caught up in an “essence” of you which acts. There is no such thing. There is no di­chotomy be­tween you and the uni­verse.

The anguish you feel is anguish about your own (the universe's!) suffering. Try to be happy; you will increase happiness overall.

Eastern philos­o­phy helps, it merges well with ma­te­ri­al­ism. You are only dis­turbed if you can’t get rid of deeply-con­di­tioned Western philo­soph­i­cal as­sump­tions.

Ray­mond Smul­lyan’s “The Tao is Silent”
Joseph Gold­stein’s “One Dharma: The Emerg­ing Western Bud­dhism” is ex­cel­lent also.

On my read­ing list (looks highly rele­vant) is this book:
Ko­lak, Daniel. “I Am You: The Me­ta­phys­i­cal Foun­da­tions for Global Ethics”

Maybe you want to check that out too.

Some inspiration from Lao Tse's Dao de jing (verse two):

Un­der heaven all can see beauty as beauty only be­cause there is ugli­ness.
All can know good as good only be­cause there is evil.

There­fore hav­ing and not hav­ing arise to­gether.
Difficult and easy com­ple­ment each other.
Long and short con­trast each other:
High and low rest upon each other;
Voice and sound har­mo­nize each other;
Front and back fol­low one an­other.

There­fore the sage goes about do­ing noth­ing, teach­ing no-talk­ing.
The ten thou­sand things rise and fall with­out cease,
Creat­ing, yet not.
Work­ing, yet not tak­ing credit.
Work is done, then for­got­ten.
There­fore it lasts for­ever.

Cheers,
Günther

• Nick,

Noth­ing about cry­on­ics there. That was what I was refer­ring to speci­fi­cally in bring­ing up Pas­cal’s Wager. Or am I miss­ing some­thing?

• Michael Vassar, thanks for the start of the calculation. Shame you didn't actually finish it by giving the energy needed to maintain temperature per square metre. This could be anywhere from 1 watt to 1000 watts; I don't personally have a good estimate of insulation/nitrogen loss at this temperature.

Taking into account how much energy will be needed to take 150k heads down to −200 degrees C would also be good. I am pressed for time, so I may not get around to it.

• What if cry­on­ics were phrased as the abil­ity to cre­ate an iden­ti­cal twin from your brain at some point in the fu­ture, rather than ‘you’ wak­ing up. If all ver­sions of peo­ple are the same, this dis­tinc­tion should be im­ma­te­rial. But do you think it would have the same ap­peal to peo­ple?
I don't know, and unless you're trying to market it, I don't think it matters. People make silly judgements on many subjects; blindly copying the majority in this society isn't particularly good advice.

Each twin might feel strong re­gard for the other, but there’s no way they would ac­tu­ally be com­pletely in­differ­ent be­tween pain for them­selves and pain for their twin.
Any re­ac­tion of this kind is ei­ther ir­ra­tional, based on di­ver­gence which has already taken place, or based on value sys­tems very differ­ent from my own. In real life, you’d prob­a­bly get a mix of the first two, and pos­si­bly also the last, from most peo­ple.

If an­other ‘me’ were cre­ated on mars and then got a bul­let in the head, this would be sad, but no more so than any other death. It wouldn’t feel like a life-ex­tend­ing boon when he was cre­ated, nor a hor­rible blow to my im­mor­tal­ity when he was de­stroyed.
For me, this would be a quan­ti­ta­tive judge­ment: it de­pends on how much both in­stances have changed since the split. If the time lived be­fore the split is sig­nifi­cantly longer than that af­ter, I would con­sider the other in­stance a near-backup, and judge the rele­vance of its de­struc­tion ac­cord­ingly. Aside from the as­pect of valu­ing the other per­son as a hu­man like any other that also hap­pens to share most of your val­ues, it’s effec­tively like los­ing the only (and some­what out-of-date) backup of a very im­por­tant file: No ter­rible loss if you can keep the origi­nal in­tact un­til you can make a new backup, but an in­creased dan­ger in the mean­time.

If you truly be­lieve that ‘the same atoms means its ‘you’ in ev­ery sense’, sup­pose I’m go­ing to scan you and cre­ate an iden­ti­cal copy of you on mars. Would you im­me­di­ately trans­fer half your life sav­ings to a bank ac­count only ac­cessible from mars? What if I did this a hun­dred times?
Maybe, maybe not, de­pends on the ex­act strat­egy I’d mapped out be­fore­hand for what each of the copies will do af­ter the split. If I didn’t have enough fore­sight to do that be­fore­hand, all of my in­stances would have to agree on the strat­egy (in­clud­ing al­lo­ca­tion of ini­tial re­sources) over IRC or wiki or some­thing, which could get messy with a hun­dred of them—so please, if you ever do this, give me a week of ad­vance warn­ing. Split­ting it up evenly might be ok in the case of two copies (as­sum­ing they both have com­pa­rable ex­pected fi­nan­cial load and in­come in the near term), but would fail hor­ribly for a hun­dred; there just wouldn’t be enough money left for any of them to mat­ter at all (I’m a poor uni­ver­sity stu­dent, cur­rently; I don’t re­ally have “life sav­ings” in trans­ferrable for­mat).

• Some­thing’s been bug­ging me about MWI and sce­nar­ios like this: am I perform­ing some sort of act of quan­tum al­tru­ism by not get­ting frozen since that means that “I” will be ex­pe­rienc­ing not get­ting frozen while some other me, or rather set of world-branches of me, will ex­pe­rience get­ting frozen?

• Michael Anis­si­mov raises a good ques­tion about post length. Eliezer, I think some of your posts could benefit from be­ing shorter. You have to say what you need to, but peo­ple are more likely to read shorter blog posts.

Even be­fore I’d read the se­ries on quan­tum physics, I can’t imag­ine fear of still be­ing the same per­son as a rea­son I wouldn’t sign up for cry­on­ics. My un­der­stand­ing was that all the atoms mak­ing up your body change many times in a life­time any­way, and while that used to dis­tress me I wouldn’t have seen it as a prob­lem that would be ex­ac­er­bated greatly by sign­ing up for cry­on­ics. The only rea­son I haven’t signed up for cry­on­ics yet is money, but hope­fully I’ll be able to over­come that soon.

• John Faben: “If you re­ally be­lieve that sign­ing up for cry­on­ics is so im­por­tant, why aren’t you be­ing frozen now?”

I’m not sure any­one’s claimed that cry­on­ics is 100% guaran­teed to work. So com­mit­ting suicide just to get frozen would be an odd thing to do, given such un­cer­tainty.

• Phil: What makes you say nega­tive util­i­tar­i­anism is “a com­mon view in Western cul­ture since 1970”?

Pas­cal wa­gered on gain­ing im­mor­tal­ity via God, Eliezer wa­gers on gain­ing im­mor­tal­ity via the Sin­gu­lar­ity.… The same fal­lacy seems to ap­ply to Eliezer’s Wager—even if the Sin­gu­lar­ity is true, how can we know its char­ac­ter­is­tics, e.g., that some fu­ture benev­olent AI will re-an­i­mate his frozen brain?
• “Will Pear­son: Shut up and mul­ti­ply. 150K/​day adds up to about 3B af­ter 60 years, which is a con­ser­va­tively high es­ti­mate for how long we need. Heads have a vol­ume of a few liters, call it 3.33 for con­ve­nience, so that’s 10M cu­bic me­ters. Cool­ing in­volves mas­sive economies of scale, as only sur­faces mat­ter. All we are talk­ing about is, as­sum­ing a hemi­spher­i­cal fa­cil­ity, 168 me­ters of ra­dius and 267,200 square me­ters of sur­face area. Not a lot to in­su­late. One small power plant could eas­ily power the main­te­nance of such a fa­cil­ity at liquid ni­tro­gen tem­per­a­tures.

Michael Vas­sar—you’ve also as­sumed here that the num­ber “150K/​day” is go­ing to re­main con­stant over the next 60 years: it’s go­ing to in­crease.
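
(For concreteness, a quick check of the quoted arithmetic: a minimal sketch assuming exactly 3 billion heads at 3.33 litres each and a hemispherical facility.)

```python
import math

heads = 3_000_000_000                  # "call it 3B" after 60 years
volume = heads * 3.33 / 1000.0         # m^3; roughly 10 million
radius = (3.0 * volume / (2.0 * math.pi)) ** (1.0 / 3.0)
surface = 3.0 * math.pi * radius ** 2  # curved dome plus flat floor
print(round(volume), round(radius), round(surface))
# ~9,990,000 m^3, ~168 m, ~267,000 m^2 -- matching the figures quoted
```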

I’m se­ri­ous. Other­wise you’ll buy lot­tery tick­ets be­cause some ver­sion of you wins, make in­con­sis­tent choices on the Allais para­dox, choose SPECKS over TORTURE...

Eliezer—I'm largely unconvinced by MWI, or at least by your interpretation. But I'm not going to try to argue it here.

You’re a great writer, you’re clever, and very quick. But you haven’t got a clue about moral­ity. Your tor­ture-over-specks con­clu­sion, and the line of ar­gu­ment which was used to reach it, is crip­plingly flawed. And ev­ery time you re­peat it, you de­lude minds.

• The thought of I—and yes, since there are no origi­nals or copies, the very I writ­ing this—hav­ing a guaran­teed cer­tainty of end­ing up do­ing that causes me so much an­guish that I can’t help but think­ing that if true, hu­man­ity should be de­stroyed in or­der to min­i­mize the amount of branches where peo­ple end up in such situ­a­tions. I find lit­tle com­fort in the prospect of the “be­trayal branches” be­ing van­ish­ingly few in fre­quency—in ab­solute num­bers, their amount is still uni­mag­in­ably large, and more are born ev­ery mo­ment.
To para­phrase:

Statis­ti­cally, it is in­evitable that some­one, some­where, will suffer. There­fore, we should de­stroy the world.

Eli’s posts, when dis­cussing ra­tio­nal­ity and com­mu­ni­ca­tion, tend to fo­cus on failures to com­mu­ni­cate in­for­ma­tion. I find that dis­agree­ments that I have with “nor­mal peo­ple” are some­times be­cause they have some un­der­ly­ing bizarre value func­tion, such as Kaj’s val­u­a­tion (a com­mon one in Western cul­ture since about 1970) that Utility(good things hap­pen­ing in 99.9999% of wor­lds—bad things hap­pen­ing in 0.0001% of wor­lds) < 0. I don’t know how to re­solve such differ­ences ra­tio­nally.

• Brandon: And isn't multiplying infinities by finite integers to prove values through quantitative comparison an exercise doomed to failure?

In­fini­ties? OK, I’m fine with my mind smeared frozen in causal flow­ma­tion over countlessly split­ting wave pat­terns but please, no in­finite split­ting. It’s just un­nerv­ing.

• If you truly be­lieve that ‘the same atoms means its ‘you’ in ev­ery sense’, sup­pose I’m go­ing to scan you and cre­ate an iden­ti­cal copy of you on mars. Would you im­me­di­ately trans­fer half your life sav­ings to a bank ac­count only ac­cessible from mars?

• Absolutely, as there is a 50% chance that after the copy "I" will be the one ending up on Mars. If 100 copies were going to be made, I would be pretty screwed; I think I would move to a welfare state first :)

Alter­na­tively, I would ask that they pick one of the copies at ran­dom and give him the money and kill the other 99. Of course, this would have the same effect as the copies never be­ing made (in a sense).

• If you truly be­lieve that ‘the same atoms means its ‘you’ in ev­ery sense’, sup­pose I’m go­ing to scan you and cre­ate an iden­ti­cal copy of you on mars. Would you im­me­di­ately trans­fer half your life sav­ings to a bank ac­count only ac­cessible from mars?

Even as­sum­ing that I could con­firm where my money was ac­tu­ally go­ing, I don’t think a copy of my­self left on Mars would have much use for money. So, no.

• Se­bas­tian:

Take this as a fur­ther ques­tion. One of the key dis­tinc­tions be­tween the ‘you you’ and the ‘iden­ti­cal twin you’ is the types of sac­ri­fice I’ll make for each one. Notwith­stand­ing that I can’t tell you why I’m still the same per­son when I wake up to­mor­row, I’ll sac­ri­fice for my fu­ture self in ways that I won’t for an atom-ex­act iden­ti­cal twin.

If you truly be­lieve that ‘the same atoms means its ‘you’ in ev­ery sense’, sup­pose I’m go­ing to scan you and cre­ate an iden­ti­cal copy of you on mars. Would you im­me­di­ately trans­fer half your life sav­ings to a bank ac­count only ac­cessible from mars? What if I did this a hun­dred times? If the same atoms make it the same per­son, why wouldn’t you?

And if you don’t re­ally have the same re­gard for a ‘copy’ of your­self while you’re still al­ive, why should this change when the origi­nal brain stays cryo­geni­cally frozen and a copy is cre­ated?

• Se­bas­tian:

I see your point that given the atoms are what they are, they are ‘the same per­son’, but can’t get around the sense that it still mat­ters on some level.

What if cry­on­ics were phrased as the abil­ity to cre­ate an iden­ti­cal twin from your brain at some point in the fu­ture, rather than ‘you’ wak­ing up. If all ver­sions of peo­ple are the same, this dis­tinc­tion should be im­ma­te­rial. But do you think it would have the same ap­peal to peo­ple?

Sup­pose you do a cryo­gen­ics brain scan and cre­ate a sec­ond ver­sion of your­self while you’re still al­ive. Each twin might feel strong re­gard for the other, but there’s no way they would ac­tu­ally be com­pletely in­differ­ent be­tween pain for them­selves and pain for their twin. They share a past up to a cer­tain point, and were iden­ti­cal when cre­ated, but that’s it. If an­other ‘me’ were cre­ated on mars and then got a bul­let in the head, this would be sad, but no more so than any other death. It wouldn’t feel like a life-ex­tend­ing boon when he was cre­ated, nor a hor­rible blow to my im­mor­tal­ity when he was de­stroyed. How is cryo­gen­ics differ­ent from this?

• RI—Aren't Surviving Brian Copies [1-1000] each their own entity? Brian-like entities? The answer to "who is better off" is: any Brian-like entities that managed to survive, any Adam-like entities that managed to survive, and any Carol-like entities that managed to survive. All in various infinite forms of "better off" based on lots of other splits from entirely unrelated circumstances. Saying or implying that Carol-Current-Instant-Prime is better off because more future versions of her survived than Adam-Current-Instant-Prime seems mistaken, because future versions of Adam or Carol are all their own entities. Aren't Adam-Next-Instant-N and Adam-Current-Instant-Prime also different entities?

And isn’t mul­ti­ply­ing in­fini­ties by finite in­te­gers to prove val­ues through quan­ti­ta­tive com­par­i­son an ex­er­cise doomed to failure?

All this try­ing to com­pare the qual­i­ta­tive val­ues of the fates of in­fini­ties of un­countable in­finite-in­fini­ties seems some­what pointless. Also: it seems to be an ex­er­cise in ig­nor­ing prob­a­bil­ity and causal­ity to make strange points that would be bet­ter made in clear state­ments.

:(

I might just mi­s­un­der­stand you.

• In that case I don't think MWI says anything we didn't already know: specifically, that 'stuff happens' outside of our control, which is something we have to deal with even in non-quantum lines of thought. Trying to make different choices when acknowledging that MWI is true will probably result in no utility gain at all, since saying that x number of future worlds out of the total will result in some undesirable state is the same as saying, under Copenhagen, that the chance it will happen to you is x out of the total. And that lack of meaningful difference should be a clue as to MWI's falsehood.

In the end the only way to guide our ac­tions is to abide by ra­tio­nal ethics, and seek to im­prove those.

• I sup­pose I’ll just have to deal with it, then. Sigh—I was ex­pect­ing there to be some more cheer­ful an­swer, which I’d just failed to re­al­ize. Vas­sar’s re­sponse does help a bit.

• Also, if you count on quan­tum im­mor­tal­ity alone, the mea­sure of fu­ture-yous sur­viv­ing through freak­ish good luck will be much smaller than the mea­sure that would sur­vive with cry­on­ics. I’m not sure how this mat­ters, though, be­cause naive weight­ing seems to im­ply a very steep dis­count rate to ac­count for con­stant split­ting, which seems ab­surd.

• Just thought I'd mention that if one wants to consider Parfit's thought experiment (the brain scanner that non-destructively copies you) alongside the underlying quantum mechanical nature of reality, you have to remember the no-cloning theorem.

http://en.wikipedia.org/wiki/No_cloning_theorem

Thus if you consider yourself to be a specific quantum state, Parfit's machine cannot possibly exist. Of course there are subtleties here, but I just thought I'd throw that in for people to consider.
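(For readers meeting the theorem for the first time, here is a minimal sketch of the standard linearity argument; the notation is mine, not the commenter's. Suppose a single unitary $U$ could copy an arbitrary unknown state onto a blank register:

$$U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle, \qquad U|\phi\rangle|0\rangle = |\phi\rangle|\phi\rangle.$$

Unitaries preserve inner products, so comparing the two equations gives $\langle\phi|\psi\rangle = \langle\phi|\psi\rangle^2$, forcing $\langle\phi|\psi\rangle$ to be 0 or 1. Such a machine could only copy states that are identical or orthogonal to each other, never arbitrary unknown ones.)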

• Is there re­ally any­one who would sign up for cry­on­ics ex­cept that they are wor­ried that their fu­ture re­vived self wouldn’t be made of the same atoms and thus would not be them? The case for cry­on­ics (a case that per­suades me) should be sim­pler than this.

I think that's just a point in the larger argument that whatever the "consciousness we experience" is, it's at a sufficiently high level that it does survive massive changes at the quantum level over the course of a single night's sleep. If worry about something as seemingly disastrous as having all the molecules in your body replaced with identical twins can be shown to be unfounded, then worrying about the effects of being frozen for a few decades on your consciousness should seem similarly unfounded.

• Eliezer,

• Is the 'you' on Mars the same as 'you' on Earth?
There's one of you on Earth, and one on Mars. They start out (by assumption) the same, but will presumably increasingly diverge due to different input from the environment. What else is there to know? What does the word 'same' mean for you?

And what exactly does that mean if the 'you' on Earth doesn't get to experience the other one's sensations first hand? Why should I care what happens to him/me?
That's between your world model and your values. If this happened to me, I'd care because the other instance of myself happens to have similar values to the instance making the judgement, and will therefore try to steer the future into states that we will both prefer.

• “Other physi­cists ar­gue that as­pects of time are real, such as the re­la­tion­ships of causal­ity, that record which events were the nec­es­sary causes of oth­ers. Pen­rose, Sorkin and Markopoulou have pro­posed mod­els of quan­tum space­time in which ev­ery­thing real re­duces to these re­la­tion­ships of causal­ity.”

I guess Eliezer is already aware of these the­o­ries...

• I'm a member of Alcor. I wear my ID necklace, but not the bracelet. I sometimes wonder how much my probability of being successfully suspended depends on wearing my ID tags, and whether I have a significantly higher probability from wearing both. I've assigned a very high (70%+) probability to wearing at least one form of Alcor ID, but it seems an additional one doesn't add as much, assuming emergency response personnel are trained to check the neck and wrists for special-case IDs. In most cases where I could catastrophically lose one form of ID (such as dismemberment!) I would probably not be viable for suspension. What do you other members think?
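(A toy calculation of that diminishing return, with invented numbers rather than anything from Alcor's materials: if each tag independently survives the accident and gets noticed with probability $p$, then at least one is noticed with probability $1-(1-p)^2$. At $p = 0.7$, the second tag raises the total from $0.70$ to $1 - 0.3^2 = 0.91$, a smaller increment than the first tag provided; and since the catastrophes that destroy one tag tend to destroy both, the true marginal gain is smaller still.)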

• @Ian Maxwell: It's not about the yous in the universes where you have signed up; it's about all of the yous that die when you're not signed up. I.e., none of the yous that die on your way to work tomorrow are going to get frozen.

(This is mak­ing me won­der if any­one has de­vel­oped a cor­re­spond­ing gram­mar for many wor­lds yet...)

• Kaj So­tala, it seems you have stum­bled upon the apoc­a­lyp­tic im­per­a­tive.

• The Rudi Hoff­man link is bro­ken.

Is there any liter­a­ture on the likely en­ergy costs of large scale cry­on­ics?

And do you have a cunning plan where adding an extra 150k vitrified people per day to maintain does not drive up already heavenward-bound energy prices, reducing the quality of life of the poorest? This could lead to conflict and more death; see South Africa's recent experience for an example of poor energy planning.

A prerequisite for large scale cryonics seems to me to be a stable and easily growable energy supply, which we just don't seem to be able to manage at the moment.

• Isn’t that then how we dis­t­in­guish a non­de­struc­tive copy from the origi­nal?

Not un­less you pos­tu­late an im­perfect copy, with co­va­lent bonds in differ­ent places and so on.

Once you be­gin to pos­tu­late a the­o­ret­i­cally im­perfect copy, the Gen­er­al­ized ver­sion of the Anti-Zom­bie Prin­ci­ple has to take over and ask whether the differ­ences have more im­pact on your in­ter­nal nar­ra­tive /​ mem­o­ries /​ per­son­al­ity etc. than the cor­re­spond­ing effects of ther­mal noise at room tem­per­a­ture.

• Eliezer: That’s how we dis­t­in­guish Eliezer from Mitchell.

Isn’t that then how we dis­t­in­guish a non­de­struc­tive copy from the origi­nal? If the origi­nal has been copied non­de­struc­tively, why shouldn’t we con­tinue to re­gard it as the origi­nal?

• I can­not ex­pe­rience what fu­ture me will ex­pe­rience, not even what past me ex­pe­rienced. I can­not ex­pe­rience what my hy­po­thet­i­cal copy ex­pe­riences. The con­figu­ra­tion that leads to my iden­tity is not im­por­tant. The only thing I can value and pre­serve is what I ex­pe­rience now.

Why should I care about a copy of me? Invest in a resurrected version of myself?

• Quan­tum non-same­ness of the con­figu­ra­tions from mo­ment to mo­ment, and quan­tum ab­solute equal­ity of “the same sorts of par­ti­cles in the same ar­range­ment” are both illus­tra­tive as ex­tremes, but the ques­tion looks much sim­pler to me. Since I have ev­ery rea­son to sup­pose “the me of me” is in­for­ma­tional, I can sim­ply ap­ply what I know of in­for­ma­tion: that it ex­ists as pat­terns in­de­pen­dent of a par­tic­u­lar sub­strate, and that it can be copied and still be the origi­nal. If I’m copied then the two mes will start di­verg­ing and be­come dis­t­in­guish­able, but nei­ther has a stronger claim.

• Clearly, due to the but­terfly effect, storms will be highly in­fluenced by quan­tum ran­dom­ness. If we re­set the world to 5 years ago and put ev­ery molecule on the same track, New Or­leans would not have been de­stroyed in al­most all cases.

No, no, no. The vast ma­jor­ity will have New Or­leans de­stroyed, but in slightly differ­ent ways. Yes, weather is chaotic, but it evolves in fairly set ways. The origi­nal Lorenz at­trac­tor is chaotic, but it has a definite shape that re­curs.

• … there­fore, you can pre­dict that some hur­ri­canes will oc­cur in the area, but not pre­cisely where they will be. The origi­nal state­ment is cor­rect.
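(A quick numerical illustration of this chaotic-but-structured behavior, using the textbook Lorenz system as a stand-in for real weather; a sketch only, with all parameter choices mine:)

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the classic Lorenz system by one forward-Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Two worlds whose initial conditions differ by one part in a billion.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])

for step in range(5001):
    if step % 1000 == 0:
        print(f"t={step * 0.01:5.1f}  separation={np.linalg.norm(a - b):.3e}")
    a, b = lorenz_step(a), lorenz_step(b)

# The separation grows by roughly ten orders of magnitude (sensitive
# dependence on initial conditions), yet both trajectories stay bounded
# on the same butterfly-shaped attractor: the details of any given storm
# differ between runs, but the overall shape of the dynamics recurs.
```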

• David,

You’re right not to feel a ‘blow to your im­mor­tal­ity’ should that hap­pen; but con­sider an al­ter­nate story:

You step into the tele­port cham­ber on Earth and, af­ter a weird glow sur­rounds you, you step out on Mars feel­ing just fine and dandy. Then some­body tells you that there was a copy of you left in the Earth booth, and that the copy was just as­sas­si­nated by anti-clon­ing ex­trem­ists.

The point of the iden­tity post is that there’s re­ally no differ­ence at all be­tween this story and the one you just told, ex­cept that in this story you sub­jec­tively feel you’ve trav­eled a long way in­stead of stay­ing in the booth on Earth.

Both of the copies are you (or, more pre­cisely, be­fore you step into the booth each copy is a fu­ture you); and to each copy, the other copy is just a clone that shares their mem­o­ries up to time X.

• Kaj: there is a more cheerful answer, and this is it: Many-Worlds isn't true. Although Eliezer may be confident, the final word on the issue is still a long way off. Eliezer has been illogical in enough of his reasoning that there is reason to question that confidence.

• I can un­der­stand why cre­at­ing a re­con­struc­tion of a frozen brain might still be con­sid­ered ‘you’. But what hap­pens if mul­ti­ple ver­sions of ‘you’ are cre­ated? Are they all still ‘you’? If I cre­ate 4 re­con­struc­tions of a brain and put them in four differ­ent bod­ies, punch­ing one in the arm will not cre­ate nerve im­pulses in the other three. And the punched brain will be­gin to think differ­ent thoughts (‘who is this jerk punch­ing me?’).

In that case, all 4 brains started as ‘you’, but will not ex­pe­rience the same sub­se­quent thoughts, and will be as dis­con­nected from each other as iden­ti­cal twins.

This is basically the first Parfit example, which I note you don't actually address. Is the 'you' on Mars the same as 'you' on Earth? And what exactly does that mean if the 'you' on Earth doesn't get to experience the other one's sensations first hand? Why should I care what happens to him/me?

• As a matter of historical coherence, as it were, see Nagarjuna's Mūlamadhyamaka-kārikā (Fundamental Verses of the Middle Way). Concerning the point that 'nothing happens,' you have more or less arrived at the same conclusions, though needless to say his version lacks the fancy mathematical footwork. I tend to think that your fundamental position regarding the physical nature of existence, insofar as I understand it, is probably correct. It's where you go from there that's a little more troubling.

Nagarjuna extrapolates from his views that via the Law of Karma we can reach Nirvana; Eliezer extrapolates from his views that via the Laws of Physics we can reach the Singularity. Both hold that their Law(s) do not require our assent; they continue to operate whether we believe in them or not, and furthermore, their operation is inevitable. I am very skeptical that this follows in either case.

As regards cryonics, it seems to me what Eliezer is doing is fairly simple: he's taking Pascal's Wager. Pascal wagered on gaining immortality via God; Eliezer wagers on gaining immortality via the Singularity. There's no harm in it, per se, any more than there was in Pascal's being a believing Christian. But one of the major fallacies in Pascal's Wager is the assumption that we know God's characteristics, e.g., that if I believe in Him, He will reward me with eternal life. The same fallacy seems to apply to Eliezer's Wager: even if the Singularity is true, how can we know its characteristics, e.g., that some future benevolent AI will re-animate his frozen brain?

Perhaps, Eliezer, you could fill in the gaps in future posts.

• note that Parfit is de­scribing thought ex­per­i­ments, not nec­es­sar­ily en­dors­ing them.

I spy with my lit­tle eye some­thing be­gin­ning with D.
