# MWI, copies and probability

Followup to: Poll: What value extra copies?

For those of you who didn’t follow Eliezer’s Quantum Physics Sequence, let me reiterate that there is something very messed up about the universe we live in. Specifically, the Many Worlds Interpretation (MWI) of quantum mechanics states that our entire classical world gets copied something like 10^(40±20) times per second¹. You are not a line through time, but a branching tree.

If you think carefully about Descartes’ “I think therefore I am” style of skepticism, and approach your stream of sensory observations from such a skeptical point of view, you should note that if you really were just one branch-line in a person-tree, it would feel exactly the same as being a unique person-line through time: looking backwards, a tree looks like a line, and your memory can only look backwards.

However, the rules of quantum mechanics mean that the integral of the modulus squared of the amplitude density, ∫|Ψ|², is conserved in the copying process. Therefore, the tree that is you has branches that get thinner as they branch off (where thickness is ∫|Ψ|² over the localized density “blob” that represents that branch). In fact, they get thinner in such a way that if you gathered them together into a bundle, the bundle would be as thick as the trunk it came from.
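The conservation argument in this paragraph can be sketched in LaTeX as follows (my notation; the decomposition into non-overlapping blobs is the decoherence approximation, not a derivation):

```latex
% Unitary time evolution U preserves the total measure:
\[
  \int \lvert U\Psi \rvert^2 \;=\; \int \lvert \Psi \rvert^2 .
\]
% After decoherence, \Psi \approx \sum_i \Psi_i , where the localized
% blobs \Psi_i have negligible overlap, so the cross terms vanish and
\[
  \int \lvert \Psi \rvert^2 \;\approx\; \sum_i \int \lvert \Psi_i \rvert^2 ,
\]
% i.e. the branch ``thicknesses'' sum to the thickness of the trunk.
```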

Now, since each copying event creates a slightly different classical universe, the copies in each of the sub-branches will each experience random events going differently. This means that over a timescale of decades, they will be totally “different” people, with different jobs, probably different partners, living in different places, though they will (of course) have your DNA, approximate physical appearance, and an identical history up until the time they branched off. For timescales on the order of a day, I suspect that almost all of the copies will be virtually identical to you, even down to going to bed at the same time, having exactly the same schedule that day, thinking almost all of the same thoughts, etc.

## MWI mixes copies and probability

When a “random” event happens, either the event was pseudorandom (like a large digit of pi) or it was a copy event, meaning that both (or all) outcomes were realized elsewhere in the wavefunction. This means that in many situations, when you say “there is a probability p of event X happening”, what this really means is “proportion p of my copy-children will experience X”.
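The last sentence can be mimicked with a toy bookkeeping sketch (illustrative only, not real quantum dynamics): represent each branch as a (weight, history) pair, with the weight standing in for ∫|Ψ|² over that branch’s blob, and “probability p of X” falls out as the total weight of the X-branches:

```python
# Toy model: probability as proportion of branch measure.
# Each branch is (weight, history); weights play the role of the
# integral of |Psi|^2 over a decohered blob. Illustrative sketch only.

def branch(worlds, outcomes_with_probs):
    """Split every branch according to the given (outcome, p) pairs."""
    return [
        (w * p, history + [outcome])
        for (w, history) in worlds
        for (outcome, p) in outcomes_with_probs
    ]

worlds = [(1.0, [])]                                   # the trunk
worlds = branch(worlds, [("X", 0.25), ("not-X", 0.75)])

measure_of_X = sum(w for (w, h) in worlds if "X" in h)
total_measure = sum(w for (w, h) in worlds)

print(measure_of_X)   # 0.25 — "proportion 0.25 of my copy-children experience X"
print(total_measure)  # 1.0  — the bundle is as thick as the trunk
```

Repeated splitting just multiplies weights down each path, which is why the “thickness” picture and the usual probability calculus agree.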

In Poll: What value extra copies?, I asked what value people placed upon non-interacting extra copies of themselves, asking both about lock-step identical and statistically identical copies. The overwhelming opinion was that neither was of much value. For example, Sly comments:²

“I would place 0 value on a copy that does not interact with me. This might be odd, but a copy of me that is non-interacting is indistinguishable from a copy of someone else that is non-interacting. Why does it matter that it is a copy of me?”

## How to get away with attempted murder

Suppose you throw a grenade with a quantum detonator at Sly. The detonator samples a qubit in an even superposition of states 1 and 0. On a 0, it explodes, instantly vaporizing Sly (it’s a very powerful grenade). On a 1, it defuses the grenade and dispenses a \$100 note. Suppose that you throw it and observe that it doesn’t explode:

(A) Does Sly charge you with attempted murder, or does he thank you for giving him \$100 in exchange for something that had no value to him anyway?

(B) If he thanks you for the free \$100, does he ask for another one of those nice free hundred-dollar-note dispensers? (This is the “quantum suicide” option.)

(C) If he says “the one you’ve already given me was great, but no more please”, then presumably if you throw another one against his will, he will thank you for the free \$100 again. And so on ad infinitum. Sly is temporally inconsistent if this option is chosen.
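Option (C)’s regress can be made concrete with a sketch: each grenade halves the branch measure in which Sly survives, while every surviving branch is \$100 richer, so a Sly who keeps saying “thanks” endorses an unbounded sequence of halvings:

```python
# Illustrative sketch: repeated quantum grenades.
# Each throw halves the measure of branches in which Sly survives
# and pays the survivors $100. Just the bookkeeping implied by the
# thought experiment, not a claim about real physics.

surviving_measure = 1.0
dollars = 0

for throw in range(10):
    surviving_measure *= 0.5   # the qubit comes up 1 in half the measure
    dollars += 100             # every surviving branch gets $100

print(surviving_measure)  # 0.0009765625 — under 0.1% of the original measure
print(dollars)            # 1000 — each survivor's winnings
```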

The punch line is that the physics we run on gives us a very strong reason to care about the welfare of copies of ourselves, which is (according to my survey) a counterintuitive result.

EDIT: Quite a few people are biting the quantum suicide bullet. I think I’ll have to talk about that next. Also, Wei Dai summarizes:

Another way to think about this is that many of us seem to share the following three intuitions about non-interacting extra copies, out of which we have to give up at least one to retain logical consistency:

1. We value extra copies in other quantum branches.

2. We don’t value extra copies that are just spatially separated from us (and are not too far away).

3. We ought to value both kinds of copies the same way.

• Giving up 1 is the position of “quantum immortality”.

• Giving up 2 seems to be Roko’s position in this post.

• Giving up 3 would imply that our values are rather arbitrary: there seem to be no morally relevant differences between these two kinds of copies, so why should we value one and not the other? But according to the “complexity of value” position, perhaps this isn’t really a big problem.

I might add a fourth option that many people in the comments seem to be going after: (4) We don’t intrinsically value copies in other branches; we just have a subjective anticipation of becoming them.

1: The copying events are not discrete; rather, they consist of a continuous deformation of probability amplitude in state space. But the shape of that deformation looks a lot like a continuous approximation to a discrete copying event, and the classical rules of physics approximately govern the time evolution of the “copies” as if they were completely independent. This last statement is the phenomenon of decoherence. The uncertainty in the copying rate is due to my ignorance, and I would welcome a physicist correcting me.

2: There were many others who expressed roughly similar views. I don’t hold it as a “black mark” to pick the option that I am advising against; rather, I encourage people to honestly put forward their opinions in a spirit of communal learning.

• Your whole “paradoxical” setup works just as well if the randomizing device in the grenade is classical rather than quantum. But in the classical case our feelings are just the same, though no copies exist! The moral of the story is, I certainly do care about probability-branches of myself (the probabilities could be classical or quantum, no difference), but you haven’t yet persuaded me to care about arbitrary copies of myself elsewhere in the universe that aren’t connected to my “main tree”, so to speak.

• Your whole “paradoxical” setup works just as well if the randomizing device in the grenade is classical rather than quantum

What do you mean by “classical”? Do you mean “pseudorandom”, like a digit of pi?

• In the classical case we could convert the probability into indexical uncertainty. That is, the random choices were made at the beginning of time. There’s no tree, there are just independent copies marching in lock-step until they behave differently.

• Ditto in the quantum case.

• No, it doesn’t, if by classical you mean “pseudorandom”. The pseudorandom grenade that Sly holds “could” really have killed him, whereas the quantum grenade that he holds never stood any chance of killing the Sly that holds it, but it certainly killed his quantum twin, whom he professes not to care about.

• I think you mean “might have”. If the grenade is pseudorandom and it didn’t kill him, it just means that deterministically it couldn’t kill him. It’s perfectly equivalent to a fake grenade that you don’t know is fake.

It can’t kill you (it’s physically impossible for it to explode), but it might kill you (you don’t know that it’s physically impossible, etc. etc.).

:-P

• Sure. I agree, but for charging someone with attempted murder, it is the “might” that matters.

• Nice—I think you got across exactly what I was struggling to say, but with about 1/4 of the words!

• Another way to think about this is that many of us seem to share the following three intuitions about non-interacting extra copies, out of which we have to give up at least one to retain logical consistency:

1. We value extra copies in other quantum branches.

2. We don’t value extra copies that are just spatially separated from us (and are not too far away).

3. We ought to value both kinds of copies the same way.

• Giving up 1 is the position of “quantum immortality”.

• Giving up 2 seems to be Roko’s position in this post.

• Giving up 3 would imply that our values are rather arbitrary: there seem to be no morally relevant differences between these two kinds of copies, so why should we value one and not the other? But according to the “complexity of value” position, perhaps this isn’t really a big problem.

Another possibility is to hold a probabilistic superposition of these three positions, depending on the relative strengths of the relevant intuitions in your mind.

• Is there any evidence for 1: “We value extra copies in other quantum branches”...?

Who does that? It seems like a crazy position to take—since those are in other worlds!

Rejecting a p(0.5) grenade is not “valuing copies in other quantum branches.” It is simply not wanting to die. Making such a decision while not knowing how the probability will turn out works just the same classically, with no multiple copies involved. Evidently the decision has nothing to do with “valuing multiple copies”—and is simply the result of the observer’s uncertainty.

• Is there any evidence for 1: “We value extra copies in other quantum branches”...?

Who does that? It seems like a crazy position to take—since those are in other worlds!

Me. Valuing existence in as many Everett branches as possible sounds like one of the least arbitrary preferences one could possibly have.

• Valuing existence

Whose existence? It’s question-begging to assume that all copies share the same existence.

• How does it compare to wanting to make a large positive difference in as many Everett branches as possible?

• Roughly equal, up until the point where you are choosing what ‘positive difference’ means. While that is inevitably arbitrary, it is arbitrary in a, well, positive way. And while it does seem to me that basic self-perpetuation is in some sense more fundamental than any sophisticated value system, I don’t endorse it any more than I endorse gravity.

• Valuing existence?!? I have no idea what that means. The existence—of what?

• The existence of valuing, at least ;-)

If you ask what “existence in another Everett branch” means, it means at least that at some point it was “objectively” a probable option (“objectively” means you were not epistemically wrong about assigning them probability), so that, updatelessly, you should care about them.

• The multiverse smears me into a messy continuum of me and not-me. In this “least arbitrary” of preference schemes, it is not at all clear what is actually being valued.

If you are saying that the MWI is just a way of visualising probability, then we are back to:

“Making such a decision while not knowing how the probability will turn out works just the same classically, with no multiple copies involved. Evidently the decision has nothing to do with “valuing multiple copies”—and is simply the result of the observer’s uncertainty.”

Observers often place value on future possibilities that they might find themselves witnessing. But that is not about quantum theory, it is about observer uncertainty. You get precisely the same phenomenon in classical universes. To claim that that is valuing your future self in other worlds is thus a really bad way of looking at what is happening. What people are valuing is usually, in part, their own possible future existence. And they value that just the same whether they are in a universe with many-worlds physics—or not. The values are nothing to do with whether the laws of physics dictate that copying takes place. If it turns out experimentally that wavefunctions collapse, that will have roughly zero impact on most people’s moral systems. They never valued other Everett worlds in the first place—so their loss would mean practically nothing to them.

The “many worlds” do not significantly interfere with each other, once they are remote elements in the superposition. A short while after they have split they are gone for good. There is usually no reason to value things you will never see again. You have no way to influence them at that stage anyway. Actually caring about what happens in other worlds involves counterfactuals—and so is not something evolution can be expected to favour. That is an obvious reason for so few people actually doing it.

Maybe—from the existence of this debate—this is some curious corner of the internet where people really do care about what happens in other worlds—or at least think that they do. If so, IMO, you folk have probably been misled—and are in need of talking down. A moral system that depends on the details of the interpretation of quantum physics? Really? The idea has a high geek factor, maybe—but it seems to be lacking in common sense.

Purporting to care about a bunch of things that never happened, that can’t influence you and that you can’t do anything about makes little sense as morality—but looks a lot like signalling: “see how very much I care?” / “look at all the things I care about”. It seems to be an extreme and unbelievable signal, though—so: you are kidding—right?

• Since you are writing below my post and I sense detachment from what I’ve tried to express, I refer you to my http://lesswrong.com/lw/2di/poll_what_value_extra_copies/27ee and http://lesswrong.com/lw/2e0/mwi_copies_and_probability/27f1 comments.

ETA: I retract “detachment”. Why don’t you play Russian roulette? Because you could get killed. Why does a magician play Russian roulette? Because he knows he won’t. Someone who doesn’t value Everett branches according to their “reality mass” doesn’t win—no magician would play quantum Russian roulette. That you cannot experience being dead doesn’t mean that you are immortal. (And additionally, my preferences are over worlds, not over experiences.)

• The thing is, the correct “expected utility” sum to perform is not really much to do with “valuing Everett branches”. It is to do with what you know—and what you don’t. Some things you don’t know—because of quantum uncertainty. However, other things you don’t know because you never learned about them, other things you don’t know because you forgot them, and other things you don’t know because of your delusions. You must calculate the expected consequences of your actions based on your knowledge—and your knowledge of your ignorance. Quantum uncertainty is only a small part of that ignorance—and indeed, it is usually insignificant enough to be totally ignored.

This “valuing Everett branches” material mostly seems like a delusion to me. Human decision theory has precious little to do with the MWI.

• Take “not wanting to die” and extract the state which people are in if they do not in fact die. Alternately, consider what an observer who has not taken a “crazy position” may choose to value. Then consider the difference between ‘deep and mysterious’ and just plain silly.

• FWIW, after this “explanation”, I am none the wiser.

• Rejecting a p(0.5) grenade is not “valuing copies in other quantum branches.” It is simply not wanting to die.

You don’t seem to realise that under the many worlds interpretation, the probabilities of the different outcomes of quantum events correspond (roughly speaking) to the amplitudes assigned to different universes, each of which contains instances (i.e. ‘copies’) of you and everything else. In other words, under MWI there is no difference between ‘wanting to maximize your quantum probability of survival’ and ‘valuing copies of yourself in future quantum branches’.

[Note that I’ve substituted the word future for other. Whether A = “you at time t0” cares about B and C = “two different copies of you at time t1”, both of which are ‘descendants’ of A, is a somewhat different question from whether B cares about C. But this difference is orthogonal to the present debate.]

If you want to simply deny the MWI then fine, but you should acknowledge that that’s ultimately what you’re disagreeing with. (Also, personally I would argue that the only alternatives to the MWI are either (a) incoherent like Copenhagen, (b) unparsimonious like Bohm’s interpretation, or (c) contain unmotivated deviations from the predictions of orthodox quantum mechanics (like the GRW theory).)

• The phenomenon has nothing to do with quantum theory. You get the same result if the grenade depends on a coin toss—and the grenade recipient is ignorant of the result. That is the point I just explained.

The behaviour isn’t the result of valuing copies in other worlds—it is simply valuing your own existence under conditions of uncertainty. The same behaviour would happen just fine in deterministic classical universes with no copying. So, the phenomenon has nothing to do with valuing copies—since it happens just the same regardless of whether the universe makes copies or not.

• OK, I’ll try again, from the beginning:

What Wei Dai means by “valuing extra copies in other quantum branches” is two things:

1. (Weak version:) The fact that A values B and C, where B and C are possible ‘future selves’ of A.

2. (Strong version:) The fact that B values C, where C is B’s “counterpart in a quantum counterfactual world”.

Now, there’s an argument to be had about whether (2) should be true, even assuming (1), but right now this simply muddies the waters, and it will be much clearer if we concentrate on (1).

So, A valuing his own continued existence means A wanting it to be true that B and C, his possible future selves (in different counterfactual worlds), are both alive. A would not be very happy with B being dead and C being alive, because he would say to himself “that means I have (e.g.) a 1/2 chance of dying”. He’d much rather that B and C were both alive.

However, A might think like this: “If the Many Worlds Interpretation is true then it’s wrong to say that either B or C but not both will exist. Rather, both of them exist independently in separate universes. Now, what’s important to me is that my mind continues in some form. But I don’t actually need both B and C for that to happen. So if Roko offered me \$100 in exchange for the instantaneous, painless death of B I’d quite happily accept, because from my perspective all that will happen is that I’ll receive the \$100.”

Presumably you disagree with this reasoning, right? Even if MWI is true? Well, the powerful intuition that causes you to disagree is what Wei is talking about. (As he says, giving up that intuition is the position of “quantum immortality”.)

The fact that Wei states “the strong version” when “the weak version” would have sufficed is unfortunate. But you will completely miss the point of the debate if you concentrate solely on the difference between the two versions.

• OK, I’ll try again, from the beginning

Tim sometimes morphs into an “I won’t update” bot during debates.

• Er, what evidence exactly am I supposed to be updating on?

The supplied evidence for 1 (“We value extra copies in other quantum branches”) seems feeble. Most people are totally ignorant of the MWI. Most people lived before it was invented. Quantum theory is mostly an irrelevance—as far as people’s values go. If—astonishingly—evidence of wavefunction collapse was ever found, people would carry on caring about things much as before—without any breakdown of morality—despite the loss of practically everything in other worlds. That thought experiment seems to demonstrate that most people care very little about copies of themselves in other worlds—since they would behave much the same if scientists discovered that those worlds did not exist.

• Maybe there are somewhere a bunch of people with very odd values, who actually believe that they really do value copies of themselves in other worlds. I can think of at least one fellow who thinks like that—David Pearce. However, if so, this hypothetical silent mass of people have not stood up to be counted here.

• We can construct a less intuitive setup. You have created 99 copies of yourself.

Then every copy gets a fake grenade (which always gives \$100). The original you gets a real grenade. After the explosion/non-explosion, the remaining “you”s are merged. Will you accept the next grenade in that setup?

• I would be fine with that—assuming that the copies came out with the extra money; that the copying setup was reliable, etc.

This apparently has little to do with valuing “extra copies in other quantum branches”, though—there is no “Everett merge” procedure.

• Can I sum it up as: if you know that “backup copies” exist then it’s OK to risk being exploded? Do you care about being backed up in all Everett branches then? Or is it enough to back up in the branch where the grenade explodes?

• The usual idea of a “backup” is that it can be used to restore from if the “original” is lost or damaged. Everett worlds are not “backups” in that sense of the word. If a quantum grenade kills someone, their grieving wife and daughters are not consoled much by the fact that—in other Everett worlds—the bomb did not go off. The supposed “backups” are inaccessible to them.

• Can I sum it up as: if you know that “backup copies” exist then it’s OK to risk being exploded?

Kirk and Scotty would say yes.

• This apparently has little to do with valuing “extra copies in other quantum branches”, though—there is no “Everett merge” procedure.

While for the purposes of this discussion it makes no difference, my understanding is that the “Everett branches” form more of a mesh if you look at them closely. That is, each possible state for a world can be arrived at from many different past states, with some of those states themselves sharing common ancestors.

• Maybe—but that is certainly not the conventional MWI—see:

“Why don’t worlds fuse, as well as split?”

• Yes, entropy considerations make recombining comparatively rare. Much like it’s more likely for an egg to break than to recombine perfectly. Physical interactions being reversible in principle doesn’t mean we should expect to see things reverse themselves all that often. I doubt that we have a substantial disagreement (at least, we don’t if I take your reference to be representative of your position.)

• Is there any evidence for 1: “We value extra copies in other quantum branches”...?

Yes. Every person who says they don’t want to commit quantum suicide is giving such evidence.

• Very nice! In this setting, my position is to give up both 2 (because I don’t believe moral intuition works adequately to evaluate this situation) and 3 (“complexity of value” argument: even if we do value spatially separated copies, it’s not at all in the same way as we value MWI copies), while accepting 1 (for moral intuition, quantum branches are analogous to probability, where normal/classical situations are concerned).

• Giving up 3 would imply that our values are rather arbitrary

Is that even in question? If these values (whatever they are in a given person) can be derived from some higher value, then they may not be arbitrary, but at some point you’re either going to find a supergoal that all values derive from, or you’re going to find two values that are arbitrary with respect to each other.

Finding the latter case sooner rather than later seems to match how humans really are, so unless you’re willing to argue that humans have a supergoal, giving up 3 is a step that you’ve already taken anyway.

• We ought to value both kinds of copies the same way.

One argument in favour of this position is the subjective experience argument: you cannot tell the difference between being a quantum copy and being a classical copy.

• I don’t think MWI is analogous to creating extra simultaneous copies. In MWI, one maximizes the fraction of future selves experiencing good outcomes. I don’t care about parallel selves, only future selves. As you say, looking back at my self-tree I see a single path, and looking forward I have expectations about future copies, but looking sideways just sounds like daydreaming, and I don’t place a high marginal value on that.

• Exactly my view.

A clarification: suppose Roko throws such a qGrenade (TM) at me, and I get \$100. I will become angry and may attempt to inflict violence upon Roko. However, that is not because I’m sad about the 50% of parallel, untouchable universes where I’m dead. Instead, it is because Roko’s behavior is strong evidence that in the future he may do dangerous things; righteous anger now (and, perhaps, violence) is simply intended to reduce the measure of my current “futures” where Roko kills me.

On a slightly different note, worrying about my “parallel” copies (or even about their futures) seems to me quite akin to worrying about my past selves. It simply doesn’t mean anything. I really don’t care that my past self a year ago had a toothache—except in the limited sense that it’s slight evidence that I may in the future be predisposed to toothaches. I do care about the probability of my future selves having aching teeth, because I may become them.

Like Sly, I don’t put much value in “versions” of me I can’t interact with. (The “much” is there because, of course, I don’t know with 100% certainty how the universe works, so I can’t be 100% sure what I can interact with.) But my “future selves” are in a kind of interaction with me: what I do influences which of those future selves I’ll become. The value assigned to them is akin to the value someone in free-fall assigns to the rigidity of the surface below them: they aren’t angry because (say) the pavement is hard, in itself; they are angry because it implies a squishy future for themselves. On the other hand, they really don’t care about the surface they’ve fallen from.

• On a slightly different note, worrying about my “parallel” copies (or even about their futures) seems to me quite akin to worrying about my past selves. It simply doesn’t mean anything. I really don’t care that my past self a year ago had a toothache—except in the limited sense that it’s slight evidence that I may in the future be predisposed to toothaches. I do care about the probability of my future selves having aching teeth, because I may become them.

With this in mind, it seems that you treat a qGrenade in exactly the same way you would treat a pseudo-random grenade. You don’t care whether the probability was quantum or just ‘unknown’. My reasoning may be very slightly different, but in this regard we are in agreement.

• Yep. Grenades in MY past are always duds, otherwise I wouldn’t be here to talk about them. It doesn’t matter if they were fake, or malfunctioned, or had a pseudorandom or quantum probability to blow up. Past throwers of grenades are only relevant in the sense that they are evidence of future grenade-throwing.

Grenades in my future are those that I’m concerned about. With regards to people intending to throw grenades at me, the only distinction is how sure I am I’ll live; even something deterministic but hard to compute (for me) I consider a risk, and I’d be angry with the presumptive thrower.

(A fine point: I would be less angry to find out that someone who threw a grenade at me knew it wouldn’t blow, even if I didn’t know it at the time. I’d still be pissed, though.)

• Yep. Grenades in MY past are always duds, otherwise I wouldn’t be here to talk about them. It doesn’t matter if they were fake, or malfunctioned, or had a pseudorandom or quantum probability to blow up. Past throwers of grenades are only relevant in the sense that they are evidence of future grenade-throwing.

I would still kill them, even if I knew they were now completely reformed or impotent. If convenient, I’d beat them to death with a single box just to ram the point home.

• In MWI, one can do nothing about the proportion of future selves experiencing good outcomes that would not have happened anyway.

• Re: “In MWI one maximizes the fraction of future selves experiencing good outcomes.”

Note that the MWI is physics—not morality, though.

• You are right, I should have said something like “implementing MWI over some morality.”

• Is there something wrong with the parent beyond perhaps being slightly awkward in expression?

Tim seems to be pointing out that MWI itself doesn’t say anything about maximising, nor anything about what you should try to maximise. This corrects a misleading claim in the quote.

(Upvoted back to 0 with explanation.)

• I’ll bite this bullet. If the grenade is always reliable, and if MWI is true for certain, and if it’s not just a mathematical formalism and the other worlds actually exist, and if I don’t have any relatives or other people who’d care about me, and if my existing in this world doesn’t produce any net good for the other residents of this world, and if I won’t end up mangled… then I would accept the deal of having this grenade thrown at me in exchange for 100 dollars. Likewise, I would deem it legal to offer this trade to other people, if it could be ascertained, without any chance of corruption, coercion or abuse, that these same criteria also applied to all the people the trade was being offered to.

But for that purpose, you have to assume pretty much all of those conditions. Since we can’t actually do that in real life, this isn’t really as paradoxical a dilemma as you imply it to be. In order to make it a paradox, you’d have to tack on so many assumptions that it ceases to have nearly anything to do with the world we live in anymore.

It reminds me of the argument that utilitarianism is wrong, because utilitarianism says that doctors should kill healthy patients to get life-saving organ transplants for several other people. Yes, if you could institute this as a general policy with no chance of anyone ever finding out about it, and you could always kill the chosen people without them having a chance to escape, and a dozen other caveats, then this might be worth it… but in pretty much any conceivable real-life situation, even trying to institute such a policy would obviously just do more harm than good, so it isn’t really an argument against utilitarianism. Likewise, your proposed scenario isn’t really an argument against not valuing identical copies.

• It al­most sounds like you’re say­ing:

If I thought my life was worth­less any­way then sure, throw the grenade at me.

(I’m be­ing a bit face­tious, be­cause there is a differ­ence be­tween “my ex­ist­ing in this world doesn’t pro­duce any net good for any­one” and “my ex­ist­ing in this world doesn’t pro­duce any net good for any­one be­sides my­self”. But in real life, how plau­si­ble is it that we would have one with­out the other?)

• I never thought of it that way, but you’re right.

• You’re ig­nor­ing the trade­off that peo­ple face be­tween mak­ing them­selves hap­pier and mak­ing oth­ers hap­pier.

In this case, the \$100 makes it rel­a­tively unattrac­tive. But sup­pose it was \$100,000,000?

• Then grenade, please! I could help more per­son-mo­ments in the one Everett branch with a hun­dred mil­lion dol­lars than I could in both branches with my cur­rent level of in­come. And what’s more, my life would be more com­fortable, av­er­ag­ing over all my per­son-mo­ments.

• Yes, that’s true. In re­al­ity, find­ing real-life sys­tems that give you pos­i­tive ex­pected money for quan­tum suicide is hard. (Though far from im­pos­si­ble)

• Really? Can’t I just make a deal with Fred, pool our fi­nances and dis­tribute our quan­tum prob­a­bil­ity of life in pro­por­tion to our fi­nan­cial con­tri­bu­tion?

Or, read­ing again, are you refer­ring to max­imis­ing “p(life) * money_if_you_live”? I sup­pose that just re­lies on trad­ing with peo­ple who are des­per­ate. For ex­am­ple if Fred re­quires \$1,000,000 to cure his can­cer and has far less money than that he will benefit from trad­ing a far smaller slice of quan­tum_life at a worse price so that his slice of life ac­tu­ally in­volves liv­ing.

In­ci­den­tally Fred’s situ­a­tion there is one of very few cases where I would ac­tu­ally Quan­tum Suicide my­self. I value my quan­tum slice ex­tremely highly… but ev­ery­thing has a price (in util­ity if not dol­lars).

• The prob­lem with a lot of these tricks is that in a fair lot­tery, p(life) * money_if_you_live is fixed, and in a real lot­tery, it goes down ev­ery time you play be­cause real lot­ter­ies have nega­tive ex­pected value.
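The arithmetic here can be made concrete with a minimal sketch. The stakes and payouts below are made-up illustrative numbers, not figures from the thread:

```python
def branch_value(p_win, payout):
    """p(life) * money_if_you_live for one all-or-nothing quantum lottery."""
    return p_win * payout

stake = 1.0

# Fair lottery: double-or-nothing at even odds keeps the product fixed.
fair = branch_value(0.5, 2 * stake)

# Real lottery: the house takes a cut, so the product shrinks on every play.
real = branch_value(0.5, 1.8 * stake)

assert fair == stake   # fixed, as the comment says
assert real < stake    # goes down every time you play
```

Repeated plays just compound the second case, which is the "goes down every time you play" point above.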

• That’s why you always make sure you’re the house.

By the way, un­der­scores work like as­ter­isks do. Es­cape them with an \ if you want to use more than one.

• I think that a bet­ter solu­tion is to use the stock mar­ket as a fair lot­tery. Then you pick your bet: E(\$) is always 0. (If there were ob­vi­ous ways to lose money in ex­pec­ta­tion on the mar­ket, then there would be ob­vi­ous ways to make it. But that is un­likely)

• I think that a bet­ter solu­tion is to use the stock mar­ket as a fair lot­tery.

Fair sys­tems are good for the per­son who would oth­er­wise be ex­ploited. They aren’t good for the one who is seek­ing ad­van­tage. The whole point in this branch is that you were con­sid­er­ing the availa­bil­ity of find­ing deals that give you pos­i­tive ex­pected re­turns.

If you are look­ing for a way to en­sure a real pos­i­tive ex­pec­ta­tion from a deal you don’t cre­ate a stock mar­ket, you cre­ate a black mar­ket.

• Well, it’s a use­ful thing to have. Cer­tainly beats real lot­ter­ies.

• The punch line is that the physics we run on gives us a very strong rea­son to care about the welfare of copies of our­selves, which is (ac­cord­ing to my sur­vey) a counter-in­tu­itive re­sult.

No, it doesn’t. Max­imis­ing your Everett blob is a differ­ent thing than max­imis­ing copies. They aren’t the same thing. It is perfectly con­sis­tent to care about hav­ing your­self ex­ist­ing in as much Everett stuff as pos­si­ble but be com­pletely in­differ­ent to how many clones you have in any given branch.

Reading down to Wei’s comment and using that breakdown, premise 3) just seems totally bizarre to me:

3. We ought to value both kinds of copies the same way.

Huh? What? Why? The only rea­son to in­tu­itively con­sider that those must have the same value is to have in­tu­itions that re­ally don’t get quan­tum me­chan­ics.

I hap­pen to like the idea of hav­ing clones. I would pay to have clones across the cos­mic hori­zon. But this is in a whole differ­ent league of prefer­ence to not hav­ing me obliter­ated from half the quan­tum tree. So if I was Sly my re­sponse would be to lock you in a room and throw your 50% death grenade in with you. Then the Sly from the rele­vant branch would throw in a frag grenade to finish off the job. You just 50% mur­dered him.

It occurs to me that my intuitions for such situations are essentially updateless. Wedrifid-Sly cares about the state of the multiverse, not that of the subset of the Everett tree that happens to flow through him at that precise moment in time (timeless too). It is actually extremely difficult for me to imagine thinking in such a way that quantum-murder isn’t just mostly murdering me, even after the event.

• If your in­tu­itions are up­date­less, you should definitely care about the welfare of copies. Up­date­lessly, you are a copy of your­self.

• I have prefer­ences across the state of the uni­verse and all of my copies share them. Yet I, we, need not value hav­ing two copies of us in the uni­verse. It so hap­pens that I do have a mild prefer­ence for hav­ing such copies and a stronger prefer­ence for none of them be­ing tor­tured but this prefer­ence is or­thog­o­nal to time­less in­tu­itions.

• It so hap­pens that I do have a mild prefer­ence for hav­ing such copies and a stronger prefer­ence for none of them be­ing tor­tured but this prefer­ence is or­thog­o­nal to time­less in­tu­itions.

Want­ing your iden­ti­cal copies to not be tor­tured seems to be quintessen­tial time­less de­ci­sion the­ory...

• Want­ing your iden­ti­cal copies to not be tor­tured seems to be quintessen­tial time­less de­ci­sion the­ory...

If that is the case then I reject timeless decision theory and await a better one. (It isn’t.)

What I want for iden­ti­cal copies is a mere mat­ter of prefer­ence. There are many situ­a­tions, for ex­am­ple, where I would care not at all whether a simu­la­tion of me is be­ing tor­tured and that simu­la­tion doesn’t care ei­ther. I don’t even con­sider that to be a par­tic­u­larly in­sane prefer­ence.

• Do you like be­ing tor­tured?

• No. AND I SAY THE SAME THING AS I PREVIOUSLY DID BUT WITH EMPHASIS. ;)

• MWI copies and pos­si­ble world copies are not prob­le­matic, be­cause both situ­a­tions nat­u­rally ad­mit an in­ter­pre­ta­tion in terms of “the fu­ture me” con­cept (“split­ting sub­jec­tive ex­pe­rience”), and so the moral in­tu­itions an­chored to this con­cept work fine.

It is with within-world copies, or even worse near-copies, that the intuition breaks down: then there are multiple “future me”s, but no “the future me”. Analysis of such situations can’t rely on those moral intuitions, but a nihilistic position would also be incorrect: we are just not equipped to evaluate them.

Analysis of such situations can’t rely on those moral intuitions, but a nihilistic position would also be incorrect: we are just not equipped to evaluate them.

Do you be­lieve that a nihilis­tic po­si­tion would be in­cor­rect on the grounds of in­ter­nal log­i­cal in­con­sis­tency, or on the grounds that it would in­volve an in­cor­rect fac­tual state­ment about some ob­jec­tively ex­ist­ing prop­erty of the uni­verse?

• There are no grounds to privilege the nihilistic hypothesis. It’s like asserting that the speed of light is exactly 5,000,000,000 m/s before making the first experiment. I’m ignorant, and I argue that you must be ignorant as well.

(Of course, this situation doesn’t mean that we don’t have some state of knowledge about this fact, but that state of knowledge would have to involve a fair bit of uncertainty. Decision-making is possible without much epistemic confidence or understanding of what’s going on.)

• Could you give an ex­am­ple of a pos­si­ble fu­ture in­sight that would in­val­i­date the nihilis­tic po­si­tion? I hon­estly don’t un­der­stand on what grounds you might be judg­ing “cor­rect­ness” here.

• I think the point is that not valuing non-interacting copies of oneself might be inconsistent. I suspect it’s true: consistency requires valuing parallel copies of ourselves just as we value future variants of ourselves and so preserve our lives. Our future selves also can’t “interact” with our current self.

• The poll in the pre­vi­ous post had to do with a hy­po­thet­i­cal guaran­tee to cre­ate “ex­tra” (non-in­ter­act­ing) copies.

In the situ­a­tion pre­sented here there is noth­ing jus­tify­ing the use of the word “ex­tra”, and it seems analo­gous to quan­tum-lot­tery situ­a­tions that have been dis­cussed pre­vi­ously. I clearly have a rea­son to want the world to be such that (as­sum­ing MWI) as many of my fu­ture selves as pos­si­ble ex­pe­rience a fu­ture that I would want to ex­pe­rience.

As I have ar­gued pre­vi­ously, the term “copy” is mis­lead­ing any­way, on top of which the word “ex­tra” was re­in­forc­ing the con­no­ta­tions linked to copy-as-backup, where in MWI noth­ing of the sort is hap­pen­ing.

So, I’m still per­plexed. Pos­si­bly a clack on my part, mind you.

• Does it make an im­por­tant differ­ence to you that the MWI copies were “go­ing to be there any­way”, hence los­ing them is not fore­go­ing a gain but los­ing some­thing you already had? Is this an ex­am­ple of loss aver­sion?

• I value having a future that accords with my preferences. I am in no way indifferent to your tossing a grenade my way, with a subjective 1/2 probability of dying. (Or non-subjectively, “forcing half of the future into a state where all my plans, ambitions and expectations come to a grievous end.”)

I am, how­ever, in­differ­ent to your tak­ing an ac­tion (cre­at­ing an “ex­tra” non-in­ter­act­ing copy) which has no in­fluence on what fu­ture I will ex­pe­rience.

• Or non-sub­jec­tively, “forc­ing half of the fu­ture into a state where all my plans, am­bi­tions and ex­pec­ta­tions come to a grievous end.”

Well, it’s not re­ally half the fu­ture. It’s half of the fu­ture of this branch, which is it­self only an as­tro­nom­i­cally tiny frac­tion of the pre­sent. The vast ma­jor­ity of the fu­ture already con­tains no Morendil.

• I am, how­ever, in­differ­ent to your tak­ing an ac­tion (cre­at­ing an “ex­tra” non-in­ter­act­ing copy) which has no in­fluence on what fu­ture I will ex­pe­rience.

So you’d be OK with me putting you to sleep, scan­ning your brain, cre­at­ing 1000 copies, then wak­ing them all up and kil­ling all but the origi­nal you? (from a self­ish point of view, that is—imag­ine that all the copies are wo­ken then kil­led in­stantly and painlessly)

• I wouldn’t be happy to ex­pe­rience wak­ing up and re­al­iz­ing that I was a copy about to be snuffed (or even won­der­ing whether I was). So I would pre­fer not to in­flict that on any fu­ture selves.

• Sup­pose that the copies to be snuffed out don’t re­al­ize it. They just wake up and then die, with­out ever re­al­iz­ing. Would it worry you that you might “be” one of them?

• It doesn’t re­ally seem to mat­ter, in that case, that you wake them up at all.

And no, I wouldn’t get very worked up about the fate of such pat­terns (ex­cept in­so­far as I would like them to be pre­served for backup pur­poses).

• as many of my fu­ture selves as pos­si­ble ex­pe­rience a fu­ture that I would want to ex­pe­rience.

Do you mean “as large a frac­tion of” or “as many as pos­si­ble in to­tal”? Be­cause if you kill (se­lec­tively) most of your fu­ture selves, you could end up with the over­whelming ma­jor­ity of those re­main­ing liv­ing very well…

• As cousin_it has ar­gued, “se­lec­tively kil­ling most of my fu­ture selves” is some­thing that I sub­jec­tively ex­pe­rience as “hav­ing a size­able prob­a­bil­ity of dy­ing”. That doesn’t ap­peal.

• Ok, un­der­stood. So would you say that the frac­tion of fu­ture selves (in some copy­ing pro­cess such as MWI) that sur­vive == your sub­jec­tive prob­a­bil­ity of sur­vival?

• Yup.

• I think the situ­a­tion with re­gard to MWI ‘copies’ is differ­ent from that with re­gard to mul­ti­ple copies ex­ist­ing in the same Everett branch, or in a clas­si­cal uni­verse.

I can’t fully ex­plain why, but I think that when, through de­co­her­ence, the uni­verse splits into two or more ‘wor­lds’, each of which are as­signed prob­a­bil­ity weights that (as nearly as makes no differ­ence) be­have like clas­si­cal prob­a­bil­ities, it’s ra­tio­nal to act as though the Copen­hagen in­ter­pre­ta­tion was (as nearly as makes no differ­ence) true. Strictly speak­ing the Copen­hagen in­ter­pre­ta­tion is in­co­her­ent, but still you should act as though, in a “quan­tum suicide” sce­nario, there is prob­a­bil­ity p that you will ‘cease to ex­ist’, rather than a prob­a­bil­ity 1 that a copy of you will go on ex­ist­ing but ‘tagged’ with the in­for­ma­tion ‘norm square am­pli­tude of this copy is [1-p times that of the pre-ex­ist­ing per­son]’.
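The “tagged with norm-squared amplitude” bookkeeping this comment describes can be sketched numerically. The amplitudes below are made-up toy values, chosen only so the squared magnitudes sum to one:

```python
import math

# Toy branch amplitudes after a decoherence event (illustrative values,
# picked so that |a|^2 + |b|^2 = 1).
a = complex(0.6, 0.0)
b = complex(0.0, 0.8)

# Born weights: |amplitude|^2 per branch. These are the "thicknesses"
# that, as nearly as makes no difference, behave like classical probabilities.
weights = [abs(a) ** 2, abs(b) ** 2]

assert all(w >= 0 for w in weights)        # non-negative, like probabilities
assert math.isclose(sum(weights), 1.0)     # total thickness is conserved
```

Acting "as though Copenhagen were true" then just means treating each weight as the probability of finding yourself in that branch.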

My ra­tio­nale is roughly as fol­lows: Sup­pose the uni­verse were gov­erned by laws of physics that were ‘in­de­ter­minis­tic’ in the sense of de­scribing the evolu­tion over time of a clas­si­cal prob­a­bil­ity dis­tri­bu­tion. Then if we want to, we can still pre­tend that there is a mul­ti­verse, that physics is de­ter­minis­tic, that all pos­si­ble wor­lds ex­ist with a cer­tain prob­a­bil­ity den­sity etc. And clearly the differ­ence be­tween a ‘sin­gle uni­verse’ and a ‘mul­ti­verse’ view is ‘meta­phys­i­cal’ in the sense that no ex­per­i­ment can tell them apart. Here I want to be a ver­ifi­ca­tion­ist and say that there is no ‘fact of the mat­ter’ as to which in­ter­pre­ta­tion is true. There­fore, the ques­tion of how to act ra­tio­nally shouldn’t de­pend on this.

When we move from clas­si­cal in­de­ter­minism to quan­tum in­de­ter­minism the prob­a­bil­ities get re­placed by com­plex-val­ued ‘am­pli­tudes’ but for rea­sons I strug­gle to ar­tic­u­late, I think the fact that the uni­verse is ‘nearly’ clas­si­cal means that our pre­scrip­tions for ra­tio­nal ac­tion must be ‘nearly’ the same as they would have been in a clas­si­cal uni­verse.

As for Sly, he should act as though he has luckily survived an event that had a 1/2 chance of killing him (‘once and for all’, ‘irrevocably’ etc). Presumably he would charge you with something, though I’m not sure whether ‘attempted murder’ is what you’d be guilty of.

• And clearly the differ­ence be­tween a ‘sin­gle uni­verse’ and a ‘mul­ti­verse’ view is ‘meta­phys­i­cal’ in the sense that no ex­per­i­ment can tell them apart.

No, sorry, there are in-prin­ci­ple ex­per­i­ments that tell the two apart. For ex­am­ple, with good enough ap­para­tus, you could do the dou­ble-slit ex­per­i­ment with peo­ple. (Cur­rently they are do­ing it with bac­te­ria I be­lieve). You would be able to in­terfere with your­self in other branches in a wave-like way.

• For ex­am­ple, with good enough ap­para­tus, you could do the dou­ble-slit ex­per­i­ment with peo­ple. (Cur­rently they are do­ing it with bac­te­ria I be­lieve). You would be able to in­terfere with your­self in other branches in a wave-like way.

Wait, what? How would you do it with peo­ple or bac­te­ria? Do you have a link to the bac­te­ria ex­per­i­ment? I thought that the differ­ent wor­lds couldn’t in­ter­act; I’m very con­fused by this com­ment.

• How would you do it with peo­ple or bac­te­ria?

It would seem to rely on diffract­ing peo­ple around cor­ners. Sounds tricky. Must be very good equip­ment!

• MWI doesn’t strictly say that the wor­lds don’t in­ter­act. It just says that they are mostly ap­prox­i­mately in­de­pen­dent if they have de­co­hered (it­self a con­tin­u­ous pro­cess).

Ex­per­i­ments in con­trol­led con­di­tions show that small sub-branches can, in fact, in­terfere with each other like waves, hence the dou­ble-slit ex­per­i­ment with elec­trons. But the size of the ob­ject merely con­tributes to the difficulty of the ex­per­i­ment, it seems. So far, large molecules have been used, but in the fu­ture it is planned to use viruses and bac­te­ria. See Toward Quan­tum Su­per­po­si­tion of Liv­ing Or­ganisms. Also note that a quan­tum com­puter is es­sen­tially us­ing com­pu­ta­tion across the mul­ti­verse (though no-one has built a par­tic­u­larly large one of those).

• Is there any rea­son to use viruses and bac­te­ria as op­posed to, say, bac­terium-sized salt crys­tals? Is it to re­fute peo­ple who say: “But if it’s al­ive then per­haps it has mag­i­cal quan­tum prop­er­ties. Be­cause life is mag­i­cal.”

• Is there any rea­son to use viruses and bac­te­ria as op­posed to, say, bac­terium-sized salt crys­tals?

Yes. It is way cooler. Kind of like lev­i­tat­ing frogs with su­per­con­duct­ing mag­nets.

• Well, isn’t the Copenhagen interpretation the statement that life has magic effects on physics, by causing the wavefunction to collapse?

• Hu­man con­scious­ness speci­fi­cally, not just life. Would differ­ent in­ter­pre­ta­tions give differ­ent pre­dic­tions for an ex­per­i­ment with a hu­man in­terfer­ing with him­self in other branches?

• Are you ask­ing about what this would look like to ob­servers on the side, or about the sub­jec­tive ex­pe­rience of the per­son un­der­go­ing in­terfer­ence?

Re­gard­ing the first ques­tion, I don’t think it would be differ­ent in prin­ci­ple from any other hy­po­thet­i­cal ex­per­i­ment with macro­scopic quan­tum in­terfer­ence; how much differ­ent in­ter­pre­ta­tions man­age to ac­count for those is a com­plex ques­tion, but I don’t think pro­po­nents of ei­ther of them would ac­cept the mere fact of ex­per­i­men­tally ob­served macro­scopic in­terfer­ence as falsify­ing their fa­vored in­ter­pre­ta­tion. (Though ar­guably, col­lapse-based in­ter­pre­ta­tions run into ever greater difficul­ties as the largest scales of de­tected quan­tum phe­nom­ena in­crease.)

As for the sec­ond one, I think an­swer­ing that ques­tion would re­quire more knowl­edge about the ex­act na­ture of hu­man con­scious­ness than we presently have. Scott Aaron­son pre­sents some in­ter­est­ing dis­cus­sion along these lines in one of his lec­tures:
http://www.scottaaronson.com/democritus/lec11.html

• Nope. The Copenhagen interpretation says that the wavefunction collapses when it interacts with anything.

But the thing it in­ter­acts with is still part of a larger wave­func­tion un­til that col­lapses etc.

I have yet to work out what the differ­ence is be­tween the Copen­hagen In­ter­pre­ta­tion and the Many Wor­lds In­ter­pre­ta­tion. The phys­i­cal re­al­ity they de­scribe is iden­ti­cal.

• Thanks for the link. I’m still not clear on ex­actly what it would mean to be able to in­terfere with my­self in other branches in a wave-like way. Also, I thought a non-re­versible pro­cess forced de­co­her­ence: is this not cor­rect, or is there a way to force liv­ing or­ganisms to be re­versible?

• If differ­ent wor­lds didn’t in­ter­act, you wouldn’t even get the or­di­nary dou­ble-slit re­sult. With or­di­nary prob­a­bil­ity, you can split off branches with­out a prob­lem, but quan­tum am­pli­tudes can be nega­tive or com­plex, they can can­cel out, etc. You just don’t typ­i­cally see this macro­scop­i­cally due to de­co­her­ence.
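The cancellation being described, which probabilities can never produce, fits in a few lines. This is a toy two-path setup with invented phases, not a model of an actual double-slit apparatus:

```python
import math

# Two paths of equal magnitude arriving at the same detector,
# one with its phase flipped (toy numbers).
path_a = complex(1 / math.sqrt(2), 0)
path_b = -path_a

# Classical: probabilities are non-negative and just add,
# so the two contributions can never cancel.
classical = abs(path_a) ** 2 + abs(path_b) ** 2

# Quantum: amplitudes add *before* squaring, so they can cancel out.
quantum = abs(path_a + path_b) ** 2

assert math.isclose(classical, 1.0)
assert quantum == 0.0
```

Decoherence suppresses exactly this kind of cross-branch term at macroscopic scales, which is why you don't usually see it.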

• Here I want to be a ver­ifi­ca­tion­ist and say that there is no ‘fact of the mat­ter’ as to which in­ter­pre­ta­tion is true.

• In­ter­est­ing. You seem to be say­ing that if the laws of physics ap­peared to be non­de­ter­minis­tic, there would be no way to be sure they don’t ac­tu­ally cre­ate copies in­stead.

I think this is correct, with the caveat that the laws of physics for this universe do not appear to work that way. However, the map is not the territory—not knowing how the laws work does not mean there is no fact of how they work. Even so, assuming that the rational course of action differs between these situations, the best you can do is assign a prior to both (a Solomonoff prior will do, I suppose), and average your actions in some way between them.

It’s pos­si­ble you’re also cor­rect that, in that case, there would be no fact of the mat­ter about it—I’m not quite sure what you meant. If there are mul­ti­ple uni­verses (with differ­ent laws of physics), and clones in other uni­verses count, then in­dex­i­cal un­cer­tainty about which uni­verse you’re in trans­lates di­rectly into, in effect, ex­ist­ing in both. I think.

• At­tempted mur­der, I reckon!

We can’t have people going around throwing grenades at each other—even if they “only” have a 50-50 chance of exploding. This dangerous idiot is clearly in need of treatment and/or serving as a lesson to others.

• (B) if he thanks you for the free \$100, does he ask for another one of those nice free hundred dollar note dispensers? (This is the “quantum suicide” option.)

I laugh in the face of any­one who at­tests to this and doesn’t com­mit armed rob­bery on a reg­u­lar ba­sis. If ‘at least one of my branches will sur­vive’ is your ar­gu­ment, why not go sky­div­ing with­out a parachute? You’ll sur­vive—by defi­ni­tion!

So many of these comments betray people still unable to think of subjective experience as anything other than a ghostly presence sitting outside the quantum world. ‘Well, if this happens in the world, what would I experience?’ If you shoot yourself in the head, you will experience having your brains blown out. The fact that contemplating one’s own annihilation is very difficult is not an excuse for muddling up physics.

• I think the response is that MWI isn’t “infinite plot-threads of fate”—or narrativium, as it’s put in the Discworld novels—quantum decay doesn’t give a whit of care whether its effects are noteworthy for us or not.

On the two ‘far ends’ of the spectrum, I’d expect to see significant plot-decay—a particle causes Hitler to get cancer, he dies halfway through WWII—but I have trouble imagining a situation where a quantum event makes the difference between my motorcycle sticking to the curve and the tire skidding out, leaving my fragile body to skid across the pavement at 120 km/h, leaving a greasy trail that skids out the semi riding behind me.

Quan­tum-grenades are one of the few ex­cep­tions, where small-world events af­fect us here in the mid­dle-world. But I wouldn’t count on MWI to pro­duce a perfect bank rob­bery.

• Very true, and well put. A com­bi­na­tion of quan­tum events could prob­a­bly pro­duce any­thing you wanted, at what­ever van­ish­ingly tiny prob­a­bil­ity. Bear in mind that it’s the con­figu­ra­tion that evolves ev­ery which way, not ‘this par­ti­cle can go here, or here, or here....’ But we’re into Greg Egan ter­ri­tory here.

Suffice it to say that any­one who says they sub­scribe to quan­tum suicide but isn’t ei­ther dead or richer than god is talk­ing out of their bot­tom.

• Suffice it to say that any­one who says they sub­scribe to quan­tum suicide but isn’t ei­ther dead or richer than god is talk­ing out of their bot­tom.

• Or, to be fair, just lacking in motivation or creativity. It may not have occurred to them to isolate a source of accessible quantum randomness, then shoot themselves on every day in which they do not win a lottery.

• Some caveats are in or­der:

I (mostly, prob­a­bly?) find Quan­tum Suicide to be a perfectly rea­son­able op­tion when it works as in­tended—how­ever there are two cases of pos­si­ble-fu­ture-branches which con­cern me.

1) Non-com­plete destruction

While MWI death doesn’t bother me—and in fact, even to­tal death both­ers me less than most peo­ple—cer­tain situ­a­tions ter­rify me. Crip­pling in­jury, per­son­al­ity dis­tort­ing pain (tor­ture), and brain dam­age are deal break­ers. Even if I bumped my as­sess­ment of MWI to 1.00, then I still wouldn’t take a deal which in­volved one Everett branch be­ing tor­tured for 50 years (at least, not with­out some sort of in­cred­ible pay­out).

2) While I don’t worry about my own Everett copies so much, I do value the Ethics of al­ter­nate lines—which is to say that I wouldn’t want my fam­ily left be­hind in 50% of the uni­verses with­out me around due to some sort of MWI ex­per­i­ment (if I die by ac­ci­dent that’s, eth­i­cally speak­ing, not the same).

So for the classic case of Quantum Russian Roulette—where I and the other party have no loved ones to leave behind—I’m fully game. And in other situations where all of my loved ones are utterly destroyed alongside me, I’m also game. But finding the mechanics to create said situations in our day-to-day, semi-technologically evolved world is pretty much impossible.

The only exception is (maybe—I’ll freely admit to not having sufficient background here) the LHC. That argument would go that the reason we’ve had so many difficulties is that the universes where it worked destroyed all humanity. But my question there is how long can we keep trying it without completely destroying ourselves? My guess is that we’d have a finite number of goes at it before no quantum event can stop us—and all Everett branches are dead.

But like I said, I don’t re­ally have the back­ground. I just hope the ex­perts fully ac­knowl­edge the dan­gers (but they prob­a­bly don’t).

• Quan­tum-grenades are one of the few ex­cep­tions, where small-world events af­fect us here in the mid­dle-world. But I wouldn’t count on MWI to pro­duce a perfect bank rob­bery.

Agree, be­cause most of the quan­tum events that de­ter­mine the suc­cess of the rob­bery are ones that hap­pened quite a while back. We’re prob­a­bly already in a branch where the suc­cess or failure is very nearly de­ter­mined at this point in time.

• (A) Sly charges me with attempted murder. Sly does not think that his counterpart in the other Everett branch is non-interacting with him—actually, Sly, after reading LW a lot, thinks that he has a “causal” responsibility, going back in time, for whether his counterpart lives in the other branch, and therefore for how well the other branch fares. (If we transform this grenade into a counterfactual-mugging grenade that has an option to not be “probably lethal” conditional on Sly’s refusing the \$100, Sly has a causal effect on whether he lives in the other branch. With a lethal-only quantum grenade, Sly is left with the responsibility to consistently ban uses of the grenade, so he should refuse the \$100 and press charges anyway—producing such grenades is “sick”.)

• All these com­ments and no one ques­tions MWI? Is there an­other thread for that?

What does MWI explain that can’t be explained without it? Name even a gedanken experiment that gives a different result in an MWI universe than it does in a universe with either 1) randomly determined but genuinely unique quantum collapses, or 2) quantum collapses determined uniquely by a force or method we do not yet know about.

Having said all that, Sly has plenty of reason to dislike me, even to want me arrested for attempted murder, even without MWI. In a world in which I am arrested and stopped from my odd habit of gambling with other people’s lives, Sly and the other people he values have a higher probability of surviving in his subjective timeline. He knows that people who have done something once tend to do it again with much higher probability than people who have never done it. This inference has been bred into us because it was true enough to aid the survival of our ancestors, especially in our socially competitive groups, so it makes good sense to go with it.

• All these com­ments and no one ques­tions MWI? Is there an­other thread for that?

See the se­quences, speci­fi­cally the quan­tum physics se­quence. I found it by click­ing the “Se­quences” link in the up­per right of the page, then read­ing the table of con­tents, and click­ing through.

• I would place 0 value on cre­at­ing iden­ti­cal non-in­ter­act­ing copies of my­self. How­ever, I would place a nega­tive value on cre­at­ing copies of my loved ones who were suffer­ing be­cause I got blown up by a grenade. If Sly is us­ing the same rea­son­ing, I think he should charge me with at­tempted mur­der.

• Least con­ve­nient pos­si­ble world: he has no loved ones.

• D) He will thank you, and charge you with attempted murder to minimize the risk of being mangled by a non-100%-kill grenade.

BTW, I’ve some thoughts on an­thropic trilemma, maybe they’ll be of use.

• Least con­ve­nient pos­si­ble world: the grenade is ex­tremely re­li­able, moreso than the car he drives or the wa­ter he drinks.

• And MWI is true. And Sly knows that it is true. If Sly has no relatives then B), of course. If he has, then he should charge you with murder, as the court probably knows too that MWI is true.

EDIT: And if Sly thinks that his future experiences are based on a Solomonoff prior, then he will charge you with attempted murder, as he has a relatively big chance of ending up mangled in a not-so-least-convenient possible world.

• as he has rel­a­tively big chance to end up man­gled in not so least con­ve­nient pos­si­ble world.

Not if he only ac­cepts a few grenades, it’s only a fac­tor of 0.5 for each grenade.
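The "factor of 0.5 for each grenade" compounds, which is why "a few" matters. A minimal sketch:

```python
def surviving_fraction(grenades, p_kill=0.5):
    """Fraction of Everett branches in which Sly survives every grenade."""
    return (1 - p_kill) ** grenades

assert surviving_fraction(1) == 0.5
assert surviving_fraction(3) == 0.125
# Ten grenades already leave fewer than 0.1% of branches alive.
assert surviving_fraction(10) < 0.001
```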

• This looks un­finished? At any rate I’m not get­ting the point.

• I reject “3” (We ought to value both kinds of copies the same way), but don’t think that it is arbitrary at all. Rather it is based on an important aspect of our moral values called “Separability.” Separability is, in my view, an extremely important moral intuition, but it is one that is not frequently discussed or thought about because we encounter situations where it applies very infrequently. Many Less Wrongers, however, have expressed the intuition of separability when stating that they don’t think that non-causally-connected parallel universes should affect their behavior.

Separa­bil­ity ba­si­cally says that how con­nected some­one is to cer­tain events mat­ters morally in cer­tain ways. There is some de­bate as to whether this prin­ci­ple is a ba­sic moral in­tu­ition, or whether it can be de­rived from other in­tu­itions, I am firmly in fa­vor of the former.

That prob­a­bly sounds rather ab­stract, so let me give a con­crete ex­am­ple: Imag­ine that the gov­ern­ment is con­sid­er­ing tak­ing an ac­tion that will de­stroy a unique ecosys­tem. There are mil­lions of en­vi­ron­men­tal­ists who op­pose this ac­tion, protest against it, and lobby to stop it. Should their prefer­ence for the ecosys­tem to not be de­stroyed be taken into con­sid­er­a­tion when calcu­lat­ing the util­ity of this situ­a­tion? Have they, in a sense, been harmed if the ecosys­tem is de­stroyed? I’d say yes, and I think a lot of peo­ple would agree with me.

Now imag­ine that in a dis­tant galaxy there ex­ist ap­prox­i­mately 90 quadrillion alien brain em­u­la­tors liv­ing in a Ma­tri­oshka Brain. All these aliens are fer­vent en­vi­ron­men­tal­ists and have a strong prefer­ence that no unique ecosys­tem ever be de­stroyed. As­sume we will never meet these aliens. Should their prefer­ence for the ecosys­tem to not be de­stroyed be taken into con­sid­er­a­tion when calcu­lat­ing the util­ity of this situ­a­tion? Have they, in a sense, been harmed if the ecosys­tem is de­stroyed? I’d say no, even if Omega told me they ex­isted.

What makes these two situ­a­tions differ­ent? I would say that in the first situ­a­tion the en­vi­ron­men­tal­ists pos­sess strong causal con­nec­tions to the ecosys­tem in ques­tion, while the aliens do not. For this rea­son the en­vi­ron­men­tal­ists’ prefer­ences were morally rele­vant, the aliens’ not so.

Separability is really essential for utilitarianism to avoid paralysis. After all, if everyone’s desires count equally when evaluating the morality of situations, regardless of how connected they are to them, then there is no way of knowing if you are doing right or not. Somewhere in the universe there is doubtless a vast number of people who would prefer you not do whatever it is you are doing.

So how does this ap­ply to the ques­tion of cre­at­ing copies in my own uni­verse, ver­sus de­siring a copy of me in an­other uni­verse not be de­stroyed by a quan­tum grenade?

Well, on the issue of whether or not to create identical copies in my own universe: I would not spend a cent trying to do that. I believe in everything Eliezer wrote in In Praise of Boredom and place great value on having new, unique experiences. Creating lockstep copies of me would be counterproductive, to say the least.

How­ever, at first this ap­proach seems to run into trou­ble in MWI. If there are so many par­allel uni­verses it stands to rea­son that I’ll be du­pli­cat­ing an ex­pe­rience some other me has already had no mat­ter what I do. For­tu­nately, the Prin­ci­ple of Separa­bil­ity al­lows me to res­cue my val­ues. Since all those other wor­lds lack any causal con­nec­tion to me, they are not rele­vant in de­ter­min­ing whether I am liv­ing up to the Value of Bore­dom.

This al­lows us to ex­plain why I am up­set when the grenade is thrown at me. The copy that was kil­led had no causal con­nec­tion to me. Noth­ing I or any­one else did re­sulted in his cre­ation, and I can­not re­ally in­ter­act with him. So when I as­sess the bad­ness of his death, I do not in­clude my de­sire to have unique, nondu­pli­cated ex­pe­riences in my as­sess­ment. All that mat­ters is that he was kil­led.

So rejecting (3) does not make our values arbitrary, not in the slightest. There is an extremely important moral principle behind doing so, a moral principle that is essential to our system of ethics: namely, the Principle of Separability.

• You say that “sep­a­ra­bil­ity is re­ally es­sen­tial for util­i­tar­i­anism to avoid paral­y­sis” but also that it “is not fre­quently dis­cussed or thought about be­cause we en­counter situ­a­tions where it ap­plies very in­fre­quently.”

I have trou­ble un­der­stand­ing how both of these can be true. If situ­a­tions where it ap­plies are very in­fre­quent, how es­sen­tial can it re­ally be?

To avoid paral­y­sis, util­i­tar­i­ans need some way of re­solv­ing in­ter­sub­jec­tive differ­ences in util­ity calcu­la­tion for the same shared world-state. Us­ing “sep­a­ra­bil­ity” to dis­count the un­know­able util­ity calcu­la­tions of un­known Ma­tri­oshka Brains is a neg­ligible por­tion of the work that needs to be done here.

For my own part, I would spend con­sid­er­ably more than a cent to cre­ate an iden­ti­cal copy of my­self whom I can in­ter­act with, be­cause the ex­pe­rience of in­ter­act­ing with an iden­ti­cal but non-colo­cal­ized ver­sion of my­self would be novel and in­ter­est­ing, and also be­cause I sus­pect that we would both get net value out of the al­li­ance.

Iden­ti­cal copies I can’t in­ter­act with di­rectly are less valuable, but I’d still spend a fair amount to cre­ate one, be­cause I would ex­pect them to differ­en­tially cre­ate things in the world I value, just as I do my­self.

Iden­ti­cal copies I can’t in­ter­act with even in­di­rectly—noth­ing they do or don’t do will af­fect my life—I care about much much less, more due to self­ish­ness than any kind of ab­stract prin­ci­ple of sep­a­ra­bil­ity. What’s in it for me?

• I have trou­ble un­der­stand­ing how both of these can be true. If situ­a­tions where it ap­plies are very in­fre­quent, how es­sen­tial can it re­ally be?

What I should have said is “When dis­cussing or think­ing about moral­ity we con­sider situ­a­tions where it ap­plies very in­fre­quently.” When peo­ple think about moral­ity, and posit moral dilem­mas, they typ­i­cally only con­sider situ­a­tions where ev­ery­one in­volved is ca­pa­ble of in­ter­act­ing. When peo­ple con­sider the Trol­ley Prob­lem they only con­sider the six peo­ple on the tracks and the one per­son with the switch.

I sup­pose that tech­ni­cally sep­a­ra­bil­ity ap­plies to ev­ery de­ci­sion we make. For ev­ery ac­tion we take there is a pos­si­bil­ity that some­one, some­where does not ap­prove of our tak­ing it and would stop us if they could. This is es­pe­cially true if the uni­verse is as vast as we now think it is. So we need sep­a­ra­bil­ity in or­der to dis­count the de­sires of those ex­tremely causally dis­tant peo­ple.

To avoid paral­y­sis, util­i­tar­i­ans need some way of re­solv­ing in­ter­sub­jec­tive differ­ences in util­ity calcu­la­tion for the same shared world-state. Us­ing “sep­a­ra­bil­ity” to dis­count the un­know­able util­ity calcu­la­tions of un­known Ma­tri­oshka Brains is a neg­ligible por­tion of the work that needs to be done here.

You are cer­tainly right that sep­a­ra­bil­ity isn’t the only thing that util­i­tar­i­anism needs to avoid paral­y­sis, and that there are other is­sues that it needs to re­solve be­fore it even gets to the stage where sep­a­ra­bil­ity is needed. I’m merely say­ing that, at that par­tic­u­lar stage, sep­a­ra­bil­ity is es­sen­tial. It cer­tainly isn’t the only pos­si­ble way util­i­tar­i­anism could be par­a­lyzed, or oth­er­wise run into prob­lems.

For my own part, I would spend con­sid­er­ably more than a cent to cre­ate an iden­ti­cal copy of my­self whom I can in­ter­act with

When I re­fer to iden­ti­cal copies I mean a copy that starts out iden­ti­cal to me, and re­mains iden­ti­cal through­out its en­tire lifes­pan, like the copies that ex­ist in par­allel uni­verses, or the ones in this ma­trix-sce­nario Wei Dai de­scribes. You ap­pear to also be us­ing “iden­ti­cal” to re­fer to copies that start out iden­ti­cal, but di­verge later and have differ­ent ex­pe­riences.

Like you, I would prob­a­bly pay to cre­ate copies I could in­ter­act with, but I’m not sure how en­thu­si­as­tic about it I would be. This is be­cause I find ex­pe­riences to be much more valuable if I can re­mem­ber them af­ter­wards and com­pare them to other ex­pe­riences. If both mes get net value out of the ex­pe­rience like you ex­pect then this isn’t a rele­vant con­cern. But I cer­tainly wouldn’t con­sider hav­ing 3650 copies of me ex­ist­ing for one day and then be­ing deleted to be equiv­a­lent to liv­ing an ex­tra 10 years the way Robin Han­son ap­pears to.

• If you think care­fully about Descartes’ “I think there­fore I am” type skep­ti­cism, and ap­proach your stream of sen­sory ob­ser­va­tions from such a skep­ti­cal point of view, you should note that if you re­ally were just one branch-line in a per­son-tree, it would feel ex­actly the same as if you were a unique per­son-line through time, be­cause look­ing back­wards, a tree looks like a line, and your mem­ory can only look back­wards.

I like this way of ex­plain­ing how MWI all adds up to nor­mal­ity, and I’ll use it in fu­ture dis­cus­sions.

• What if I sim­ply don’t trust the many wor­lds in­ter­pre­ta­tion that much?

• I think that I can be con­sis­tent with charg­ing you with at­tempted mur­der.

In your scenario, if the grenade does not come out in my favor, this particular instance of me will be dead. The fact that a bunch of copies collect \$100 is of little value to the copy that my subjective experience occupied.

For instance, if Omega came up to me right now and said that he just slew some copies of me in other lines, then it is unclear how that event has affected me. Likewise if I die, and Omega tells my other copies, it seems like it is only this subjective branch that suffers.

So be­cause the grenade can af­fect the cur­rent branch that I ex­pe­rience, I can ob­ject.

That’s what I think, anyway; I may have misunderstood everything.

Also: I was very sur­prised to be the sub­ject of a post. It has been in­ter­est­ing. =)

EDIT: Wouldn’t the grenade thought ex­per­i­ment be more ac­cu­rate if the grenade only kil­led or gave out \$100 to copies when thrown at me? The fact that it in­ter­acts with me and not just copies of me is where I get a dis­con­nect.

• the copy that my sub­jec­tive ex­pe­rience oc­cu­pied.

Mag­i­cal ex­t­ra­phys­i­cal sub­jec­tive ex­pe­rience fact-of-the-mat­ter any­one?

• How is it mag­i­cal? Or ex­tra-phys­i­cal?

All it re­quires is that the copy that sur­vives is not the me that got an­nihilated in the grenade. I do not think this re­quires magic.

Like I said though, I may be mi­s­un­der­stand­ing some­thing. In that case I would ap­pre­ci­ate it if it were ex­plained bet­ter.

• I should do a post on this, rather than com­ments I think.

• BTW, I don’t understand why it is taken for granted that Sly’s thread of subjective experience in the branch where the grenade exploded will merge with the thread of subjective experience in the other branch.

The question of assigning probabilities to subjective anticipations is still open. Thus, it’s possible that after the grenade explosion Sly will experience being an infinite chain of dying Boltzmann brains, and not being a happy owner of \$100.

EDIT: Using a Solomonoff prior, I think he will wake up in a hospital, as that is one of the simplest continuations of his prior experiences.

• Hav­ing read Den­nett, I reach the con­clu­sion that the third horn of the ‘an­thropic trilemma’ is ob­vi­ously the cor­rect one. There is no such thing as a thread of sub­jec­tive ex­pe­rience. There is no ‘fact of the mat­ter’ as to whether ‘you’ will die in a tele­porter or ‘find your­self’ at your des­ti­na­tion.

After the grenade explodes with probability 1/2, we can say that with probability 1/2 there is a dead Sly and with probability 1/2 there is a relieved Sly whose immediate memories consist of discussing the ‘deal’ with Roko, agreeing to go ahead with it, then waiting anxiously. There is no reason whatsoever to believe in a further, epiphenomenal fact about what happened to Sly’s subjective experience.

That there ‘seems’ to be a thread of sub­jec­tive ex­pe­rience shouldn’t give us any more pause than the fact that the Earth ‘seems’ to be mo­tion­less—in both cases we can ex­plain why it must seem that way with­out as­sum­ing it to be true.

• After the grenade explodes with probability 1/2, we can say that with probability 1/2 there is a dead Sly and with probability 1/2 there is a relieved Sly

But quan­tum me­chan­ics tells us, for sure, that there are both!
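The bookkeeping behind that claim can be sketched in a few lines (a hypothetical Python illustration; the equal amplitudes are an assumption chosen to match the grenade’s even split, and the key point is only that the branch weights are conserved, not that one branch “happens”):

```python
import math

# Hypothetical two-branch split with equal amplitudes 1/sqrt(2).
# Both branches exist; each carries a Born weight |amplitude|^2,
# the "thickness" of that branch.
amplitudes = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # dead Sly, relieved Sly
branch_weights = [abs(a) ** 2 for a in amplitudes]  # [0.5, 0.5]

# The split conserves total weight: gathered into a bundle, the
# branches are as thick as the trunk they came from.
total = sum(branch_weights)  # 1.0
```

Nothing here assigns a probability to “which branch you end up in”; it only records that the two weights exist and sum to the trunk’s weight.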

• Hav­ing read Den­nett, I reach the con­clu­sion that the third horn of the ‘an­thropic trilemma’ is ob­vi­ously the cor­rect one.

The third horn of the an­thropic trilemma is to deny that there is any mean­ingful sense what­so­ever in which you can an­ti­ci­pate be­ing your­self in five sec­onds, rather than Brit­ney Spears; to deny that self­ish­ness is co­her­ently pos­si­ble; to as­sert that you can hurl your­self off a cliff with­out fear, be­cause who­ever hits the ground will be an­other per­son not par­tic­u­larly con­nected to you by any such ridicu­lous thing as a “thread of sub­jec­tive ex­pe­rience”.

• To loosely para­phrase Charles Bab­bage: ‘I am not able rightly to ap­pre­hend the kind of con­fu­sion of ideas that could pro­voke such an ar­gu­ment.’ I don’t be­lieve Eliezer was think­ing very clearly when he wrote that post.

• I don’t be­lieve Eliezer was think­ing very clearly when he wrote that post.

I agree. That is the one post of Eliezer’s that I can think of that seems to be just con­fused (when he prob­a­bly didn’t need to be).

• Can you share your way of “dis­con­fu­sion”?

I’ve made a copy of myself. After that I will find myself either as the original, or as the copy, but both these subjective states will correspond to one physical state. It seems there is something beyond the physical state.

I have an idea I’ve already par­tially de­scribed in other posts, that al­lows me to re­main re­duc­tion­ist. But I won­der how oth­ers deal with that.

Edit: Physical state refers to the physical state of the world, not to the physical state of a particular copy.

• I’ve made a copy of myself. After that I will find myself either as the original, or as the copy, but both these subjective states will correspond to one physical state. It seems there is something beyond the physical state.

Er, aren’t those the same sub­jec­tive state as well?

• Original-I will see that I still stand/lie at the scanner end of the copying apparatus. Copy-I will see that I have “teleported” to the construction chamber of the copying apparatus. One’s current experience is a part of one’s subjective state, isn’t it?

If the scanner and construction chambers are identical, then my subjective state “splits” when Original-I and Copy-I leave their chambers (note that the “in-chamber” states are one subjective state).

• Th­ese differ­ent sub­jec­tive states cor­re­spond to differ­ent phys­i­cal states: differ­ent pat­terns of pho­tons im­p­inge on your reti­nas, caus­ing differ­ent neu­ral ac­tivity in your vi­sual cor­tex, and so forth.

• I’ve left room for misinterpretation in my root post. I meant the world’s physical state, not the state of a particular copy. World state: original-body exists and copy-body exists. Subjective state: either I am the original, or I am the copy.

• As far as I can see both those sub­jec­tive states ex­ist si­mul­ta­neously, it’s not an “ei­ther or”.

You-be­fore-copy­ing wakes up as both origi­nal-you and copy-you af­ter the copy­ing. From there the sub­jec­tive states di­verge. Analo­gously, in the grenade ex­am­ple be­fore-grenade you con­tinues as both dead-you and you-with-\$100*

*(who doesn’t ex­pe­rience any­thing, un­less there’s an af­ter­life)

**(as well as all other pos­si­ble quan­tum-yous. But that’s an­other is­sue en­tirely)

I sus­pect I’m rather miss­ing the point. What point are you in fact try­ing to make if I may ask?

• Yes, both sub­jec­tive states ex­ist. It is not the point.

You wake up after copying; at what point should you experience being both copy and original?

Before you open your eyes, you can’t know whether you are the copy or the original, but that is not an experience of being both, because the physical states of both brains at this point are identical to the state of a brain that wasn’t copied at all. After you open your eyes you will find yourself being either the copy or the original, but again not both, for obvious reasons.

*(who doesn’t ex­pe­rience any­thing, un­less there’s an af­ter­life)

That is not true if subjective experience isn’t ontologically fundamental, as the existence of e.g. a particular kind of Boltzmann brain seems in that case sufficient for a continuation of subjective experience (a very unpleasant experience in this case).

• Before you open your eyes, you can’t know whether you are the copy or the original, but that is not an experience of being both, because the physical states of both brains at this point are identical to the state of a brain that wasn’t copied at all. After you open your eyes you will find yourself being either the copy or the original, but again not both, for obvious reasons.

Again, what point are you ac­tu­ally try­ing to make here?

That is not true if subjective experience isn’t ontologically fundamental, as the existence of e.g. a particular kind of Boltzmann brain seems in that case sufficient for a continuation of subjective experience (a very unpleasant experience in this case).

A con­tinu­a­tion of sub­jec­tive ex­pe­rience af­ter death is an af­ter­life.

• Again, what point are you ac­tu­ally try­ing to make here?

Did you read “An­thropic trilemma”? I am try­ing to

1. Con­vince some­one that Eliezer Yud­kowsky wasn’t that con­fused, or at least that he had a rea­son to be.

2. Check whether I am wrong and there is a satisfactory answer to the trilemma, or whether the question “What will I experience next?” has no meaning/​doesn’t matter.

A con­tinu­a­tion of sub­jec­tive ex­pe­rience af­ter death is an af­ter­life.

As far as I know, the word afterlife implies dualism, which is not the case here.

• This has noth­ing to do with ‘split­ting’ per se. If your point was valid then you could make it equally well by say­ing:

A and B are differ­ent peo­ple in the same uni­verse. World state: “A ex­ists and B ex­ists”. Sub­jec­tive state: “Either A is think­ing or B is think­ing.” Same phys­i­cal state, differ­ent sub­jec­tive states. There­fore, “it seems there is some­thing be­yond the phys­i­cal state”.

But this ‘ei­ther or’ busi­ness is non­sense. A and B are both think­ing. You and copy are both think­ing. What’s the big deal?

(Ap­par­ently, you think the uni­verse is some­thing like in the film Aliens where in ad­di­tion to what­ever’s ac­tu­ally hap­pen­ing, there is a “bank of screens” some­where show­ing ev­ery­one’s points of view. And then af­ter you split, “your screen” must ei­ther show the origi­nal’s point of view or else it must show the copy’s.)

• If you’re say­ing that you don’t have sub­jec­tive ex­pe­riences, I’ll bite the bul­let, and will not trust your view on the mat­ter. How­ever I doubt that you want me to think so. What is sub­jec­tive ex­pe­rience or “one’s screen”, as you put it, to you?

• Of course we have sub­jec­tive ex­pe­rience: it’s just that both copies of you have it, and there is no spe­cial flame of con­scious­ness that goes to one but not the other. After the copy, both copies re­mem­ber be­ing the origi­nal. They’re both “you”.

• They are both me for an external observer. But there’s no subjective experience of being both copies. Imagine yourself being copied… Copying… Done. Now you find yourself standing either in the scanner chamber, knowing that there’s another “you” in the construction chamber, or in the construction chamber, knowing that there’s another “you” in the scanner chamber. If you think that you’ll experience something unimaginable, you need to clarify what causes your/​the copy’s brain to create that unimaginable experience.

• But there’s no subjective experience of being both copies. Imagine yourself being copied… Copying… Done. Now you find yourself standing either in the scanner chamber, knowing that there’s another “you” in the construction chamber, or in the construction chamber, knowing that there’s another “you” in the scanner chamber.

The prob­lem is with the word “you,” which usu­ally refers to one spe­cific mind. In this case, when I am copied, the end re­sult will be two iden­ti­cal minds, each of which will have iden­ti­cal mem­o­ries and con­ti­nu­ity with the past. The self in the scan­ner cham­ber will find it­self with the sub­jec­tive ex­pe­rience of look­ing at the copy in the con­struc­tion cham­ber, and the self in the con­struc­tion cham­ber will find it­self with the sub­jec­tive ex­pe­rience of look­ing at the copy in the scan­ner cham­ber. They are both equally “you”, but from that point on they will have sep­a­rate ex­pe­riences.

• “You” in this context is ambiguous only for an external observer. Both minds will know whom “you” refers to. Right?

Your rephrasing seems to dissociate the situation from subjective experience; I can’t see how it helps, however. It will not be “the self” standing in the scanner/​construction chamber, it will be you (not “you”) standing there. Once again: they are both equally “you” for an external observer, but you will not be an external observer.

• When you say “you”, mind 1 will think of mind 1, and mind 2 will think of mind 2. One en­tity and one sub­jec­tive ex­pe­rience has split into two sep­a­rate ones. They are both you.

If you prefer: your subjective experience stops when the copy is created. Two new entities appear and start having subjective experiences based on your past. There is no more you. It makes more sense to me to think of them both as me, but it’s the same thing. The past you no longer exists, so you can’t ask what happens to him.

I think your prob­lem is with the word “you”. The ques­tion is “what hap­pens next af­ter the split?” Well, what hap­pens to who? Mind 1 starts hav­ing one sub­jec­tive ex­pe­rience, and Mind 2 starts hav­ing a slightly differ­ent one. It’s tricky, be­cause there’s a dis­con­ti­nu­ity at the point of the split, and all our as­sump­tions about per­sonal iden­tity are based on not hav­ing such a dis­con­ti­nu­ity.

• Two new entities appear and start having subjective experiences based on your past. There is no more you. It makes more sense to me to think of them both as me, but it’s the same thing. The past you no longer exists, so you can’t ask what happens to him.

And even without copying, the “past you” no longer exists in that sense. If we agree that subjective experiences are unambiguously determined by physical processes in the brain, then it must be clear that the creation of a copy doesn’t create any special conditions in subjective experience, as the processes in both brains evolve in the usual manner, but with different inputs.

It’s tricky, be­cause there’s a dis­con­ti­nu­ity at the point of the split, and all our as­sump­tions about per­sonal iden­tity are based on not hav­ing such a dis­con­ti­nu­ity.

There is, er, uncertainty in our expectations, yes. But I can’t see the discontinuity you are speaking of. Which entity is discontinuous at the point of copying?

Edit: Just for information: I am aware that the input to the copy-brain is discontinuous in a sense (compare it with sleep).

• And even without copying, the “past you” no longer exists in that sense.

Yes, ex­actly. But nor­mally we have a no­tion of per­sonal iden­tity as a straight line through time, a path with no forks.

If we agree that subjective experiences are unambiguously determined by physical processes in the brain, then it must be clear that the creation of a copy doesn’t create any special conditions in subjective experience, as the processes in both brains evolve in the usual manner, but with different inputs.

True. From the point of view of each copy, noth­ing seems any differ­ent, and ex­pe­rience goes on un­in­ter­rupted. The cre­ation of a copy cre­ates a spe­cial con­di­tion in per­sonal iden­tity, how­ever, be­cause there’s no longer just one “you”, so ask­ing which one “you” will be af­ter the copy no longer makes sense.

But I can’t see the discontinuity you are speaking of. Which entity is discontinuous at the point of copying?

“You.” Your per­sonal iden­tity. In­stead of be­ing like a line through time with no forks, it splits into two.

• The cre­ation of a copy cre­ates a spe­cial con­di­tion in per­sonal iden­tity, how­ever, be­cause there’s no longer just one “you”, so ask­ing which one “you” will be af­ter the copy no longer makes sense.

I don’t understand. Do you mean that personal identity is something that exists independently of the subjective experience of being oneself? The external observer again?

Sorry, but I see that you (unintentionally?) try to “jump out” of yourself when anticipating post-copy experience.

How about another copying setup? You stand in the scanner chamber, but instead of making an immediate copy, the copying apparatus stores the scanned data for a year and then makes a copy of you. When does personal identity split in this setup? What do you expect to experience after scanning? After the copy is made? And if a copy is never made?

The point I am trying to make is that there is no such thing as personal identity detached from your subjective experience of being you. So it can’t be discontinuous.

Edit: I have a preliminary resolution of the anthropic trilemma, but I was hesitant to put it out, as I was trying to check whether I am wrong about the need to resolve it. So I propose that subjective experience has a one-to-one correspondence not to the physical state of a system capable of having subjective experience, but to a set of systems in possible worlds which are invariant under information-preserving substrate changes. This definition is far from perfect, of course, but at least it partially resolves the anthropic trilemma.

• How about another copying setup? You stand in the scanner chamber, but instead of making an immediate copy, the copying apparatus stores the scanned data for a year and then makes a copy of you. When does personal identity split in this setup? What do you expect to experience after scanning? After the copy is made? And if a copy is never made?

After scan­ning, noth­ing un­usual. You’re still stand­ing in the cham­ber. You ask what you will ex­pe­rience af­ter the copy is made. After the copy is made, two peo­ple have the sub­jec­tive ex­pe­rience of be­ing “you”. One of them will ex­pe­rience a for­ward jump in time of a year. They are both equally you.

The point I am trying to make is that there is no such thing as personal identity detached from your subjective experience of being you. So it can’t be discontinuous.

The dis­con­ti­nu­ity is with the word “you”. Each copy has a con­tin­u­ous sub­jec­tive ex­pe­rience. But once there’s two copies, the word “you” sud­denly be­comes am­bigu­ous.

• the word “you” sud­denly be­comes am­bigu­ous.

…for an external observer. Original-you still knows that “you” refers to original-you; copy-you knows that “you” refers to copy-you.

Sorry, I am unable to put it any other way. Maybe we have incompatible priors.

• Original-you still knows that “you” refers to original-you; copy-you knows that “you” refers to copy-you.

I agree with this state­ment. So what’s the prob­lem?

• The problem is to compute the probability of becoming the you that refers to original-you, and the probability of becoming the you that refers to copy-you. No one has resolved this problem yet.

Edit: So you can’t be sure that you will not become a you that refers to me, for example.

• Be­cause it’s a mean­ingless ques­tion to ask what hap­pens to the origi­nal’s sub­jec­tive ex­pe­rience af­ter the copies are made. There is no flame or spirit that prob­a­bil­is­ti­cally shifts from you to one copy or the other. It’s not that you have a 50% chance of be­ing copy A and a 50% chance of be­ing copy B. It’s that both copies are you, and each of them will view them­selves as your con­tinu­a­tion. Your sub­jec­tive ex­pe­rience will split and con­tinue into both.

The in­ter­est­ing ques­tion is how to value what hap­pens to the copies. I can’t quite bring my­self to al­low one copy to be tor­tured for −1000 utiles and have the other re­warded for +1001, even though, if we value them evenly, this is a gain of 0.5 utiles. I’m not sure if this is a cog­ni­tive bias or not.
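The trade-off in that last paragraph can be made explicit (a toy Python sketch using the comment’s hypothetical utile figures; the equal weighting of the two copies is the assumption being questioned):

```python
# Toy expected-utility calculation over two equally weighted copies,
# using the comment's hypothetical figures: one copy tortured at
# -1000 utiles, the other rewarded at +1001 utiles.
weights = [0.5, 0.5]          # value each copy evenly
utilities = [-1000, 1001]     # torture vs. reward
expected = sum(w * u for w, u in zip(weights, utilities))
# expected == 0.5, a net gain under even weighting
```

The discomfort the commenter describes is exactly the suspicion that this linear averaging over copies may not capture what we actually value.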

• The in­ter­est­ing ques­tion is how to value what hap­pens to the copies.

On what basis do you restrict your evaluation to those two particular copies? The universe is huge. And it doesn’t matter when a copy exists (as we agreed earlier). There can be any number of Boltzmann brains which continue your current subjective experience.

Edit: You can’t eval­u­ate any­thing if you can’t an­ti­ci­pate what hap­pens next or if you an­ti­ci­pate that ev­ery­thing that can hap­pen will hap­pen to you.

• Not sure I understand exactly. We don’t know whether the universe is Huge, or just how Huge it is. If Tegmark’s hypothesis is correct, the only universes that exist may be ones that correspond to certain mathematical structures, and these structures may be ones with specific physical regularities that make Boltzmann brains extremely unlikely.

We don’t seem to no­tice any Boltz­mann-brain-like ac­tivity, which may be ev­i­dence that they are very rare.

• Here is a relevant post. And if Tegmark’s hypothesis is true, then you don’t need Boltzmann brains. There are infinitely many continuations of subjective experience, as there are infinitely many universes which have the same physical laws as our universe but distinct initial conditions. For example, there is a universe with initial conditions identical to the current state of our universe, except for the color of your room’s wallpaper.

Edit: spel­ling.

• Can you share your way of “dis­con­fu­sion”?

Right now, definitely not. That would involve re-immersing myself in the issue, reviewing several posts’ threads of comments, and recreating all those thoughts that my brain oh so thoughtfully chose not to maintain in memory in the absence of sufficient repetition. If I had made my post back then, taken the extra steps of putting words to an explanation that would be comprehensible to others, then I would most likely not have lost those thoughts. (Note to self....)

• Sorry? Does it mean that we can assign probabilities to future experiences? Or does it mean that we can’t do it, but nevertheless shouldn’t expect to be a Boltzmann brain in 5 seconds?

• A blank map doesn’t imply a blank territory. Did you read what I wrote on this? There is a consistent (as far as I can see) way to deal with subjective experiences.

• Ob­jec­tions:

1. Isn’t it possible (at least non-contradictory) that there could be a universe whose minimum description length is infinite? And even a brain within that universe with infinite minimum description length? If this universe contains intelligent beings having witty conversations and doing science and stuff, do we really want to just flat out deny that these beings are conscious (and/​or deny that such a universe is metaphysically possible)?

2. There isn’t always a fact of the mat­ter as to whether a be­ing is con­scious and if so, what it’s con­scious of. For the former, note that the ques­tion of when a foe­tus starts to have ex­pe­riences is ob­vi­ously in­de­ter­mi­nate. For the lat­ter, con­sider Den­nett’s dis­tinc­tion be­tween ‘Or­wellian’ and ‘Stal­i­nesque’ re­vi­sions. Nei­ther is there always a fact of the mat­ter as to how many minds are pre­sent (con­sider a split brain pa­tient). Don’t these con­sid­er­a­tions un­der­mine the idea of us­ing the Solomonoff prior? (If we don’t know (a) whether there are ex­pe­riences here at all, (b) what ex­pe­riences there are or even (c) whether this smaller ob­ject or this larger ob­ject counts as a ‘sin­gle mind’ then how on earth can we talk mean­ingfully about how the ‘thread’ of sub­jec­tivity is likely to con­tinue?)

1. Well, they would need infinite processing power to effectively use their brains’ content. And infinite processing power is a very strange thing. This situation lies outside the area of applicability of my proposal.

2. My proposal neither discusses nor uses the emergence/​existence of subjective experience. If something has subjective experiences, and it has the experience of having subjective experience, then this something can use a Solomonoff prior to infer anticipations of future experiences.
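As a rough illustration of what such an inference could look like (a deliberate simplification: the actual Solomonoff prior sums over all programs and is uncomputable; the description lengths below are invented for the example, with each candidate continuation weighted as 2^-k for k extra bits of description):

```python
# Illustrative stand-in for Solomonoff-style weighting of candidate
# continuations of one's experience. A continuation whose shortest
# description needs k extra bits gets raw weight 2^-k.
# NOTE: the bit counts here are invented for the example.
continuations = {
    "wake_in_hospital": 10,        # simple continuation of prior experience
    "boltzmann_brain_chain": 40,   # requires specifying a vast coincidence
}
raw = {name: 2.0 ** -k for name, k in continuations.items()}
total = sum(raw.values())
anticipation = {name: w / total for name, w in raw.items()}
# "wake_in_hospital" receives almost all the anticipation weight,
# matching the earlier comment's intuition about simplest continuations.
```

This is only a toy: the point is the shape of the calculation (exponential penalty for description length, then normalization), not the particular numbers.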