Quantum immortality: Is the decline of measure compensated by merging timelines?

I wrote an article about quantum immortality, which, I know, is a controversial topic, and I would like to get comments on it. The interesting twist suggested in the article is the idea of measure increase, which could compensate for the declining measure in quantum immortality. (There are other topics in the article, like the history of QM, its relation to multiverse immortality, the utility of cryonics, the impossibility of euthanasia, and the relation of QI to different decision theories.)

The standard argument against quantum immortality in MWI runs as follows. One should calculate expected utility by multiplying the expected gain by the measure of existence (roughly equal to one's share of the world's timelines). In that case, if someone expects to win $10,000 in a quantum suicide lottery with a 0.01 chance of survival, her actual expected utility is $100 (ignoring the negative utility of death). So the rule of thumb is that measure declines very quickly over a series of quantum suicide experiments, and thus this improbable timeline should be ignored. This can be written as U(total) = mU, where m is the measure and U is the expected win in the lottery.
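A minimal sketch of this bookkeeping in Python (the function name is mine; the numbers are the ones from the lottery above):

```python
def measure_weighted_utility(measure: float, payoff: float) -> float:
    """U(total) = m * U: weight the payoff by the surviving measure."""
    return measure * payoff

# The quantum suicide lottery above: win $10,000 with 0.01 surviving measure.
print(measure_weighted_utility(0.01, 10_000))  # -> 100.0
```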

However, if everything possible exists in the multiverse, there are many pseudo-copies of me which differ from me in just a few bits: for example, they have a different phone number or a different random childhood memory. The difference is small, but just large enough not to regard them as my copies.

Imagine that this different childhood memory is 1 kb in size (if compressed). Now, one morning, both I and all my pseudo-copies forget this memory, and we all become exactly the same copies. In some sense, our timelines have merged. This could be interpreted as a jump in my measure, which will be as high as 2^1024 ≈ 10^308 (treating 1 kb as 1024 bits). If I use the equation U(total) = mU, I get an extreme jump in my utility: for example, if I have $100 and my measure has just increased trillions of trillions of times, I supposedly get the same utility as if I had become a mega-multi-trillionaire.
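To make the arithmetic explicit (a sketch only, and it assumes the 1 kb memory is counted as 1024 independent bits, as above):

```python
import math

bits_forgotten = 1024
print(bits_forgotten * math.log10(2))  # ~308.25: the measure grows ~10^308 times

# Plugging into U(total) = m * U with $100 of current wealth as the payoff:
measure_multiplier = 2 ** bits_forgotten
print(measure_multiplier * 100)        # an astronomically large "expected utility"
```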

As a result of this absurd conclusion, I could spend the evening hitting my head with a stone, losing more and more memories and thus gaining higher and higher measure. This is obviously absurd behaviour for a human being, but it could be a failure mode for an AI which uses this equation to calculate expected utility.

In the case of the quantum suicide experiment, I can add to the bomb, which kills me with 0.5 probability, a laser which (if I survive) kills just one neuron in my brain, which, let's assume, is equal to forgetting 1 bit of information. In that case, QS halves my measure, but forgetting one bit doubles it, so the two effects cancel. Obviously, if I play the game for too long, the laser will damage my brain, but brain cells die so often in an aging brain anyway (millions a day) that the damage will be completely unobservable.
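A toy version of this per-round bookkeeping (a sketch under the stated assumption that one lost neuron equals one forgotten bit):

```python
def round_measure(measure: float, survival_prob: float, bits_forgotten: int) -> float:
    # The bomb multiplies measure by the survival probability; the merge
    # after forgetting n bits multiplies it by 2**n.
    return measure * survival_prob * (2 ** bits_forgotten)

m = 1.0
for _ in range(10):               # ten rounds of the bomb-plus-laser game
    m = round_measure(m, 0.5, 1)  # halved by the bomb, doubled by the merge
print(m)                          # -> 1.0: measure is exactly conserved
```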

BTW, Pereira suggested a similar idea as an anthropic argument against the existence of any superintelligence: https://arxiv.org/abs/1705.03078