[Question] Looking for answers about quantum immortality.

I’ve recently been obsessing over the risks of quantum torment, and in the course of my research downloaded this article: https://philpapers.org/rec/TURFAA-3

Here’s a quote:

“4.3 Long-term inescapable suffering is possible

If death is impossible, someone could be locked into a very bad situation where she can’t die, but also can’t become healthy again. It is unlikely that such an improbable state of mind will exist for too long a period, like millennia, as when the probability of survival becomes very small, strange survival scenarios will dominate (called “low measure marginalization” by Almond (2010)). One such scenario might be aliens arriving with a cure for the illness, but more likely, the suffering person will find herself in a simulation or resurrected by superintelligence in our world, perhaps following the use of cryonics.

Aranyosi summarized the problem: “David Lewis’s point that there is a terrifying corollary to the argument, namely, that we should expect to live forever in a crippled, more and more damaged state, that barely sustains life. This is the prospect of eternal quantum torment” (Aranyosi 2012; Lewis 2004). The idea of outcomes infinitely worse than death for the whole of humanity was explored by Daniel (2017), who called them “s-risks”. If MI is true and there is no high-tech escape on the horizon, everyone will experience his own personal hell.

Aranyosi suggested a comforting corollary (Aranyosi 2012), based on the idea that multiverse immortality requires not remaining in the “alive state”, but remaining in the conscious state, and thus damage to the brain should not be very high. It means, according to Aranyosi, that being in the nearest vicinity of death is less probable than being in just “the vicinity of the vicinity”: the difference is akin to the difference between constant agony and short-term health improvement. However, it is well known that chronic states of ill health exist which don’t affect consciousness, e.g. cancer, whole-body paralysis, depression, and locked-in syndrome. These bad outcomes become less probable for people living in the 21st century, as developments in medical technology increase the number of possible futures in which any disease can be cured, or where a person will be put in cryostasis, or wake up in the next level of a nested simulation. Aranyosi suggested several other reasons why eternal suffering is less probable:

1) Early escape from a bad situation: “According to my line of thought, you should rather expect to always luckily avoid life-threatening events in infinitely many such crossing attempts, by not being hit (too hard) by a car to begin with. That is so because according to my argument the branching of the world, relevant from the subjective perspective, takes place earlier than it does according to Lewis. According to him, it takes place just before the moment of death; according to my reasoning it takes place just before the moment of losing consciousness” (Aranyosi 2012, p. 255).

2) Limits of suffering. “The more damage your brain suffers, the less you are able to suffer” (Aranyosi 2012, p. 257).

3) Inability to remember suffering. “Emergence from coma or the vegetative state followed by amnesia is not an eternal life of suffering, but rather one extremely brief moment of possibly painful self-awareness – call it the ‘Momentary Life’ scenario” (Aranyosi 2012, p. 257).

4.4 Bad infinities and bad circles

Multiverse immortality may cause one to be locked into a very stable but improbable world – much like the scenario in the episode “White Christmas” of the TV series “Black Mirror” (Watkins 2014), in which a character is locked into a simulation of a room for a subjective 30 million years. Another bad option is a circular chain of observer-moments. Multiverse immortality does not require that the “next” moment will be in the actual future, especially in a timeless universe, where all moments are equally actual. Thus a “Groundhog Day” scenario becomes possible. The circle could be very short, like several seconds, in which a dying consciousness repeatedly returns to the same state as several seconds ago, and as it doesn’t have any future moments it resets to the last similar moment. Surely, this could happen only in a very narrow state of consciousness, where the internal clock and memory are damaged.”

Look, I’m not at all knowledgeable in these matters (besides having read Permutation City and The Finale of the Ultimate Meta Mega Crossover). Based on what I’ve read online on the possibility of quantum immortality, I don’t think it is probable, and quantum torment even less so. But there’s something about a published article giving serious consideration to us suffering eternally or going through ‘The Jaunt’ from that Stephen King story which is creating a nice little panic attack (in addition to the already scary David Lewis article).

I plan to die and have no intention of signing up for cryonics. (EDIT: This meant dying naturally. I have no desire to expedite the process; it’s just that I’m not on board with the techno-immortalism popular around here.) All I want to know is: is this stuff just being pulled out of his butt? Like, an extremely unlikely hypothetical that nonetheless carries huge negative utility? I’d be okay with that, as I’m not a utilitarian. Or have these scenarios actually been considered plausible by AI theorists?

I’m also desperate to get in contact with someone who’s studied quantum mechanics and can answer questions of this nature. An actual physicist (especially a believer in MWI) would be great. I’d think an understanding of neuroscience would also be very important for analyzing the risks, but how many people have studied both fields? With some exceptions, the only ones I see discussing it are philosophers.

I’m in a bad place right now; any help would go a long way.