Since I didn’t see it brought up on a skim: one reason some of my physicist friends and I aren’t that concerned about vacuum decay is many-worlds. Since the decay is triggered by quantum tunneling and propagates at light speed, it’d wipe out Earth in one wavefunction branch, with amplitude roughly equal to the amplitude of the tunneling event, while the decay just never happens in the other branches. Since we can’t experience being dead, this wouldn’t really affect our anticipated future experiences in any way. The vacuum would just never decay from our perspective.
So, if the vacuum were confirmed to be very likely metastable, and the projected base rate of collapses were confirmed to be high enough that decay ought to have happened many times already, we’d have accidentally stumbled into a natural and extremely clean experimental setup for testing quantum immortality.
The random fluctuations in macroscopic chaotic systems, like Plinko or a well-flipped coin in air, can be just as fundamentally quantum as vacuum decay through tunneling. So by this argument, you’d be unconcerned about getting into a machine that flips a coin and shoots you on tails. Bad idea.
No, because getting shot has a lot of outcomes that do not kill you but do cripple you. Vacuum decay should tend to have extremely few of those. It’s also instant, alleviating any lingering concerns about identity one might have in a setup where death is slow and gradual. It’s also synchronised to split off everyone hit by it into the same branch, whereas, say, a very high-yield bomb wired to a random number generator that uses atmospheric noise would split you off into a branch away from your friends.[1]
I’m not unconcerned about vacuum decay, mind you. It’s not as though quantum immortality were confirmed, with all its implications worked out mathematically.[2]
They’re still there for you, of course, but you aren’t there for most of them, because in the majority of their anticipated experience, you explode.
Sometimes I think about the potential engineering applications of quantum immortality in a mature civilisation for fun. Controlled, synchronised civilisation-wide suicide seems like a neat way to transform many engineering problems into measurement problems.
Such thought experiments also serve as a solution of sorts to the Fermi paradox, and as a rationalization of the sci-fi trope of sufficiently advanced civilizations “ascending”.
I don’t think so. You only need one alien civilisation in our light cone to have preferences about the shape of the universal wave function rather than their own subjective experience for our light cone to get eaten. E.g. a paperclip maximiser might want to do this.
Also, the Fermi paradox isn’t really a thing.
Vacuum decay is fast but not instant, and there will almost certainly be branches where it maims you and then reverses. Likewise, you can make suicide machines very reliable and fast. It’s unreasonable to think any of these mechanical details matter.
It expands at light speed. That’s fast enough that no computational processing can possibly occur before we’re dead. Sure, there are branches where it maims us and then stops, but those are incredibly subdominant compared to branches where the tunneling doesn’t happen.
Yes, you can make suicide machines very reliable and fast. I claim that whether your proposed suicide machine actually is reliable does in fact matter for determining whether you are likely to find yourself maimed. Making suicide machines that are synchronised earth-wide seems very difficult with current technology.
No, vacuum decay generally expands at sub-light speed.
How sub-light? I was mostly just guessing here, but if it’s below like 0.95c I’d be surprised.
I could be wrong, but from what I’ve read, the domain wall should have mass, so it must travel below light speed. However, the energy difference between the two vacuums would exert a large force on the wall, rapidly accelerating it to very close to light speed. Collisions with stars and gravitational effects might cause further weirdness, but ignoring that, I think after a while we basically expect constant acceleration, meaning that light signals emitted inside the bubble from more than a certain distance behind the wall would never catch up with it. So yeah, definitely above 0.95c.
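For intuition on how quickly that approach to c happens, here’s a minimal numerical sketch, with a made-up acceleration value rather than anything from the vacuum-decay literature: a wall with constant proper acceleration $a$, starting from rest, has lab-frame speed $v(t) = at/\sqrt{1 + (at/c)^2}$, and light emitted from more than $c^2/a$ behind it never catches up.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def wall_speed(a: float, t: float) -> float:
    """Lab-frame speed at lab time t of a wall with constant proper
    acceleration a, starting from rest: v = a*t / sqrt(1 + (a*t/c)^2)."""
    at = a * t
    return at / math.sqrt(1.0 + (at / C) ** 2)

a = 1e10 * 9.81  # hypothetical acceleration of 1e10 g, purely for illustration

for t in (1e-3, 1.0, 3600.0):  # a millisecond, a second, an hour
    print(f"t = {t:9.3e} s  ->  v/c = {wall_speed(a, t) / C:.12f}")

# Rindler horizon: under constant proper acceleration, light starting
# more than c^2/a behind the wall never catches up with it.
print(f"horizon distance c^2/a = {C**2 / a:.3e} m")
```

With these toy numbers the wall is already at v/c ≈ 0.999995 within a second, which is the sense in which “definitely above 0.95c” is a mild claim.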
I’d also be surprised.
It seems like you’re assuming a value system where the ratio of positive to negative experience matters but where the ratio of positive to null (dead timelines) experiences doesn’t matter. I don’t think that’s the right way to salvage the human utility function, personally.
I don’t think Lucius is claiming we’d be happy about it. Maybe the ‘no anticipated impact’ framing carries that implicit claim, I guess.
There may be a sense in which amplitude is a finite resource. Decay your branch enough, and your future anticipated experience might come to be dominated by some alien with higher amplitude simulating you, or even just by your inner product with quantum noise in a more mainline branch of the wave function. At that point, you lose pretty much all ability to control your future anticipated experience. Which seems very bad. This is a barrier I ran into when thinking about ways to use quantum immortality to cheat heat death.
The assumption that being totally dead/being aerosolised/being decayed vacuum can’t be a future experience is unprovable. Panpsychism should be our null hypothesis[1], and there never has and never can be any direct measurement of consciousness that could take us away from the null hypothesis.
Which is to say, I believe it’s possible to be dead.
The negation, that there’s something special about humans that makes them eligible to experience, is clearly held up by a conflation of having experiences with reporting experiences, plus the fact that humans are the only things that report anything.
It’s the old argument by Epicurus from his letter to Menoeceus:

“Death, therefore, the most awful of evils, is nothing to us, seeing that, when we are, death is not come, and, when death is come, we are not.”
I have preferences about how things are after I stop existing. Mostly about other people, whom I love and, at times, want there to be more of.
I am not an epicurean, and I am somewhat skeptical of the reality of epicureans.
Exactly. That’s also why it’s bad for humanity to be replaced by AIs after we die: We don’t want it to happen.
You are assuming MW, and assuming a form where consciousness hops around between decoherent branches. The standard argument against quantum immortality applies: we don’t experience being very old and having survived against the odds multiple times. In fact, quantum immortality makes a mockery of the odds: you should have a high subjective probability of finding yourself in a low objective probability universe.
That’s a mistaken way of thinking about anticipated experience; see here:
I don’t think anything in the linked passage conflicts with my model of anticipated experience. My claim is not that the branch where everyone dies doesn’t exist. Of course it exists. It just isn’t very relevant for our future observations.
To briefly factor out the quantum physics here, because it doesn’t actually matter much:
If someone tells me that they will create a copy of me while I’m anesthetized and unconscious, and put one of me in a room with red walls, and another of me in a room with blue walls, my anticipated experience is that I will wake up to see red walls with p=0.5 and blue walls with p=0.5. Because the set of people who will wake up and remember being me and getting anesthetized has size 2 now, and until I look at the walls I won’t know which of them I am.
If someone tells me that they will create a copy of me while I’m asleep, but they won’t copy the brain, making it functionally just a corpse, then put the corpse in a room with red walls, and me in a room with blue walls, my anticipated experience is that I will wake up to see blue walls with p=1.0. Because the set of people who will wake up and remember being me and going to sleep has size 1. There is no chance of me ‘being’ the corpse any more than there is a chance of me ‘being’ a rock. If the copy does include a brain, but the brain gets blown up with a bomb before the anaesthesia wears off, that doesn’t change anything. I’d see blue walls with p=1.0, not see blue walls with p=0.5 and ‘not experience anything’ with p=0.5.
The same basic principle applies to the copies of you that are constantly created as the wavefunction decoheres. The probability math in that case is slightly different, because you’re dealing with uncertainty over a vector space rather than uncertainty over a set, so what matters is the squares of the amplitudes of the branches that contain versions of you. E.g., if there are three branches, one in which you die, amplitude ≈0.8944, one in which you wake up to see red walls, amplitude ≈0.2828, and one in which you wake up to see blue walls, amplitude ≈0.3464, you’d see blue walls with probability $p = \frac{0.3464^2}{0.3464^2 + 0.2828^2} = 0.6$ and red walls with probability $p = \frac{0.2828^2}{0.3464^2 + 0.2828^2} = 0.4$.[1]
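To make that arithmetic explicit, here’s a minimal sketch of the conditional Born-rule calculation, using the amplitudes from the example above:

```python
import numpy as np

# Branch amplitudes from the example: die, wake to red walls, wake to blue walls.
amplitudes = np.array([0.8944, 0.2828, 0.3464])
labels = ["die", "red walls", "blue walls"]

weights = amplitudes ** 2  # Born rule: branch weight = |amplitude|^2
print(f"total weight: {weights.sum():.4f}")  # ~1.0: the branches exhaust the state

# Anticipated experience conditions on the branches that contain a future you,
# i.e. renormalise over the surviving branches only.
surviving = weights[1:]
for label, p in zip(labels[1:], surviving / surviving.sum()):
    print(f"P(see {label} | survive) = {p:.2f}")  # red 0.40, blue 0.60
```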
If you start making up scenarios that involve both wave function decoherence and having classical copies of you created, you’re dealing with probabilities over vector spaces and probabilities over sets at the same time. At that point, you probably want to use density matrices to do calculations.
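As a toy illustration of that last point, entirely my own construction with made-up numbers: classical copying shows up as a probabilistic mixture, branching as a superposition, and both can be folded into one density matrix from which outcome probabilities are read off with projectors.

```python
import numpy as np

# Basis states: |red> = waking to red walls, |blue> = waking to blue walls.
red = np.array([1.0, 0.0])
blue = np.array([0.0, 1.0])

# Copy A went through a quantum branch-split: sqrt(0.4)|red> + sqrt(0.6)|blue>.
psi_a = np.sqrt(0.4) * red + np.sqrt(0.6) * blue
# Copy B was classically placed in the blue room with certainty.
psi_b = blue

# Classical uncertainty over which copy you are (1/2 each) gives a
# mixture of the two pure states:
rho = 0.5 * np.outer(psi_a, psi_a) + 0.5 * np.outer(psi_b, psi_b)

# Probability of an observation = trace(density matrix @ projector).
P_blue = np.outer(blue, blue)
print(f"P(see blue walls) = {np.trace(rho @ P_blue):.2f}")  # 0.5*0.6 + 0.5*1.0 = 0.80
```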
That’s like dying in your sleep. Presumably you strongly don’t want that to happen, no matter your opinion on parallel worlds. Dying in your sleep, then, is bad simply because you don’t want it to happen. Vacuum decay is bad for the same reason.