I’ve been anesthetized twice. I don’t remember any dreams whatsoever, but upon waking I had the distant feeling that I had dreamed (though those dreams may have happened as the drug was loosening its hold).
But you still experience things when you sleep, hence you are still observing. Also, if you’re correct, quantum insomnia should exist, but it doesn’t.
I don’t see how a Boltzmann brain spontaneously forming could ever be more likely than a natural brain existing in a universe with all the infrastructure necessary to support it. Even if that infrastructure beats some amazing odds to arise, it only has to maintain itself afterward. The theory further requires that mind unification be true.
And I don’t see how a death being “natural” makes it OK.
That’s not what I said (though it is a good reason to be suspicious of attempts to remove it). I’ll just leave it at this: I have some philosophical opinions which lead me to believe it is not annihilation.
Also, the baseball example is not a natural phenomenon. If it were, I’d consider it rational to accept it as a good thing.
Actually, I just realized there’s no reason you would remain conscious in QI. Surely the damage to your brain and body would put you into a coma—a fate I’d like to avoid, but definitely better than Literally Hell.
Also, what is all this talk about suicide? All I said was that I plan to die normally. You guys are reading weird things into that...
...and here’s about when I realize what a mistake it was to set foot in LessWrong again for answers.
To be clear: your argument is that every human being who has ever lived may suffer eternally after death, and there are good reasons for not caring...?
That requires an answer that, at the very least, you should be able to put in your own words. How does our subjective suffering improve anything in the worlds where you die?
Yes, you either lose or you win. Two choices.
No, it isn’t. The same thing will happen to everyone in your branch (you don’t see it, of course, but it will subjectively happen to them).
Perhaps you don’t understand what the argument says. You, as in the person you are right now, are going to experience that. Not an infinitesimal proportion of other ‘yous’ while the majority die: your own subjective experience, 100% of it.
Why wouldn’t it create random minds if it’s trying to grab as much ‘human-space’ as possible?
EDIT: Why focus on the potential of quantum immortality at all? There’s no special reason to focus on what happens when we *die*, in terms of AI simulation.
By “essentially impossible” I meant “extremely improbable”. The word “essentially” was meant to distinguish this from “physically impossible”.
I don’t see how it refutes the possibility of QI, then.
There is a useful distinction between knowing the meaning of an idea and knowing its truth. I’m disagreeing with the claim that “all of our measure is going into those branches where we survive”, understood in the sense that only those branches have moral value (see What Are Probabilities, Anyway?), and in particular that the other branches taken together have less value. See the posts linked from the grandparent comment for a more detailed discussion (I’ve edited it a bit).
This meaning could be different from the one you intend, in which case I’m not understanding your claim correctly and am only disagreeing with my incorrect interpretation of it. But in that case, what I fail to understand is what you mean by “all of our measure is going into those branches where we survive”, not whether that claim is true in the sense you intend; the latter would require me to know the intended meaning first, at which point it becomes possible for me to fail to understand its truth.
According to QI, we (as in our internal subjective experience) will continue on only in branches where we stay alive. Since I care about my subjective internal experience, I wouldn’t want it to suffer (if you disagree, press a hot clothes iron to your arm and you’ll see what I mean).
Why? Surely they’re trying to rescue us. Maintaining the simulation would take away resources from grabbing even more human-measure.
The meaning of “you will always find” has a connotation of certainty or high probability, but we are specifically talking about essentially impossible outcomes.
Why? Nothing is technically impossible in quantum mechanics. It is indeed possible for every single atom of our planet to spontaneously disappear.
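To make the ‘possible but essentially impossible’ distinction concrete, here is a rough schematic of my own (illustrative numbers, not anything from the thread): if each of the planet’s roughly $N$ atoms must independently undergo a quantum event of tiny probability $p \ll 1$, the joint probability is

$$P \sim p^{N}, \qquad N \sim 10^{50},$$

which is nonzero but so small that it should never happen even once in the history of the observable universe. ‘Technically possible’ and ‘worth expecting’ come apart completely at these scales.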
This could make sense as a risk on the dust speck side of a torture-vs-dust-specks tradeoff, but conditioning on survival seems to be just wrong as a way of formulating values.
You’re not understanding that all of our measure is going into those branches where we survive.
But you wanted someone who adheres to MWI, and that is not me.
That would be optimal, but I still would like to hear your thoughts.
Some thoughts from Sean Carroll on the topic of Quantum Immortality:
And this one from Scott Aaronson:
Celebrity or not, both are quite likely to reply to a polite yet anxious email, since they can actually relate to your worries, if maybe not on the same topic.
Unfortunately, neither of them seems to grasp the argument. The whole point of it is that, as a conscious being, you cannot experience any outcome where you die. So even if your survival is ridiculously improbable in the universal wavefunction, you can’t ‘wake up dead’. Hence you will always find your subjective self in that improbable branch.
Another terrible thought: what if it doesn’t depend on you dying as a whole? What if no part of your consciousness can be removed or can degrade?
EDIT: Sleep doesn’t refute that, as there is no real proof that you experience less when unconscious (rather, you may simply not be self-aware). But it would imply that people with brain damage are P-zombies, so that seems untenable.
the second is “unification of identical experiences”
I disagree. Quantum Immortality can still exist without it; only this supposition of the AI ‘rescuing you’ requires it. Also, if AIs are trying to grab as many humans as possible, there’s no special reason to focus on dying ones. They could just simulate all sorts of brain states with memories and varied experiences, and then immediately shut down the simulation.
If we assume that we cannot apply self-locating belief to our experience of time (and assume AIs are indeed doing this), we should expect at every moment to enter an AI-dominated world. If we can apply self-locating beliefs, then the simulation would almost certainly have already been shut down and we would be in that world. Since we aren’t, there’s no reason to suppose that these AIs exist or that they can ‘grab a share of our souls’ at all.
The question is, can we apply self-locating belief to our experience of time?
and the third one is that we could ignore the decline of measure corresponding to survival in MWI
How would measure affect this? If you’re forced to follow certain paths due to not existing in any others, then why does it matter how much measure those paths have?
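A toy calculation may sharpen where the two sides diverge (my illustration, assuming an idealized repeated quantum-suicide setup with per-trial survival probability $p$):

$$\mu_n = p^{\,n}, \qquad \lim_{n \to \infty} \mu_n = 0, \qquad \Pr(\text{subjectively alive} \mid \text{survived all } n \text{ trials}) = 1.$$

The QI reading conditions on survival and gets certainty; the measure-weighted reading notes that the surviving branch’s weight $\mu_n$ shrinks toward zero. Whether that vanishing weight should matter to you is exactly what is in dispute.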
But wait, doesn’t that require the computational theory of mind and ‘unification’ of identical experiences? If they don’t hold, then we can’t go into other universes regardless of whether MWI is true (if they do, then we could even if MWI is false). I would have to already be simulated, and if I am, then there’s no reason to suppose it is by the sort of AI you describe.
Your suggestion was based on the assumption of an AI doing it, correct? It isn’t something we can naturally fall into? Also, even if all your other assumptions are true, why suppose that ‘semi-evil’ AIs, which you apparently think have low measure, take the lion’s share of highly degraded experiences? Why wouldn’t a friendly (or at least friendlier) AI try to rescue them?
What was that talk about ‘stable but improbable’ worlds? If someone cares enough to revive me (I assume my measure would mostly enter universes where I was being simulated), then that doesn’t seem likely. I also can’t fathom that an AI wanting to torture humans would take up a more-than-tiny share of such universes. Do you think such things are likely, or is it that their downsides are so bad that they must be figured into the utilitarian calculus?
What about Tegmark’s argument that dying would have to be a binary event in order for you to experience immortality? If dying is instead gradual, wouldn’t your consciousness just dissolve? Or can no iota of consciousness be lost?
You’ve been basilisked.
Yes, but how plausible are such scenarios considered to be if I die naturally? I don’t find AI superintelligence very plausible.
What about that talk of being ‘locked in a very unlikely but stable world’? Where is he getting that from?
There is no empirical evidence for MWI, but a number of physicists do believe that it may be related to reality, perhaps with some heavy modifications, since, as stated, it conflicts with General Relativity. Sean Carroll, an expert in both Quantum Mechanics and General Relativity, is one of them; consider reading his blog. His latest article, about the current (pitiful) state of fundamental research in Quantum Mechanics, can be found in the New York Times. His book on the topic is coming out in a couple of days, and it is guaranteed to be a highly entertaining and insightful read, which might also alleviate some of your worries.
Thanks, but I need someone who specifically addresses quantum immortality. Or better yet, a non-celebrity physicist who I can talk to.
EDIT: You claim here to have a PhD in Physics, so aren’t you at least as qualified?
However, in the current situation of rapid technological progress, such eternal suffering is unlikely, as within 100 years some life-extending and pain-reducing technologies will appear. Or, if our civilization crashes, some aliens (or the owners of the simulation) will eventually bring pain-reduction techniques.
What if I don’t agree?
If you are having thoughts about non-existence, it may be some form of suicidal ideation, which could be a side effect of antidepressants or of bad circumstances. I had it, and I am happy that it is in the past. If such ideation persists, seek professional help.
I only meant that I plan to die naturally, with no attempt at cryogenic freezing. I’ve no wish to die before my natural lifespan ends.
While death is impossible in the QI setup, a partial death is still possible, when a person forgets those parts of themselves which want to die. Partial death has already happened many times to the average adult, who has forgotten their childhood personality.
I’m afraid I don’t understand what you’re saying here.