I’ve contemplated writing a post about the same subject of “big world immortality” (could we call it BWI for short?) myself, but mostly focusing on this part: “There is nothing good in it either, because most of my surviving branches will be very old and ill. But we could make QI work for us if we combine it with cryonics. Just sign up for it, or even have the idea to sign up, and most likely you will find yourself in a surviving branch where you are resurrected after cryostasis. (The same is true for digital immortality: record more about yourself and a future FAI will resurrect you, and QI raises the chances of it.)”
It seems to me that we should be very pessimistic about the future because of QI/BWI. After all, what guarantee is there that you will wake up in a friendly world, or that the AI that resurrects you is friendly? Should we be worried about this? What could we do to increase the likelihood that we’ll find ourselves in a comfortable future?
I’m very confused about this myself. It seems to me, too, that there’s a significant chance that QI is true, but there are objections, of course: the inventor of the mathematical universe hypothesis, Max Tegmark, disputes it himself in his 2014 book, arguing that “infinitely big” and “infinitely small” don’t actually exist and QI will therefore not work. I have no idea if this makes sense or not. There are also attempts to rid physics of somewhat related ideas such as Boltzmann brains.
It’s even more confusing since I’m not really interested in immortality myself. Normally I would be mildly enthusiastic about “ordinary” ways of life extension, but avoid things such as cryonics. With QI, I don’t know. Now that this post is here, I hope people will share their thoughts.
If I am resurrected, I expect with 90 percent probability that the AI doing it will be friendly. Why would a UFAI be interested in resurrecting me? Just to punish me? Or to test its ideas about the end of the world in a simulation? In that case it would simulate me from my birth.
Anyway, signing up for cryonics is the best way to escape the eternal suffering of bad quantum immortality in a very old body.
I don’t understand Tegmark’s objection. We don’t need an infinite world for BWI, just a very big one, big enough to contain many copies of me.
BWI will help me survive even if I am a Boltzmann brain right now. I will die the next moment, but in another world, where I am a part of a real world, I will continue to exist, so the same logic as in BWI may be applied.
I still think that BWI is too speculative to be used in actual decision making. I also think that one’s enthusiasm about death prevention may depend on the urgency of the situation: if there is a fire in a house, everybody in it will be very enthusiastic about saving their lives.
If I am resurrected, I expect with 90 percent probability that the AI doing it will be friendly. Why would a UFAI be interested in resurrecting me? Just to punish me?
Maybe; there’s a certain scenario, for instance, that for a time wasn’t allowed to be mentioned on LW (not anymore, I suppose). In any case, the ratio of UFAIs to FAIs is also important; even if few UFAIs care about resurrecting you, they can be much more numerous than FAIs.
Or to test its ideas about the end of the world in a simulation? In that case it would simulate me from my birth.
This is actually what I would suppose to be most common. In which case we’re back to the enormously prolonged old age scenario, I suppose.
I don’t understand Tegmark’s objection. We don’t need an infinite world for BWI, just a very big one, big enough to contain many copies of me.
Basically, I think you’re right. Either Tegmark hasn’t thought about this enough, or he believes that it would shrink the size of our big world enormously. Kudos to him for devoting a chapter of a popular science book to the subject, though.
I still think that BWI is too speculative to be used in actual decision making.
Why do you think that it’s so speculative? MWI has a lot of support on LW and among people working on quantum foundations; cosmic inflation has basically universal acceptance among physicists (and alternatives, such as Steinhardt’s ekpyrotic cosmology, have basically the same implications in this regard); string theory is very plausible; Tegmark’s mathematical universe is what I would call speculative, but even it makes a lot of sense; and patternism, the other necessary ingredient, is again almost universally accepted on LW.
I also think that one’s enthusiasm about death prevention may depend on the urgency of the situation: if there is a fire in a house, everybody in it will be very enthusiastic about saving their lives.
Probably. But as humans we’re basically built to strive to survive in a situation like that, meaning that judgment under those conditions is likely pretty severely impaired.
Now we can speak about RB freely. I mostly think that the mild version is true, that is, good people will be rewarded more, but with no punishment or suffering. I know some people who independently came to the idea that a future AI will reward them. As for me, I’m not afraid of any version of RB, as I did a lot to promote the ideas of AI safety.
I still don’t get Tegmark’s idea; maybe I need to go back to his book.
For example, we could live in a simulation with an afterlife, in which suicide is punished.
If we strongly believed in BWI, we could build a universal desire-fulfillment machine: just connect any desired outcome to a bomb, so that it explodes if our goal is not reached. But I am skeptical about all beliefs in general, which is probably also a shared idea on LW )) I will not risk permanent injury or death if I have a chance to survive without it. But I could imagine situations where I would change my mind, if the real danger outweighed my uncertainty about BWI.
For example, if someone has cancer, he may prefer an operation with a 20 percent chance of a positive outcome over chemo with a 40 percent chance of a positive outcome but a slow and painful decline in case of failure. In this case BWI gives him a large chance of becoming completely illness-free.
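The arithmetic behind this example can be made explicit with a toy model. Under BWI, the patient only ever experiences branches in which he survives, so the relevant quantity is P(healthy | survival) rather than the raw success rate. The branch structure below (operation failure means immediate death; chemo failure means surviving but ill) is my own illustrative assumption, not anything established in the thread:

```python
# Toy model of decision making under big-world immortality (BWI):
# the observer only experiences branches in which they survive, so we
# compare P(healthy | survival) across the two treatments.

def conditional_on_survival(branches):
    """branches: list of (probability, survives, healthy) tuples.

    Returns the probability of being healthy conditional on survival."""
    survive_mass = sum(p for p, survives, _ in branches if survives)
    healthy_mass = sum(p for p, survives, healthy in branches
                       if survives and healthy)
    return healthy_mass / survive_mass

# Operation: 20% success (healthy), 80% immediate death.
operation = [(0.20, True, True), (0.80, False, False)]

# Chemo: 40% success, 60% slow decline (still alive, but ill).
chemo = [(0.40, True, True), (0.60, True, False)]

print(conditional_on_survival(operation))  # 1.0 -- every surviving branch is healthy
print(conditional_on_survival(chemo))      # 0.4 -- most surviving branches are ill
```

With these assumed numbers the operation scores 1.0 while chemo scores only 0.4, which is why BWI reasoning, if taken seriously, would flip the ordinary preference between them.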
This thread is not about values, but I think that values exist only inside human beings. An abstract rational agent may have no values at all, because it may prove that any value is just a logical mistake.