If I am resurrected, I expect with 90 per cent probability that the AI doing it will be friendly. Why would a UFAI be interested in resurrecting me? Just to punish me?
Maybe; there’s a certain scenario, for instance, that for a time wasn’t allowed to be mentioned on LW (not anymore, I suppose). In any case, the ratio of UFAIs to FAIs is also important; even if few UFAIs care about resurrecting you, they can be much more numerous than FAIs.
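To make that concrete, here is a toy calculation in Python. Every number in it is an illustrative assumption of mine, not an estimate, but it shows how the population ratio can dominate even when few UFAIs care about resurrection:

```python
# Toy model: probability that whoever resurrects you is friendly.
# All numbers below are made-up assumptions for illustration only.
n_fai = 1            # hypothetical number of FAIs in our big world
n_ufai = 100         # hypothetical number of UFAIs (far more numerous)
p_fai_cares = 0.9    # assumed chance a given FAI resurrects you
p_ufai_cares = 0.05  # assumed chance a given UFAI bothers to

friendly = n_fai * p_fai_cares
unfriendly = n_ufai * p_ufai_cares
p_friendly = friendly / (friendly + unfriendly)
print(f"P(resurrector is friendly) = {p_friendly:.2f}")  # ~0.15
```

Even with only a 5 per cent chance that any given UFAI cares, a 100:1 population ratio pulls the friendly-resurrector probability far below 90 per cent.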
Or to test its ideas about the end of the world in a simulation? In that case it would simulate me from birth.
This is actually what I would expect to be most common. In which case we’re back to the enormously prolonged old age scenario, I suppose.
I don’t understand Tegmark’s objection. We don’t need an infinite world for BWI, just a very big one, big enough to contain many copies of me.
Basically, I think you’re right. Either Tegmark hasn’t thought about this enough, or he believes that it would shrink the size of our big world enormously. Kudos to him for devoting a chapter of a popular science book to the subject, though.
I still think that BWI is too speculative to be used in actual decision making.
Why do you think that it’s so speculative? MWI has a lot of support on LW and among people working on quantum foundations; cosmic inflation has basically universal acceptance among physicists (and alternatives, such as Steinhardt’s ekpyrotic cosmology, have basically the same implications in this regard); string theory is very plausible; Tegmark’s mathematical universe is what I would call speculative, but even it makes a lot of sense; and patternism, the other necessary ingredient, is again almost universally accepted on LW.
I also think that one’s enthusiasm about death prevention may depend on the urgency of the situation: if a house is on fire, everybody in it will be very enthusiastic about saving their lives.
Probably. But humans are basically built to strive to survive in a situation like that, meaning that their judgment is likely pretty severely impaired.
Now we can speak about RB freely. I mostly think that a mild version is true: good people will be rewarded more, but there will be no punishment or suffering. I know some people who independently came to the idea that a future AI will reward them. As for me, I’m not afraid of any version of RB, as I did a lot to promote the ideas of AI safety.
I still don’t get Tegmark’s idea; maybe I need to go to his book.
For example, we could live in a simulation with an afterlife, in which suicide is punished.
If we strongly believed in BWI, we could build a universal desire-fulfilment machine: just connect any desired outcome to a bomb, so that it explodes if our goal is not reached. But I am sceptical about all beliefs in general, which is probably also a shared idea on LW )) I will not risk permanent injury or death if I have a chance to survive without it. But I could imagine a situation where I would change my mind, if the real danger outweighed my uncertainty about BWI.
For example, if someone has cancer, he may prefer an operation with a 20 per cent chance of a positive outcome over chemo with a 40 per cent chance of a positive outcome but a slow and painful decline in case of failure. In this case BWI gives him a large chance of becoming completely illness-free.
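A minimal sketch of that reasoning in Python, under my assumption (reading the example this way) that a failed operation means death, i.e. a branch the patient never experiences, while failed chemo means a painful decline that is experienced:

```python
# BWI-style comparison: condition each treatment's outcomes on the
# branches the patient actually goes on experiencing.
def experienced_outcomes(p_cure, failure_state):
    """Outcome probabilities conditional on still having experiences."""
    outcomes = {"illness-free": p_cure}
    if failure_state != "death":       # death branches are never experienced
        outcomes[failure_state] = 1 - p_cure
    total = sum(outcomes.values())     # renormalise over experienced branches
    return {o: round(p / total, 2) for o, p in outcomes.items()}

print("operation:", experienced_outcomes(0.20, "death"))
# -> {'illness-free': 1.0}: every branch the patient experiences is a cure
print("chemo:", experienced_outcomes(0.40, "painful decline"))
# -> {'illness-free': 0.4, 'painful decline': 0.6}
```

On this (admittedly speculative) accounting, the riskier operation dominates, which is exactly why it matters whether BWI is solid enough to be used in actual decision making.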
This thread is not about values, but I think that values exist only inside human beings. An abstract rational agent may have no values at all, because it may prove that any value is just a logical mistake.