Interesting! I haven’t read the paper, but I wonder whether you consider the question of how the values of this hypothetical “next civilization” might radically depart from our own, and whether it might be the case that we don’t want them to be able to reconstruct us (because of the things they might want to do to us)?
I wrote in the article that our relation with the next civilization takes the form of an acausal deal: we provide them with information about how we fought global risks and failed, which will help them survive, and they provide us with “resurrection.”
I didn’t touch on the problem that we may not like the resurrection, as it is a more general problem that also applies to cryonics and even to life extension: what if I survive into the 22nd century and don’t like the values of the people (or AIs) who dominate at that time?
I don’t think their values will radically depart from ours, because human values are a convergent product of socio-biological evolution, and the next civilization will most likely face similar evolutionary pressures on its values. However, even so-called human values may be unpleasant: think of a zoo.
But in any case, I think that being alive is better than being dead, except in the case of infinite torture, and I see no reason why they would recreate humans only to put them in a situation of infinite torture. Surely some unpleasant moments could happen, but that is also part of human life here on Earth.