I don’t think it would make much difference. Consider my comment in Hallquist’s thread:

AI singularity won’t affect points 1 and 2: If information about your personality has not been preserved, there is nothing an AI can do to revive you.
It might affect points 3 and 4, but only to a limited extent: an AI might be better than vanilla humans at doing research, but it would not be able to develop technologies that are impossible or intrinsically impractical for physical reasons. A truly benevolent AI might be more motivated to revive cryopatients than regular people with their selfish desires are, but it would still have to allocate its resources economically, and cryopatient revival might not be the best use of them.
Points 5 and 6: clearly, the sooner the super-duper AI appears and develops revival tech, the higher the probability that your cryoremains are still around; but super AI appearing early and developing revival tech quickly is less probable than it appearing late and/or taking a long time to develop revival tech, so I would think that the two effects roughly cancel out. Also, as other people have noted, super AI appearing and giving you radical life extension within your lifetime would make cryonics a waste of money.
More generally, I think that AI singularity is itself a conjunctive event, with the more extreme and earlier scenarios being less probable than the less extreme and later ones. Therefore I don’t think that taking AI into account should significantly affect any estimate of cryonics success.
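To make the trade-off in points 5 and 6 concrete, here is a toy numerical sketch in Python. Every probability in it is invented purely for illustration; the structural point is only that if earlier AI arrival is less probable while preservation odds decay with time, no single arrival window dominates the overall revival estimate.

```python
# Toy model (all numbers invented for illustration) of the trade-off
# in points 5 and 6: earlier AI arrival means better odds that your
# cryoremains still exist, but earlier arrival is itself assumed to
# be less probable.

# P(super AI arrives and develops revival tech in this window)
p_arrival = {"0-50y": 0.02, "50-100y": 0.05, "100-200y": 0.10}

# P(your cryoremains are still preserved by then)
p_preserved = {"0-50y": 0.80, "50-100y": 0.50, "100-200y": 0.20}

# Contribution of each window to the overall revival probability
for t in p_arrival:
    print(t, round(p_arrival[t] * p_preserved[t], 3))  # 0.016, 0.025, 0.02

total = sum(p_arrival[t] * p_preserved[t] for t in p_arrival)
print("total:", round(total, 3))  # total: 0.061

# The per-window contributions are all of the same order: under these
# (made-up) assumptions, no single arrival date dominates the estimate,
# which is the "roughly cancel out" intuition.
```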
I think that AI singularity is itself a conjunctive event,
The core thesis of my book Singularity Rising is (basically) that this isn’t true, for the singularity at least, because there are many paths to a singularity and making progress along any one of them will help advance the others. For example, it seems highly likely that (conditional on our high-tech civilization continuing) within 40 years genetic engineering will have created much smarter humans than have ever existed, and these people will excel at computer programming compared to non-augmented humans.
The core thesis of my book Singularity Rising is (basically) that this isn’t true, for the singularity at least, because there are many paths to a singularity and making progress along any one of them will help advance the others.
Well, I haven’t read your book, so I can’t rule out that you might have made some good arguments I’m not aware of, but given the publicly available arguments I know of, I don’t think this is true.
For example, it seems highly likely that (conditional on our high-tech civilization continuing) within 40 years genetic engineering will have created much smarter humans than have ever existed, and these people will excel at computer programming compared to non-augmented humans.
Is it?
There are some neurological arguments that the human brain is near the maximum intelligence limit for a biological brain. We are probably not going to breed people with IQ >200; perhaps we might breed people with IQ 140-160, but will there be tradeoffs that make it problematic to do this at scale? Will there be a demand for such humans? Will they devote their efforts to AI research, or will their comparative advantage drive them to something else? And how good will they be at developing super AI? As a technology becomes more mature, making progress becomes more difficult because the low-hanging fruit has already been picked, and intelligence itself might have diminishing returns (at the very least, I would be surprised to observe an inverse linear correlation between average AI researcher IQ and time to AI). And, of course, if singularity-inducing AI is impossible or impractical, the point is moot: these genetically enhanced Einsteins will not develop it.
In general, with enough imagination you can envision many highly conjunctive ad hoc scenarios and put them into a disjunction, but I find this type of thinking highly suspicious, because you could use it to justify pretty much anything you wanted to believe. I think it’s better to recognize that we don’t have a crystal ball to predict the future, and that betting on extreme scenarios is probably not going to be a good deal.
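As a rough illustration of the conjunctive point, here is a back-of-the-envelope calculation in Python. The stages and the flat 0.5 probabilities are placeholders assumed purely for the arithmetic, not estimates anyone in this thread has defended.

```python
# Back-of-the-envelope: a conjunctive scenario is a product of stage
# probabilities, so even moderately likely stages compound downward.
# All stage probabilities here are assumed placeholders.
stages = {
    "information preserved at death": 0.5,
    "revival physically possible": 0.5,
    "revival tech actually developed": 0.5,
    "someone motivated and funded to revive you": 0.5,
    "cryoremains survive until then": 0.5,
}

p = 1.0
for stage, prob in stages.items():
    p *= prob
print(p)  # 0.5 ** 5 = 0.03125

# Wrapping several such scenarios in a disjunction only helps if the
# paths are genuinely independent; if they all share the same weak
# links (e.g. "information preserved at death"), that shared factor
# caps the total probability no matter how many scenarios you list.
```

Nothing hangs on the particular numbers; the point is the multiplicative structure, and the caveat that a disjunction of correlated scenarios doesn’t escape it.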