Cryonics was mentioned, but nobody mentioned the irony that if you replace P(doom) with P(cryonics doesn't work), you get a pretty good argument in support of cryonics. And if you account for the whole "possibly murdering everyone in a mad bid for immortality" business, cryonics is clearly superior: progress in cryonics is non-glacial, we have an alternative in the form of formaldehyde fixation, and we are likely to get mind-uploading technology within 50 years, which gives us the benefits of AGI for free without most of the downsides.
We have such good chances of getting the whole universe; let's not destroy them by rushing.
Making progress in cryonics or, better still, in longevity medicine would be a good way to lengthen the optimal [for existing people, from a mundane perspective] AI timelines.
With cryonics, the social obstacles seem formidable. My impression is that there's at least a somewhat decent chance that current technology is already sufficient for successful preservation, and has been so for decades; yet uptake remains negligible. The discontinuity that suspension involves (in social participation, etc.) also reduces its attractiveness.
With longevity medicine, uptake would be easier (although it would still be challenging to achieve a globally significant impact if treatments are expensive). So far, technical progress has been slow. Maybe there's some hope that AI progress could accelerate this, even short of full general superintelligence: an additional reason why, if there is to be a pause, it may be best for that pause to occur as late as possible.
There are also accumulating risks that our civilization gets destroyed before superintelligence. This includes x-risks. But scenarios of collapse or derailment that fall short of extinction or permanent loss of potential are also significant from a person-affecting perspective. For example, a major nuclear war, a bad engineered pandemic, or a global breakdown of order could be devastating from a person-affecting perspective even if humanity recovered in the long run.
If somehow we could become both individually and collectively safe, then I think the person-affecting perspective would favor a much slower and more risk-averse pace of AI development.