Making progress in cryonics or, better still, in longevity medicine, would be a good way to lengthen the optimal AI timelines (optimal for existing people, from a mundane perspective).
With cryonics, the social obstacles seem formidable. My impression is that there’s at least a somewhat decent chance that current technology is already sufficient for successful preservation, and has been so for decades; yet uptake remains negligible. The discontinuity that suspension involves—in social participation and the like—also reduces its attractiveness.
With longevity medicine, uptake would be easier (although it would still be challenging to achieve a globally significant impact if treatments are expensive). So far, technical progress has been slow. Maybe there’s some hope that AI progress could accelerate this, even short of full general superintelligence—an additional reason why, if there is to be a pause, it may be best for that pause to occur as late as possible.
There are also accumulating risks that our civilization gets destroyed before superintelligence. These include existential risks. But also significant from a person-affecting perspective are scenarios of collapse or derailment that fall short of extinction or permanent loss of potential. For example, a large nuclear war, a severe engineered pandemic, or a global breakdown of order could be devastating from a person-affecting perspective even if humanity recovered in the long run.
If somehow we could become both individually and collectively safe, then I think the person-affecting perspective would favor a much slower and more risk-averse pace of AI development.