(Astronomical) suffering risks, also known as s-risks, are risks of the creation of suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.
S-risks are an example of existential risk (also known as x-risks) according to Nick Bostrom’s original definition, as they threaten to “permanently and drastically curtail [Earth-originating intelligent life’s] potential”. Most existential risks are of the form “event E happens which drastically reduces the number of conscious experiences in the future”. S-risks therefore serve as a useful reminder that some x-risks are scary because they cause bad experiences, and not just because they prevent good ones.
Within the space of x-risks, we can distinguish x-risks that are s-risks but do not involve extinction, x-risks that involve human extinction but not immense suffering, x-risks that involve both, and x-risks that involve neither. For example:
| | extinction risk | non-extinction risk |
| --- | --- | --- |
| suffering risk | Misaligned AGI wipes out humans, simulates many suffering alien civilizations. | Misaligned AGI tiles the universe with unhappy human experiences. |
| non-suffering risk | Misaligned AGI wipes out humans. | Misaligned AGI keeps humans as “pets,” limiting growth but not causing immense suffering. |
A related concept is hyperexistential risk, the risk of “fates worse than death” on an astronomical scale. It is not clear whether all hyperexistential risks are s-risks per se. It is clear that not all s-risks are hyperexistential, since “tiling the universe with mildly unhappy experiences” would be an s-risk but very likely wouldn’t be a worse fate than death.
Although the Machine Intelligence Research Institute and Future of Humanity Institute have investigated strategies to prevent s-risks, the only EA organization with s-risk prevention research as its primary focus is the Foundational Research Institute. Much of FRI’s work is on suffering-focused AI safety and crucial considerations.
Another approach to reducing s-risk is to “expand the moral circle”, so that future (post)human civilizations and AI are less likely to instrumentally cause suffering to non-human minds such as animals or digital sentience. Sentience Institute works on this form of value spreading.