Risks of Astronomical Suffering (S-risks)

(Astronomical) suffering risks, also known as s-risks, are risks that the far future will contain intense suffering on an astronomical scale, vastly exceeding all the suffering that has existed on Earth so far.

S-risks are a subset of existential risks (also known as x-risks) under Nick Bostrom’s original definition, as they threaten to “permanently and drastically curtail [Earth-originating intelligent life’s] potential”. Most existential risks take the form “event E happens which drastically reduces the number of conscious experiences in the future”. S-risks therefore serve as a useful reminder that some x-risks are scary because they cause bad experiences, not just because they prevent good ones.

Within the space of x-risks, we can distinguish those that involve immense suffering, those that involve human extinction, those that involve both, and those that involve neither. For example:

| | extinction risk | non-extinction risk |
|---|---|---|
| suffering risk | Misaligned AGI wipes out humans, simulates many suffering alien civilizations. | Misaligned AGI tiles the universe with experiences of severe suffering. |
| non-suffering risk | Misaligned AGI wipes out humans. | Misaligned AGI keeps humans as “pets,” limiting growth but not causing immense suffering. |

A related concept is hyperexistential risk, the risk of “fates worse than death” on an astronomical scale. It is not clear whether all hyperexistential risks are s-risks per se. But arguably all s-risks are hyperexistential, since “tiling the universe with experiences of severe suffering” would likely be worse than death.

Two EA organizations have s-risk prevention research as their primary focus: the Center on Long-Term Risk (CLR) and the Center for Reducing Suffering. Much of CLR’s work concerns suffering-focused AI safety and crucial considerations. The Machine Intelligence Research Institute and the Future of Humanity Institute have also investigated strategies to prevent s-risks, though to a much lesser extent.

Another approach to reducing s-risk is to “expand the moral circle” while raising concern for suffering, so that future (post)human civilizations and AIs are less likely to instrumentally cause suffering to non-human minds such as animals or digital sentience. Sentience Institute works on this value-spreading problem.

S-risks: Why they are the worst existential risks, and how to prevent them
Kaj_Sotala · 20 Jun 2017 12:34 UTC · 31 points · 106 comments · 1 min read · LW link (foundational-research.org)

Preface to CLR’s Research Agenda on Cooperation, Conflict, and TAI
JesseClifton · 13 Dec 2019 21:02 UTC · 55 points · 9 comments · 2 min read · LW link

How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe?
Anirandis · 10 Sep 2020 0:40 UTC · 18 points · 20 comments · 2 min read · LW link

[Question] Outcome Terminology?
Dach · 14 Sep 2020 18:04 UTC · 6 points · 0 comments · 1 min read · LW link

Mini map of s-risks
turchin · 8 Jul 2017 12:33 UTC · 3 points · 34 comments · 2 min read · LW link

Sections 1 & 2: Introduction, Strategy and Governance
JesseClifton · 17 Dec 2019 21:27 UTC · 33 points · 5 comments · 14 min read · LW link

Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms
JesseClifton · 17 Dec 2019 21:46 UTC · 19 points · 2 comments · 12 min read · LW link

Sections 5 & 6: Contemporary Architectures, Humans in the Loop
JesseClifton · 20 Dec 2019 3:52 UTC · 27 points · 4 comments · 10 min read · LW link

Section 7: Foundations of Rational Agency
JesseClifton · 22 Dec 2019 2:05 UTC · 14 points · 3 comments · 8 min read · LW link

Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority
ignoranceprior · 14 Oct 2016 19:58 UTC · 9 points · 4 comments · 1 min read · LW link (foundational-research.org)

Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse
avturchin · 27 Sep 2018 10:09 UTC · 7 points · 6 comments · 4 min read · LW link

The Dilemma of Worse Than Death Scenarios
arkaeik · 10 Jul 2018 9:18 UTC · 6 points · 17 comments · 4 min read · LW link

Siren worlds and the perils of over-optimised search
Stuart_Armstrong · 7 Apr 2014 11:00 UTC · 71 points · 417 comments · 7 min read · LW link

Risk of Mass Human Suffering / Extinction due to Climate Emergency
willfranks · 14 Mar 2019 18:32 UTC · 4 points · 3 comments · 1 min read · LW link

Suffering-Focused Ethics in the Infinite Universe. How can we redeem ourselves if Multiverse Immortality is real and subjective death is impossible.
Szymon Kucharski · 24 Feb 2021 21:02 UTC · −5 points · 4 comments · 70 min read · LW link

Physicalism implies experience never dies. So what am I going to experience after it does?
Szymon Kucharski · 14 Mar 2021 14:45 UTC · −4 points · 0 comments · 30 min read · LW link

Averting suffering with sentience throttlers (proposal)
Quinn · 5 Apr 2021 10:54 UTC · 8 points · 7 comments · 3 min read · LW link

CLR’s recent work on multi-agent systems
JesseClifton · 9 Mar 2021 2:28 UTC · 51 points · 0 comments · 13 min read · LW link

[Book Review] “Suffering-focused Ethics” by Magnus Vinding
KStub · 18 Oct 2021 23:34 UTC · 7 points · 2 comments · 25 min read · LW link