
Risks of Astronomical Suffering (S-risks)

Suffering risks (also known as s-risks) are risks of the creation of suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far. In this sense, many s-risks can be considered a form of existential risk according to Bostrom’s original definition, as they threaten to “curtail [humanity’s] potential”. However, it is often useful to distinguish between risks that threaten to prevent future populations from coming into existence (extinction risks) and those which would create a large amount of suffering (s-risks).

Although the Machine Intelligence Research Institute and the Future of Humanity Institute have investigated strategies to prevent s-risks, the only EA organization with s-risk prevention research as its primary focus is the Foundational Research Institute (FRI, since renamed the Center on Longterm Risk). Much of FRI’s work is on suffering-focused AI safety and crucial considerations. Another approach to reducing s-risk is to “expand the moral circle”, so that future (post)human civilizations and AIs are less likely to instrumentally cause suffering to non-human minds such as animals or digital sentiences. The Sentience Institute works on this value-spreading problem.


S-risks: Why they are the worst existential risks, and how to prevent them

Kaj_Sotala
20 Jun 2017 12:34 UTC
21 points
107 comments · 1 min read · LW link
(foundational-research.org)

How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe?

Anirandis
10 Sep 2020 0:40 UTC
18 points
20 comments · 2 min read · LW link

[Question] Outcome Terminology?

Dach
14 Sep 2020 18:04 UTC
6 points
0 comments · 1 min read · LW link

Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority

ignoranceprior
14 Oct 2016 19:58 UTC
6 points
4 comments · 1 min read · LW link
(foundational-research.org)

Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse

avturchin
27 Sep 2018 10:09 UTC
7 points
3 comments · 4 min read · LW link

Mini map of s-risks

turchin
8 Jul 2017 12:33 UTC
3 points
34 comments · 2 min read · LW link

[Link] Suffering-focused AI safety: Why “fail-safe” measures might be particularly promising

David Althaus
21 Jul 2016 20:22 UTC
9 points
5 comments · 1 min read · LW link

Preface to CLR’s Research Agenda on Cooperation, Conflict, and TAI

JesseClifton
13 Dec 2019 21:02 UTC
55 points
8 comments · 2 min read · LW link

Sections 1 & 2: Introduction, Strategy and Governance

JesseClifton
17 Dec 2019 21:27 UTC
35 points
5 comments · 14 min read · LW link

Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms

JesseClifton
17 Dec 2019 21:46 UTC
21 points
2 comments · 12 min read · LW link

Sections 5 & 6: Contemporary Architectures, Humans in the Loop

JesseClifton
20 Dec 2019 3:52 UTC
29 points
4 comments · 10 min read · LW link

Section 7: Foundations of Rational Agency

JesseClifton
22 Dec 2019 2:05 UTC
16 points
3 comments · 8 min read · LW link

The Dilemma of Worse Than Death Scenarios

arkaeik
10 Jul 2018 9:18 UTC
3 points
17 comments · 4 min read · LW link

Siren worlds and the perils of over-optimised search

Stuart_Armstrong
7 Apr 2014 11:00 UTC
45 points
415 comments · 7 min read · LW link

Risk of Mass Human Suffering / Extinction due to Climate Emergency

willfranks
14 Mar 2019 18:32 UTC
6 points
3 comments · 1 min read · LW link