
Center on Long-Term Risk (CLR)


The Center on Long-Term Risk, formerly the Foundational Research Institute, is an effective altruist research group affiliated with the Swiss/German Effective Altruism Foundation. It investigates cooperative strategies to reduce risks of astronomical suffering in humanity’s future (s-risks). This includes not only (post-)human suffering, but also the suffering of non-human animals and potential digital sentience. Its research is interdisciplinary, drawing on insights from artificial intelligence, anthropic reasoning, international relations, sociology, philosophy, and other fields.

Sections 1 & 2: Introduction, Strategy and Governance

JesseClifton · 17 Dec 2019 21:27 UTC
35 points
8 comments · 14 min read · LW link

Sections 5 & 6: Contemporary Architectures, Humans in the Loop

JesseClifton · 20 Dec 2019 3:52 UTC
27 points
4 comments · 10 min read · LW link

Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms

JesseClifton · 17 Dec 2019 21:46 UTC
20 points
2 comments · 12 min read · LW link

Section 7: Foundations of Rational Agency

JesseClifton · 22 Dec 2019 2:05 UTC
14 points
4 comments · 8 min read · LW link

Preface to CLR’s Research Agenda on Cooperation, Conflict, and TAI

JesseClifton · 13 Dec 2019 21:02 UTC
62 points
10 comments · 2 min read · LW link

Multiverse-wide Cooperation via Correlated Decision Making

Kaj_Sotala · 20 Aug 2017 12:01 UTC
7 points
2 comments · 1 min read · LW link
(foundational-research.org)

Against GDP as a metric for timelines and takeoff speeds

Daniel Kokotajlo · 29 Dec 2020 17:42 UTC
134 points
18 comments · 14 min read · LW link · 1 review

Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain

Daniel Kokotajlo · 18 Jan 2021 12:08 UTC
185 points
85 comments · 14 min read · LW link · 1 review

2019 AI Alignment Literature Review and Charity Comparison

Larks · 19 Dec 2019 3:00 UTC
130 points
18 comments · 62 min read · LW link

2018 AI Alignment Literature Review and Charity Comparison

Larks · 18 Dec 2018 4:46 UTC
190 points
26 comments · 62 min read · LW link · 1 review

CLR’s recent work on multi-agent systems

JesseClifton · 9 Mar 2021 2:28 UTC
54 points
1 comment · 13 min read · LW link

Formalizing Objections against Surrogate Goals

VojtaKovarik · 2 Sep 2021 16:24 UTC
7 points
23 comments · 20 min read · LW link

When does technical work to reduce AGI conflict make a difference?: Introduction

14 Sep 2022 19:38 UTC
49 points
3 comments · 6 min read · LW link

When would AGIs engage in conflict?

14 Sep 2022 19:38 UTC
48 points
3 comments · 13 min read · LW link

When is intent alignment sufficient or necessary to reduce AGI conflict?

14 Sep 2022 19:39 UTC
37 points
0 comments · 9 min read · LW link

[Question] Likelihood of hyperexistential catastrophe from a bug?

Anirandis · 18 Jun 2020 16:23 UTC
13 points
27 comments · 1 min read · LW link

[Question] (Crosspost) Asking for online calls on AI s-risks discussions

jackchang110 · 15 May 2023 17:42 UTC
1 point
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)