Center on Long-Term Risk (CLR)

Last edit: 16 Sep 2020 22:28 UTC by Ruby

The Center on Long-Term Risk, formerly the Foundational Research Institute, is an effective altruist research group affiliated with the Swiss/German Effective Altruism Foundation. It investigates cooperative strategies to reduce risks of astronomical suffering in humanity's future (s-risks). This includes not only (post-)human suffering, but also the suffering of non-human animals and potential digital sentience. Its research is interdisciplinary, drawing on insights from artificial intelligence, anthropic reasoning, international relations, sociology, philosophy, and other fields.


Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI

JesseClifton · 13 Dec 2019 21:02 UTC · 56 points · 10 comments · 2 min read · LW link

Sections 1 & 2: Introduction, Strategy and Governance

JesseClifton · 17 Dec 2019 21:27 UTC · 34 points · 5 comments · 14 min read · LW link

Sections 5 & 6: Contemporary Architectures, Humans in the Loop

JesseClifton · 20 Dec 2019 3:52 UTC · 27 points · 4 comments · 10 min read · LW link

Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms

JesseClifton · 17 Dec 2019 21:46 UTC · 19 points · 2 comments · 12 min read · LW link

Section 7: Foundations of Rational Agency

JesseClifton · 22 Dec 2019 2:05 UTC · 14 points · 4 comments · 8 min read · LW link

Multiverse-wide Cooperation via Correlated Decision Making

Kaj_Sotala · 20 Aug 2017 12:01 UTC · 6 points · 2 comments · 1 min read · LW link

Against GDP as a metric for timelines and takeoff speeds

Daniel Kokotajlo · 29 Dec 2020 17:42 UTC · 130 points · 15 comments · 14 min read · LW link · 1 review

Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain

Daniel Kokotajlo · 18 Jan 2021 12:08 UTC · 181 points · 84 comments · 14 min read · LW link

2019 AI Alignment Literature Review and Charity Comparison

Larks · 19 Dec 2019 3:00 UTC · 130 points · 18 comments · 62 min read · LW link

2018 AI Alignment Literature Review and Charity Comparison

Larks · 18 Dec 2018 4:46 UTC · 190 points · 26 comments · 62 min read · LW link · 1 review

CLR's recent work on multi-agent systems

JesseClifton · 9 Mar 2021 2:28 UTC · 51 points · 1 comment · 13 min read · LW link

Formalizing Objections against Surrogate Goals

VojtaKovarik · 2 Sep 2021 16:24 UTC · 7 points · 22 comments · 20 min read · LW link