
Center on Long-Term Risk (CLR)

Last edit: 8 Dec 2023 22:35 UTC by Jonas V

The Center on Long-Term Risk (CLR), formerly the Foundational Research Institute, is a research group that investigates cooperative strategies to reduce risks of astronomical suffering (s-risks). This includes not only (post-)human suffering but also the suffering of potential digital sentience. Its research is interdisciplinary, drawing on insights from artificial intelligence, anthropic reasoning, international relations, philosophy, and other fields. Its research agenda focuses on encouraging cooperative behavior in, and avoiding conflict between, transformative AI systems.

Section 7: Foundations of Rational Agency

JesseClifton, 22 Dec 2019 2:05 UTC
14 points
4 comments · 8 min read · LW link

Preface to CLR’s Research Agenda on Cooperation, Conflict, and TAI

JesseClifton, 13 Dec 2019 21:02 UTC
62 points
10 comments · 2 min read · LW link

Sections 1 & 2: Introduction, Strategy and Governance

JesseClifton, 17 Dec 2019 21:27 UTC
35 points
8 comments · 14 min read · LW link

Sections 5 & 6: Contemporary Architectures, Humans in the Loop

JesseClifton, 20 Dec 2019 3:52 UTC
27 points
4 comments · 10 min read · LW link

Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms

JesseClifton, 17 Dec 2019 21:46 UTC
20 points
2 comments · 12 min read · LW link

Multiverse-wide Cooperation via Correlated Decision Making

Kaj_Sotala, 20 Aug 2017 12:01 UTC
5 points
2 comments · 1 min read · LW link
(foundational-research.org)

Against GDP as a metric for timelines and takeoff speeds

Daniel Kokotajlo, 29 Dec 2020 17:42 UTC
140 points
19 comments · 14 min read · LW link · 1 review

Making AIs less likely to be spiteful

26 Sep 2023 14:12 UTC
104 points
4 comments · 10 min read · LW link

Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain

Daniel Kokotajlo, 18 Jan 2021 12:08 UTC
194 points
86 comments · 13 min read · LW link · 1 review

2019 AI Alignment Literature Review and Charity Comparison

Larks, 19 Dec 2019 3:00 UTC
130 points
18 comments · 62 min read · LW link

2018 AI Alignment Literature Review and Charity Comparison

Larks, 18 Dec 2018 4:46 UTC
190 points
26 comments · 62 min read · LW link · 1 review

[Question] Likelihood of hyperexistential catastrophe from a bug?

Anirandis, 18 Jun 2020 16:23 UTC
14 points
27 comments · 1 min read · LW link

Responses to apparent rationalist confusions about game / decision theory

Anthony DiGiovanni, 30 Aug 2023 22:02 UTC
142 points
14 comments · 12 min read · LW link

Individually incentivized safe Pareto improvements in open-source bargaining

17 Jul 2024 18:26 UTC
39 points
2 comments · 17 min read · LW link

[Question] (Crosspost) Asking for online calls on AI s-risks discussions

jackchang110, 15 May 2023 17:42 UTC
1 point
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

CLR’s recent work on multi-agent systems

JesseClifton, 9 Mar 2021 2:28 UTC
54 points
2 comments · 13 min read · LW link

Formalizing Objections against Surrogate Goals

VojtaKovarik, 2 Sep 2021 16:24 UTC
16 points
23 comments · 1 min read · LW link

When does technical work to reduce AGI conflict make a difference?: Introduction

14 Sep 2022 19:38 UTC
52 points
3 comments · 6 min read · LW link

When would AGIs engage in conflict?

14 Sep 2022 19:38 UTC
52 points
5 comments · 13 min read · LW link

When is intent alignment sufficient or necessary to reduce AGI conflict?

14 Sep 2022 19:39 UTC
40 points
0 comments · 9 min read · LW link

Open-minded updatelessness

10 Jul 2023 11:08 UTC
65 points
21 comments · 12 min read · LW link