
Multipolar Scenarios

Last edit: 28 Jul 2022 15:57 UTC by Multicore

A multipolar scenario is one where no single AI or agent takes over the world.

The concept is featured in Nick Bostrom's book Superintelligence.

Superintelligence 17: Multipolar scenarios

KatjaGrace · 6 Jan 2015 6:44 UTC
9 points
38 comments · 6 min read · LW link

Equilibrium and prior selection problems in multipolar deployment

JesseClifton · 2 Apr 2020 20:06 UTC
21 points
11 comments · 11 min read · LW link

What Failure Looks Like: Distilling the Discussion

Ben Pace · 29 Jul 2020 21:49 UTC
81 points
14 comments · 7 min read · LW link

[Question] In a multipolar scenario, how do people expect systems to be trained to interact with systems developed by other labs?

JesseClifton · 1 Dec 2020 20:04 UTC
14 points
6 comments · 1 min read · LW link

Commitment and credibility in multipolar AI scenarios

anni_leskela · 4 Dec 2020 18:48 UTC
31 points
3 comments · 18 min read · LW link

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

Andrew_Critch · 31 Mar 2021 23:50 UTC
272 points
64 comments · 22 min read · LW link · 1 review

Why multi-agent safety is important

Akbir Khan · 14 Jun 2022 9:23 UTC
10 points
2 comments · 10 min read · LW link

[Question] How would two superintelligent AIs interact, if they are unaligned with each other?

Nathan1123 · 9 Aug 2022 18:58 UTC
4 points
6 comments · 1 min read · LW link

Trajectories to 2036

ukc10014 · 20 Oct 2022 20:23 UTC
3 points
1 comment · 14 min read · LW link

Nine Points of Collective Insanity

27 Dec 2022 3:14 UTC
−2 points
3 comments · 1 min read · LW link
(mflb.com)

Alignment is not enough

Alan Chan · 12 Jan 2023 0:33 UTC
11 points
6 comments · 11 min read · LW link
(coordination.substack.com)

The Alignment Problems

Martín Soto · 12 Jan 2023 22:29 UTC
19 points
0 comments · 4 min read · LW link

Agentized LLMs will change the alignment landscape

Seth Herd · 9 Apr 2023 2:29 UTC
153 points
95 comments · 3 min read · LW link

AI x-risk, approximately ordered by embarrassment

Alex Lawsen · 12 Apr 2023 23:01 UTC
140 points
7 comments · 19 min read · LW link

Capabilities and alignment of LLM cognitive architectures

Seth Herd · 18 Apr 2023 16:29 UTC
80 points
18 comments · 20 min read · LW link

60+ Possible Futures

Bart Bussmann · 26 Jun 2023 9:16 UTC
92 points
18 comments · 11 min read · LW link

A Common-Sense Case For Mutually-Misaligned AGIs Allying Against Humans

Thane Ruthenis · 17 Dec 2023 20:28 UTC
29 points
7 comments · 11 min read · LW link

Achieving AI Alignment through Deliberate Uncertainty in Multiagent Systems

Florian_Dietz · 17 Feb 2024 8:45 UTC
3 points
0 comments · 13 min read · LW link