
AI Risk Concrete Stories

Tag · Last edit: 10 Jun 2022 18:18 UTC by Raemon

It Looks Like You’re Trying To Take Over The World

gwern · 9 Mar 2022 16:35 UTC
391 points
125 comments · 1 min read · LW link
(www.gwern.net)

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

Andrew_Critch · 31 Mar 2021 23:50 UTC
221 points
64 comments · 22 min read · LW link · 1 review

Another plausible scenario of AI risk: AI builds military infrastructure while collaborating with humans, defects later.

avturchin · 10 Jun 2022 17:24 UTC
10 points
2 comments · 1 min read · LW link

Clarifying “What failure looks like”

Sam Clarke · 20 Sep 2020 20:40 UTC
95 points
14 comments · 17 min read · LW link

A plausible story about AI risk.

DeLesley Hutchins · 10 Jun 2022 2:08 UTC
14 points
1 comment · 4 min read · LW link

Slow motion videos as AI risk intuition pumps

Andrew_Critch · 14 Jun 2022 19:31 UTC
222 points
38 comments · 2 min read · LW link

The next decades might be wild

Marius Hobbhahn · 15 Dec 2022 16:10 UTC
167 points
37 comments · 41 min read · LW link

A Modest Pivotal Act

anonymousaisafety · 13 Jun 2022 19:24 UTC
−15 points
1 comment · 5 min read · LW link

What success looks like

28 Jun 2022 14:38 UTC
19 points
4 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Responding to ‘Beyond Hyperanthropomorphism’

ukc10014 · 14 Sep 2022 20:37 UTC
8 points
0 comments · 16 min read · LW link

AI Safety Endgame Stories

Ivan Vendrov · 28 Sep 2022 16:58 UTC
27 points
11 comments · 11 min read · LW link

A Story of AI Risk: InstructGPT-N

peterbarnett · 26 May 2022 23:22 UTC
24 points
0 comments · 8 min read · LW link

A bridge to Dath Ilan? Improved governance on the critical path to AI alignment.

Jackson Wagner · 18 May 2022 15:51 UTC
23 points
0 comments · 11 min read · LW link

Human level AI can plausibly take over the world

anithite · 1 Mar 2023 23:27 UTC
18 points
9 comments · 2 min read · LW link

Gradual takeoff, fast failure

Max H · 16 Mar 2023 22:02 UTC
10 points
4 comments · 5 min read · LW link