
AI Development Pause

Tag · Last edit: 6 Apr 2023 17:32 UTC by Ruby

Pause AI Development?

PeterMcCluskey · 6 Apr 2023 17:23 UTC
11 points
0 comments · 2 min read · LW link
(bayesianinvestor.com)

Pausing AI Developments Isn't Enough. We Need to Shut it All Down

Eliezer Yudkowsky · 8 Apr 2023 0:36 UTC
248 points
39 comments · 12 min read · LW link

Financial Times: We must slow down the race to God-like AI

trevor · 13 Apr 2023 19:55 UTC
103 points
17 comments · 16 min read · LW link
(www.ft.com)

List of requests for an AI slowdown/halt.

Cleo Nardo · 14 Apr 2023 23:55 UTC
46 points
6 comments · 1 min read · LW link

Request: stop advancing AI capabilities

So8res · 26 May 2023 17:42 UTC
155 points
23 comments · 1 min read · LW link

Public Opinion on AI Safety: AIMS 2023 and 2021 Summary

25 Sep 2023 18:55 UTC
3 points
2 comments · 3 min read · LW link
(www.sentienceinstitute.org)

The International PauseAI Protest: Activism under uncertainty

Joseph Miller · 12 Oct 2023 17:36 UTC
32 points
1 comment · 1 min read · LW link

Global Pause AI Protest 10/21

14 Oct 2023 3:20 UTC
5 points
0 comments · 1 min read · LW link

RSPs are pauses done right

evhub · 14 Oct 2023 4:06 UTC
166 points
70 comments · 7 min read · LW link

Muddling Along Is More Likely Than Dystopia

Jeffrey Heninger · 20 Oct 2023 21:25 UTC
82 points
10 comments · 8 min read · LW link

AI Safety is Dropping the Ball on Clown Attacks

trevor · 22 Oct 2023 20:09 UTC
69 points
72 comments · 34 min read · LW link

AI Pause Will Likely Backfire (Guest Post)

jsteinhardt · 24 Oct 2023 4:30 UTC
45 points
6 comments · 15 min read · LW link
(bounded-regret.ghost.io)

Thoughts on responsible scaling policies and regulation

paulfchristiano · 24 Oct 2023 22:21 UTC
214 points
33 comments · 6 min read · LW link

AI as a science, and three obstacles to alignment strategies

So8res · 25 Oct 2023 21:00 UTC
175 points
79 comments · 11 min read · LW link

Responsible Scaling Policies Are Risk Management Done Wrong

simeon_c · 25 Oct 2023 23:46 UTC
114 points
33 comments · 22 min read · LW link
(www.navigatingrisks.ai)

Architects of Our Own Demise: We Should Stop Developing AI

Roko · 26 Oct 2023 0:36 UTC
174 points
74 comments · 3 min read · LW link

Sensor Exposure can Compromise the Human Brain in the 2020s

trevor · 26 Oct 2023 3:31 UTC
17 points
6 comments · 10 min read · LW link

5 Reasons Why Governments/Militaries Already Want AI for Information Warfare

trevor · 30 Oct 2023 16:30 UTC
32 points
0 comments · 10 min read · LW link

We are already in a persuasion-transformed world and must take precautions

trevor · 4 Nov 2023 15:53 UTC
36 points
14 comments · 6 min read · LW link

An illustrative model of backfire risks from pausing AI research

Maxime Riché · 6 Nov 2023 14:30 UTC
33 points
3 comments · 11 min read · LW link

Concrete positive visions for a future without AGI

Max H · 8 Nov 2023 3:12 UTC
41 points
28 comments · 8 min read · LW link

Helpful examples to get a sense of modern automated manipulation

trevor · 12 Nov 2023 20:49 UTC
33 points
3 comments · 9 min read · LW link

Are There Examples of Overhang for Other Technologies?

Jeffrey Heninger · 13 Dec 2023 21:48 UTC
59 points
50 comments · 11 min read · LW link
(blog.aiimpacts.org)

OpenAI, DeepMind, Anthropic, etc. should shut down.

Tamsin Leake · 17 Dec 2023 20:01 UTC
36 points
48 comments · 3 min read · LW link
(carado.moe)

Employee Incentives Make AGI Lab Pauses More Costly

nikola · 22 Dec 2023 5:04 UTC
28 points
12 comments · 3 min read · LW link

Is principled mass-outreach possible, for AGI X-risk?

NicholasKross · 21 Jan 2024 17:45 UTC
9 points
5 comments · 3 min read · LW link