
Object-Level AI Risk Skepticism

Last edit: 11 Mar 2023 18:29 UTC by Vladimir_Nesov

Posts that express object-level reasons to be skeptical of core AI X-Risk arguments.

My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”

Quintin Pope · 21 Mar 2023 0:06 UTC
325 points
183 comments · 33 min read · LW link

Counterarguments to the basic AI x-risk case

KatjaGrace · 14 Oct 2022 13:00 UTC
350 points
122 comments · 34 min read · LW link
(aiimpacts.org)

Deceptive Alignment is <1% Likely by Default

DavidW · 21 Feb 2023 15:09 UTC
51 points
14 comments · 10 min read · LW link

Linkpost: A tale of 2.5 orthogonality theses

DavidW · 13 Mar 2023 14:19 UTC
9 points
3 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Blake Richards on Why he is Skeptical of Existential Risk from AI

Michaël Trazzi · 14 Jun 2022 19:09 UTC
41 points
12 comments · 4 min read · LW link
(theinsideview.ai)

Linkpost: A Contra AI FOOM Reading List

DavidW · 13 Mar 2023 14:45 UTC
18 points
2 comments · 1 min read · LW link
(magnusvinding.com)

[Question] What Do AI Safety Pitches Not Get About Your Field?

Aris · 22 Sep 2022 21:27 UTC
28 points
3 comments · 1 min read · LW link

Linkpost: ‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting

DavidW · 13 Mar 2023 16:52 UTC
0 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)