AI Risk Skepticism

Last edit: 17 Jan 2025 22:23 UTC by Dakara

AI Risk Skepticism is the view that the risks posed by artificial intelligence (AI) are overstated or misunderstood, particularly the direct, tangible dangers arising from the behavior of AI systems themselves. Skeptics of object-level AI risk argue that fears of highly autonomous, superintelligent AI causing catastrophic outcomes are premature, or that such outcomes are unlikely.

My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”

Quintin Pope · 21 Mar 2023 0:06 UTC
358 points
232 comments · 39 min read · LW link · 1 review

Counterarguments to the basic AI x-risk case

KatjaGrace · 14 Oct 2022 13:00 UTC
370 points
124 comments · 34 min read · LW link · 1 review
(aiimpacts.org)

Deceptive Alignment is <1% Likely by Default

DavidW · 21 Feb 2023 15:09 UTC
89 points
31 comments · 14 min read · LW link · 1 review

Contra Yudkowsky on AI Doom

jacob_cannell · 24 Apr 2023 0:20 UTC
89 points
111 comments · 9 min read · LW link

Counting arguments provide no evidence for AI doom

27 Feb 2024 23:03 UTC
97 points
188 comments · 14 min read · LW link

Many arguments for AI x-risk are wrong

TurnTrout · 5 Mar 2024 2:31 UTC
158 points
87 comments · 12 min read · LW link

Evolution is a bad analogy for AGI: inner alignment

Quintin Pope · 13 Aug 2022 22:15 UTC
79 points
15 comments · 8 min read · LW link

Arguments for optimism on AI Alignment (I don’t endorse this version, will reupload a new version soon.)

Noosphere89 · 15 Oct 2023 14:51 UTC
28 points
129 comments · 25 min read · LW link

Order Matters for Deceptive Alignment

DavidW · 15 Feb 2023 19:56 UTC
57 points
19 comments · 7 min read · LW link

The Paris AI Anti-Safety Summit

Zvi · 12 Feb 2025 14:00 UTC
106 points
19 comments · 21 min read · LW link
(thezvi.wordpress.com)

Two Tales of AI Takeover: My Doubts

Violet Hour · 5 Mar 2024 15:51 UTC
30 points
8 comments · 29 min read · LW link

The bullseye framework: My case against AI doom

titotal · 30 May 2023 11:52 UTC
89 points
35 comments · 1 min read · LW link

Evolution provides no evidence for the sharp left turn

Quintin Pope · 11 Apr 2023 18:43 UTC
206 points
65 comments · 15 min read · LW link · 1 review

Deceptive Alignment and Homuncularity

16 Jan 2025 13:55 UTC
25 points
12 comments · 22 min read · LW link

Language Agents Reduce the Risk of Existential Catastrophe

28 May 2023 19:10 UTC
39 points
14 comments · 26 min read · LW link

A potentially high impact differential technological development area

Noosphere89 · 8 Jun 2023 14:33 UTC
5 points
2 comments · 2 min read · LW link

Why I am not an AI extinction cautionista

Shmi · 18 Jun 2023 21:28 UTC
22 points
40 comments · 2 min read · LW link

Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans)

titotal · 17 May 2023 11:58 UTC
5 points
3 comments · 1 min read · LW link

Linkpost: A tale of 2.5 orthogonality theses

DavidW · 13 Mar 2023 14:19 UTC
9 points
3 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Blake Richards on Why he is Skeptical of Existential Risk from AI

Michaël Trazzi · 14 Jun 2022 19:09 UTC
41 points
12 comments · 4 min read · LW link
(theinsideview.ai)

Linkpost: A Contra AI FOOM Reading List

DavidW · 13 Mar 2023 14:45 UTC
25 points
4 comments · 1 min read · LW link
(magnusvinding.com)

[Question] What Do AI Safety Pitches Not Get About Your Field?

Aris · 22 Sep 2022 21:27 UTC
28 points
3 comments · 1 min read · LW link

Linkpost: ‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting

DavidW · 13 Mar 2023 16:52 UTC
6 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism?

Peter Berggren · 6 Jul 2023 17:32 UTC
18 points
6 comments · 2 min read · LW link

Gettier Cases [repost]

Antigone · 3 Feb 2025 18:12 UTC
−4 points
4 comments · 2 min read · LW link

[Question] how do the CEOs respond to our concerns?

KvmanThinking · 11 Feb 2025 23:39 UTC
−7 points
3 comments · 1 min read · LW link

Deconstructing Bostrom’s Classic Argument for AI Doom

Nora Belrose · 11 Mar 2024 5:58 UTC
16 points
14 comments · 1 min read · LW link
(www.youtube.com)

[Link] Sarah Constantin: “Why I am Not An AI Doomer”

lbThingrb · 12 Apr 2023 1:52 UTC
61 points
13 comments · 1 min read · LW link
(sarahconstantin.substack.com)