A Possible Future: Decentralized AGI Proliferation

Dev.Errata, 23 Sep 2025 22:24 UTC
11 points
7 comments, 2 min read

Munich, Bavaria “If Anyone Builds It” reading group

hilll, 23 Sep 2025 22:03 UTC
11 points
0 comments, 1 min read

Prague “If Anyone Builds It” reading group

Marek Dědič, 23 Sep 2025 21:49 UTC
14 points
0 comments, 1 min read

Draconian measures can increase the risk of irrevocable catastrophe

dsj, 23 Sep 2025 21:40 UTC
22 points
2 comments, 2 min read
(thedavidsj.substack.com)

[Question] What the discontinuity is, if not FOOM?

TAG, 23 Sep 2025 19:30 UTC
18 points
14 comments, 3 min read

Samuel Shadrach Interviewed

samuelshadrach, 23 Sep 2025 18:58 UTC
9 points
0 comments, 1 min read

Statement of Support for “If Anyone Builds It, Everyone Dies”

Liron, 23 Sep 2025 17:51 UTC
67 points
34 comments, 1 min read

Notes on fatalities from AI takeover

ryan_greenblatt, 23 Sep 2025 17:18 UTC
55 points
60 comments, 8 min read

Zendo for large groups

philh, 23 Sep 2025 17:10 UTC
13 points
1 comment, 1 min read
(reasonableapproximation.net)

Synthesizing Standalone World-Models, Part 1: Abstraction Hierarchies

Thane Ruthenis, 23 Sep 2025 17:01 UTC
23 points
10 comments, 23 min read

A Compatibilist Definition of Santa Claus

Shiva's Right Foot, 23 Sep 2025 16:57 UTC
18 points
9 comments, 1 min read

Ethics-Based Refusals Without Ethics-Based Refusal Training

1a3orn, 23 Sep 2025 16:35 UTC
91 points
2 comments, 19 min read

Why Smarter Doesn’t Mean Kinder: Orthogonality and Instrumental Convergence

Alexander Müller, 23 Sep 2025 16:06 UTC
6 points
0 comments, 6 min read

More Reactions to If Anyone Builds It, Everyone Dies

Zvi, 23 Sep 2025 16:00 UTC
33 points
20 comments, 20 min read
(thezvi.wordpress.com)

Ontological Cluelessness

23 Sep 2025 14:31 UTC
14 points
12 comments, 4 min read

We are likely in an AI overhang, and this is bad.

Gabriel Alfour, 23 Sep 2025 14:15 UTC
55 points
16 comments, 1 min read
(cognition.cafe)

Prompt optimization can enable AI control research

23 Sep 2025 12:46 UTC
35 points
3 comments, 9 min read

Two Mathematical Perspectives on AI Hallucinations and Uncertainty

LorenzoPacchiardi, 23 Sep 2025 11:06 UTC
0 points
1 comment, 3 min read

Accelerando as a “Slow, Reasonably Nice Takeoff” Story

Raemon, 23 Sep 2025 2:15 UTC
71 points
19 comments, 30 min read

On failure, and keeping doors open; closing thoughts

jimmy, 23 Sep 2025 1:11 UTC
7 points
0 comments, 10 min read

GPT-1 was a comedic genius

anaguma, 22 Sep 2025 22:19 UTC
5 points
3 comments, 4 min read

D&D.Sci: Serial Healers [Evaluation & Ruleset]

abstractapplic, 22 Sep 2025 20:02 UTC
40 points
7 comments, 4 min read

Research Agenda: Synthesizing Standalone World-Models

Thane Ruthenis, 22 Sep 2025 19:06 UTC
69 points
28 comments, 11 min read

Global Call for AI Red Lines—Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures

Charbel-Raphaël, 22 Sep 2025 18:22 UTC
333 points
27 comments, 6 min read

H1-B And The $100k Fee

Zvi, 22 Sep 2025 18:10 UTC
30 points
1 comment, 17 min read
(thezvi.wordpress.com)

Why I don’t believe Superalignment will work

Simon Lermen, 22 Sep 2025 17:10 UTC
44 points
6 comments, 5 min read

Video and transcript of talk on giving AIs safe motivations

Joe Carlsmith, 22 Sep 2025 16:43 UTC
12 points
0 comments, 50 min read

Rejecting Violence as an AI Safety Strategy

James_Miller, 22 Sep 2025 16:34 UTC
58 points
5 comments, 3 min read

Focus transparency on risk reports, not safety cases

ryan_greenblatt, 22 Sep 2025 15:27 UTC
47 points
3 comments, 6 min read

The world’s first frontier AI regulation is surprisingly thoughtful: the EU’s Code of Practice

MKodama, 22 Sep 2025 15:23 UTC
75 points
0 comments, 15 min read

Some of the ways the IABIED plan can backfire

mishka, 22 Sep 2025 15:02 UTC
19 points
16 comments, 2 min read

Relating to AI, Relating to Ourselves

22 Sep 2025 8:18 UTC
2 points
1 comment, 2 min read

Warmth, Light, Flame

Alice Blair, 22 Sep 2025 4:19 UTC
37 points
0 comments, 4 min read

This is a review of the reviews

Recurrented, 22 Sep 2025 3:11 UTC
184 points
57 comments, 2 min read

Incommensurability

Christopher James Hart, 22 Sep 2025 2:21 UTC
26 points
6 comments, 1 min read

You Can’t Really Bet on Doom

Jack_S, 21 Sep 2025 23:27 UTC
8 points
1 comment, 7 min read
(torchestogether.substack.com)

The Only Red Line

Jason Reid, 21 Sep 2025 22:40 UTC
13 points
1 comment, 1 min read

Do LLMs Change Their Minds About Their Users… and Know It?

Ishaan Sinha, 21 Sep 2025 22:38 UTC
10 points
2 comments, 14 min read

Metacrisis as a Framework for AI Governance

Jonah Wilberg, 21 Sep 2025 21:30 UTC
20 points
0 comments, 8 min read

Is there not legitimate disagreement about this premise of IABI,ED?

enfascination, 21 Sep 2025 20:47 UTC
5 points
7 comments, 1 min read

Evals in the Age of Jarvis

Dinkar Juyal, 21 Sep 2025 19:27 UTC
3 points
2 comments, 3 min read

[Question] Could China Unilaterally Cause an AI Pause?

Maloew, 21 Sep 2025 18:37 UTC
22 points
2 comments, 1 min read

What do people mean when they say that something will become more like a utility maximizer?

Nina Panickssery, 21 Sep 2025 16:03 UTC
40 points
7 comments, 2 min read

And Yet, Defend your Thoughts from AI Writing

Michael Samoilov, 21 Sep 2025 15:52 UTC
60 points
17 comments, 6 min read
(open.substack.com)

A parable of realism and relativism

kwang, 21 Sep 2025 14:47 UTC
−7 points
2 comments, 2 min read
(kevw.substack.com)

ACX/LW October Paris Meetup

Lucie Philippon, 21 Sep 2025 11:37 UTC
5 points
0 comments, 1 min read

Day #8 Hunger Strike, Protest Against Superintelligent AI

samuelshadrach, 21 Sep 2025 5:58 UTC
13 points
4 comments, 2 min read

FTX, Golden Geese, and The Widow’s Mite

Elizabeth, 20 Sep 2025 18:30 UTC
21 points
1 comment, 7 min read
(acesounderglass.com)

The Case for a Pro-AI-Safety Political Party in the US

Oliver Kuperman, 20 Sep 2025 16:35 UTC
11 points
2 comments, 21 min read

Contra Collier on IABIED

Max Harms, 20 Sep 2025 15:55 UTC
227 points
51 comments, 20 min read