Statement on Superintelligence—FLI Open Letter

plex · 22 Oct 2025 22:26 UTC
59 points
0 comments · 1 min read · LW link
(superintelligence-statement.org)

The Doomers Were Right

Algon · 22 Oct 2025 22:18 UTC
204 points
26 comments · 3 min read · LW link

Technical Acceleration Methods for AI Safety: Summary from October 2025 Symposium

Martin Leitgab · 22 Oct 2025 21:33 UTC
25 points
2 comments · 6 min read · LW link

Why AI alignment matters today

Mislav Jurić · 22 Oct 2025 21:27 UTC
6 points
0 comments · 4 min read · LW link

Any corrigibility naysayers outside of MIRI?

Max Harms · 22 Oct 2025 21:26 UTC
28 points
24 comments · 1 min read · LW link

Which side of the AI safety community are you in?

Max Tegmark · 22 Oct 2025 21:17 UTC
141 points
88 comments · 2 min read · LW link

Homomorphically encrypted consciousness and its implications

jessicata · 22 Oct 2025 20:27 UTC
35 points
48 comments · 12 min read · LW link
(unstableontology.com)

Dead-switches as AI safety tools

Jesper L. · 22 Oct 2025 19:57 UTC
2 points
6 comments · 5 min read · LW link

Consider donating to AI safety champion Scott Wiener

Eric Neyman · 22 Oct 2025 18:40 UTC
133 points
9 comments · 18 min read · LW link
(ericneyman.wordpress.com)

Postrationality: An Oral History

Gordon Seidoh Worley · 22 Oct 2025 16:10 UTC
44 points
4 comments · 30 min read · LW link
(www.uncertainupdates.com)

Penny’s Hands

Tomás B. · 22 Oct 2025 16:09 UTC
70 points
7 comments · 16 min read · LW link

Is 90% of code at Anthropic being written by AIs?

ryan_greenblatt · 22 Oct 2025 14:50 UTC
91 points
14 comments · 5 min read · LW link

How Well Does RL Scale?

Toby_Ord · 22 Oct 2025 13:16 UTC
131 points
22 comments · 7 min read · LW link

LLM Self-Reference Language in Multilingual vs English-Centric Models

dwmd · 22 Oct 2025 12:44 UTC
4 points
0 comments · 6 min read · LW link

The Cloud industry architecture [Infra-Platform-App] is unlikely to replicate for AI

Armchair Descending · 22 Oct 2025 8:28 UTC
1 point
0 comments · 2 min read · LW link

The Perpetual Technological Cage

Hector Perez Arenas · 22 Oct 2025 8:15 UTC
6 points
2 comments · 1 min read · LW link
(networksocieties.com)

Utopiography Interview

plex · 22 Oct 2025 8:03 UTC
32 points
0 comments · 45 min read · LW link

White House OSTP AI Deregulation Public Comment Period Ends Oct. 27

Zack_M_Davis · 22 Oct 2025 6:18 UTC
42 points
1 comment · 1 min read · LW link

July-October 2025 Progress in Guaranteed Safe AI

Quinn · 22 Oct 2025 2:30 UTC
15 points
2 comments · 7 min read · LW link
(gsai.substack.com)

In remembrance of Sonnet ‘3.6’

kromem · 22 Oct 2025 0:43 UTC
14 points
9 comments · 2 min read · LW link

Stratified Utopia

Cleo Nardo · 21 Oct 2025 19:09 UTC
73 points
8 comments · 11 min read · LW link

Early stage goal-directedness

Raemon · 21 Oct 2025 17:41 UTC
20 points
8 comments · 3 min read · LW link

On Dwarkesh Patel’s Podcast With Andrej Karpathy

Zvi · 21 Oct 2025 16:00 UTC
38 points
6 comments · 31 min read · LW link
(thezvi.wordpress.com)

Samuel x Bhishma—Superintelligence by 2030?

samuelshadrach · 21 Oct 2025 15:32 UTC
6 points
0 comments · 3 min read · LW link
(youtu.be)

Remarks on Bayesian studies from 1963

dynomight · 21 Oct 2025 12:47 UTC
37 points
1 comment · 1 min read · LW link

Why deep space programs select for calm agreeable introverted candidates

David Sun · 21 Oct 2025 10:22 UTC
−4 points
0 comments · 15 min read · LW link

⿻ Symbiogenesis vs. Convergent Consequentialism

21 Oct 2025 10:10 UTC
60 points
5 comments · 20 min read · LW link

How the Human Lens Shapes Machine Minds

21 Oct 2025 9:08 UTC
2 points
0 comments · 5 min read · LW link

21st Century Civilization curriculum

Richard_Ngo · 21 Oct 2025 7:43 UTC
35 points
10 comments · 1 min read · LW link
(www.21civ.com)

Ramblings on the Self Indication Assumption

Angela Pretorius · 21 Oct 2025 5:45 UTC
5 points
1 comment · 2 min read · LW link

An epistemic theory of populism [link post to Joseph Heath]

Siebe · 21 Oct 2025 5:30 UTC
12 points
3 comments · 1 min read · LW link
(open.substack.com)

EU explained in 10 minutes

Martin Sustrik · 21 Oct 2025 4:40 UTC
244 points
49 comments · 8 min read · LW link
(www.250bpm.com)

“Tilakkhana”, Gwern [poem]

gwern · 21 Oct 2025 2:39 UTC
22 points
0 comments · 1 min read · LW link
(gwern.net)

Attending Your First Contra Dance in a Fragrance-Compliant Manner

jefftk · 21 Oct 2025 0:40 UTC
4 points
1 comment · 3 min read · LW link
(www.jefftk.com)

The If Anyone Builds It, Everyone Dies march assurance contract should indicate how many signatures it has received

Peter Berggren · 20 Oct 2025 19:38 UTC
11 points
0 comments · 1 min read · LW link

[Thought Experiment] If Human Extinction “Improves the World,” Should We Oppose It? Species Bias and the Utilitarian Challenge

satopi · 20 Oct 2025 17:38 UTC
−17 points
4 comments · 1 min read · LW link

Can you find the steganographically hidden message?

Kei Nishimura-Gasparian · 20 Oct 2025 17:29 UTC
48 points
2 comments · 7 min read · LW link

How cause-area specific conferences can strengthen the EA community

MariusWenk · 20 Oct 2025 17:02 UTC
2 points
2 comments · 2 min read · LW link

A Mathematical Model of Alcor’s Economic Survival

Syd Lonreiro_ · 20 Oct 2025 16:24 UTC
1 point
3 comments · 3 min read · LW link

How Stuart Buck funded the replication crisis

Elizabeth · 20 Oct 2025 15:51 UTC
56 points
2 comments · 1 min read · LW link
(goodscience.substack.com)

Secular Solstice: Bremen (Dec 13)

20 Oct 2025 15:49 UTC
4 points
4 comments · 1 min read · LW link

Contra-Zombies? Contra-Zombies!: Chalmers as a parallel to Hume

Shiva's Right Foot · 20 Oct 2025 14:56 UTC
−2 points
6 comments · 5 min read · LW link

Consider donating to Alex Bores, author of the RAISE Act

Eric Neyman · 20 Oct 2025 14:50 UTC
259 points
20 comments · 18 min read · LW link
(ericneyman.wordpress.com)

Bubble, Bubble, Toil and Trouble

Zvi · 20 Oct 2025 13:22 UTC
78 points
7 comments · 15 min read · LW link
(thezvi.wordpress.com)

Considerations around career costs of political donations

GradientDissenter · 20 Oct 2025 12:51 UTC
97 points
17 comments · 15 min read · LW link

A Cup of Blue Tea

Rudaiba · 20 Oct 2025 11:22 UTC
−2 points
0 comments · 4 min read · LW link

A Bayesian nightmare: Instagram and Sampling bias

Abhay Chowdhry · 20 Oct 2025 5:00 UTC
8 points
0 comments · 5 min read · LW link

Uncommon Utilitarianism #2: Positive Utilitarianism

Alice Blair · 20 Oct 2025 4:17 UTC
6 points
1 comment · 2 min read · LW link

[Question] Final-Exam-Tier Medical Problem With Handwavy Reasons We Can’t Just Call A Licensed M.D.

Lorec · 20 Oct 2025 1:01 UTC
25 points
10 comments · 3 min read · LW link

Humanity Learned Almost Nothing From COVID-19

niplav · 19 Oct 2025 21:24 UTC
163 points
38 comments · 4 min read · LW link