The IABIED statement is not literally true

David Matolcsi · 18 Oct 2025 23:15 UTC
20 points
27 comments · 8 min read · LW link

Libraries need more books

Algon · 18 Oct 2025 22:53 UTC
26 points
7 comments · 3 min read · LW link

Conjecture: Emergent φ is provable in Large Language Models

BarnicleBarn · 18 Oct 2025 22:38 UTC
−3 points
0 comments · 10 min read · LW link

In defense of the goodness of ideas

Jordan Arel · 18 Oct 2025 21:59 UTC
6 points
2 comments · 4 min read · LW link

Sample Interesting First

Tomáš Gavenčiak · 18 Oct 2025 20:09 UTC
7 points
2 comments · 3 min read · LW link

Comma v0.1 converted to GGUF

Trevor Hill-Hand · 18 Oct 2025 15:54 UTC
9 points
0 comments · 6 min read · LW link

Using Bayes’ Theorem to determine Optimal Protein Intake

neptuneio · 18 Oct 2025 14:58 UTC
4 points
4 comments · 2 min read · LW link

Selected Graphics Showing Progress towards AGI

Chris_Leong · 18 Oct 2025 14:37 UTC
14 points
6 comments · 1 min read · LW link

Networking for Spies: Translating a Cyrillic Text with Claude Code

Austin Morrissey · 18 Oct 2025 7:46 UTC
8 points
0 comments · 17 min read · LW link

Space colonization and scientific discovery could be mandatory for successful defensive AI

otto.barten · 18 Oct 2025 4:57 UTC
16 points
0 comments · 1 min read · LW link

Memory Decoding Journal Club: Functional connectomics reveals general wiring rule in mouse visual cortex

Devin Ward · 17 Oct 2025 23:33 UTC
4 points
0 comments · 1 min read · LW link

Meditation is dangerous

Algon · 17 Oct 2025 22:52 UTC
155 points
40 comments · 4 min read · LW link

I handbound a book of Janus's essays for my girlfriend

datawitch · 17 Oct 2025 17:38 UTC
22 points
1 comment · 1 min read · LW link

The Dark Arts of Tokenization or: How I learned to start worrying and love LLMs’ undecoded outputs

Lovre · 17 Oct 2025 16:43 UTC
42 points
10 comments · 26 min read · LW link

How To Vastly Increase Your Charitable Impact

Bentham's Bulldog · 17 Oct 2025 15:46 UTC
1 point
3 comments · 2 min read · LW link

Nontrivial pillars of IABIED

Cole Wyeth · 17 Oct 2025 15:21 UTC
23 points
3 comments · 3 min read · LW link

What Success Might Look Like

Richard Juggins · 17 Oct 2025 14:17 UTC
22 points
6 comments · 15 min read · LW link

I’m an EA who benefitted from rationality

juliawise · 17 Oct 2025 12:27 UTC
62 points
4 comments · 2 min read · LW link

AI #138 Part 2: Watch Out For Documents

Zvi · 17 Oct 2025 11:50 UTC
40 points
8 comments · 45 min read · LW link
(thezvi.wordpress.com)

Mess AI – deliberate corruption of the training data to prevent superintelligence

avturchin · 17 Oct 2025 9:23 UTC
1 point
0 comments · 2 min read · LW link

Activation Plateaus: Where and How They Emerge

17 Oct 2025 5:48 UTC
36 points
0 comments · 8 min read · LW link

Can We Simulate Meiosis to Create Digital Gametes — and Are the Results Your Biological Offspring?

GJ · 17 Oct 2025 3:55 UTC
2 points
8 comments · 1 min read · LW link

Steven Adler reports that NVIDIA is attempting to stifle pro-export-control speech

Elizabeth · 17 Oct 2025 3:05 UTC
26 points
0 comments · 1 min read · LW link
(stevenadler.substack.com)

Spectral Taxonomy of QK Circuits in Transformer Models

Shantanu Darveshi · 17 Oct 2025 2:18 UTC
7 points
0 comments · 5 min read · LW link

Book Review: To Explain the World

Algon · 16 Oct 2025 23:00 UTC
23 points
5 comments · 6 min read · LW link

AISN #64: New AGI Definition and Senate Bill Would Establish Liability for AI Harms

16 Oct 2025 18:06 UTC
5 points
1 comment · 5 min read · LW link
(aisafety.substack.com)

Finding Features in Neural Networks with the Empirical NTK

jylin04 · 16 Oct 2025 18:04 UTC
35 points
1 comment · 5 min read · LW link

Learning from the Luddites: Implications for a modern AI labour movement

JanWehner · 16 Oct 2025 17:11 UTC
12 points
0 comments · 8 min read · LW link

Reducing risk from scheming by studying trained-in scheming behavior

ryan_greenblatt · 16 Oct 2025 16:16 UTC
32 points
0 comments · 11 min read · LW link

Job Openings: SWE, PM, and Grants Coordinator to help improve grant-making

Ethan Ashkie · 16 Oct 2025 16:14 UTC
13 points
0 comments · 1 min read · LW link
(survivalandflourishing.com)

AI #138 Part 1: The People Demand Erotic Sycophants

Zvi · 16 Oct 2025 15:41 UTC
25 points
7 comments · 46 min read · LW link
(thezvi.wordpress.com)

Cheap Labour Everywhere

Morpheus · 16 Oct 2025 13:15 UTC
136 points
34 comments · 2 min read · LW link

Quantum immortality and AI risk – the fate of a lonely survivor

avturchin · 16 Oct 2025 11:40 UTC
8 points
0 comments · 1 min read · LW link

The Complex Universe Theory of AI Psychology

Andrew Tomazos · 16 Oct 2025 4:31 UTC
0 points
0 comments · 1 min read · LW link
(www.tomazos.com)

[CS 2881r AI Safety] [Week 5] Content Policies

16 Oct 2025 4:27 UTC
1 point
0 comments · 12 min read · LW link

Halfhaven Digest #2

Taylor G. Lunt · 16 Oct 2025 3:18 UTC
6 points
0 comments · 3 min read · LW link

Fragrance Free Confusion

jefftk · 16 Oct 2025 2:50 UTC
17 points
13 comments · 3 min read · LW link
(www.jefftk.com)

The Three Levels of Agency

Taylor G. Lunt · 16 Oct 2025 2:14 UTC
15 points
1 comment · 5 min read · LW link

Memory Decoding Journal Club: Functional connectomics reveals general wiring rule in mouse visual cortex

Devin Ward · 16 Oct 2025 1:56 UTC
1 point
0 comments · 1 min read · LW link

Electronics Mechanic → AI Safety Researcher: A 30-Month Journey to Model Welfare

probablyjonah · 16 Oct 2025 0:43 UTC
2 points
0 comments · 3 min read · LW link

Some astral energy extraction methods

Algon · 15 Oct 2025 23:22 UTC
24 points
3 comments · 2 min read · LW link

AI-202X-slowdown: can CoT-based AIs become capable of aligning the ASI?

StanislavKrym · 15 Oct 2025 22:46 UTC
18 points
0 comments · 6 min read · LW link

Monthly Roundup #35: October 2025

Zvi · 15 Oct 2025 19:50 UTC
24 points
1 comment · 49 min read · LW link
(thezvi.wordpress.com)

Rogue internal deployments via external APIs

15 Oct 2025 19:34 UTC
34 points
4 comments · 6 min read · LW link

Chemical Telescopes And The Process Of Science

sonicrocketman · 15 Oct 2025 18:05 UTC
5 points
0 comments · 4 min read · LW link
(brianschrader.com)

Updating the name of Open Philanthropy’s AI program

lukeprog · 15 Oct 2025 17:45 UTC
7 points
0 comments · 2 min read · LW link

Open Global Investment: Comparisons and Criticisms

Algon · 15 Oct 2025 17:20 UTC
15 points
0 comments · 4 min read · LW link
(aisafety.info)

We are too comfortable with AI “magic”

Baybar · 15 Oct 2025 17:00 UTC
−2 points
0 comments · 6 min read · LW link

Until the stars burn out? Assessing the stakes of AGI lock-in

MattAlexander · 15 Oct 2025 16:38 UTC
6 points
0 comments · 6 min read · LW link

Are calm introverts (like East Asians) uniquely suited for space travel & Mars missions?

David Sun · 15 Oct 2025 16:19 UTC
−4 points
2 comments · 1 min read · LW link
(davidsun.substack.com)