Chaos Alone is No Bar to Superintelligence

Algon · 6 Oct 2025 22:45 UTC
11 points
0 comments · 2 min read · LW link
(aisafety.info)

We won’t get AIs smart enough to solve alignment but too dumb to rebel

Joe Rogero · 6 Oct 2025 21:49 UTC
28 points
16 comments · 5 min read · LW link

Notes on the need to lose

Algon · 6 Oct 2025 21:27 UTC
2 points
6 comments · 2 min read · LW link

Excerpts from my neuroscience to-do list

Steven Byrnes · 6 Oct 2025 21:05 UTC
26 points
1 comment · 4 min read · LW link

Experience Report—ML4Good Bootcamp Singapore, Sep ’25

NurAlam · 6 Oct 2025 18:49 UTC
2 points
0 comments · 4 min read · LW link

Which differences between sandbagging evaluations and sandbagging safety research are important for control?

lennie · 6 Oct 2025 18:20 UTC
1 point
0 comments · 11 min read · LW link

Gradual Disempowerment Monthly Roundup

Raymond Douglas · 6 Oct 2025 15:36 UTC
93 points
7 comments · 6 min read · LW link

Subliminal Learning, the Lottery-Ticket Hypothesis, and Mode Connectivity

David Africa · 6 Oct 2025 15:26 UTC
16 points
3 comments · 7 min read · LW link

The Origami Men

Tomás B. · 6 Oct 2025 15:25 UTC
138 points
9 comments · 16 min read · LW link

Medical Roundup #5

Zvi · 6 Oct 2025 15:10 UTC
26 points
2 comments · 26 min read · LW link
(thezvi.wordpress.com)

Sandbagging: distinguishing detection of underperformance from incrimination, and the implications for downstream interventions.

lennie · 6 Oct 2025 14:00 UTC
1 point
0 comments · 8 min read · LW link

Why I think ECL shouldn’t make you update your cause prio

Jim Buhler · 6 Oct 2025 13:01 UTC
2 points
0 comments · 11 min read · LW link

[Question] Did Tyler Robinson carry his rifle as claimed by the government?

ChristianKl · 6 Oct 2025 12:46 UTC
4 points
9 comments · 1 min read · LW link

AI Science Companies: Evidence AGI Is Near

Josh Snider · 6 Oct 2025 10:13 UTC
5 points
3 comments · 1 min read · LW link
(www.joshuasnider.com)

LLMs one-box when in a “hostile telepath” version of Newcomb’s Paradox, except for the one that beat the predictor

Kaj_Sotala · 6 Oct 2025 8:44 UTC
47 points
6 comments · 17 min read · LW link

Alignment Faking Demo for Congressional Staffers

Alice Blair · 6 Oct 2025 1:44 UTC
19 points
0 comments · 3 min read · LW link

Do Things for as Many Reasons as Possible

Philipreal · 6 Oct 2025 0:28 UTC
35 points
1 comment · 2 min read · LW link

One Does Not Simply Walk Away from Omelas

Taylor G. Lunt · 6 Oct 2025 0:04 UTC
4 points
5 comments · 7 min read · LW link

The quotation mark

Maxwell Peterson · 5 Oct 2025 23:23 UTC
19 points
8 comments · 13 min read · LW link

The Sadism Spectrum and How to Access It

Dawn Drescher · 5 Oct 2025 23:09 UTC
13 points
2 comments · 20 min read · LW link
(impartial-priorities.org)

Maybe social media algorithms don’t suck

Algon · 5 Oct 2025 18:47 UTC
64 points
18 comments · 3 min read · LW link

Base64Bench: How good are LLMs at base64, and why care about it?

richbc · 5 Oct 2025 18:07 UTC
31 points
6 comments · 11 min read · LW link

[Question] What can Canadians do to help end the AI arms race?

Tom938 · 5 Oct 2025 18:03 UTC
8 points
7 comments · 2 min read · LW link

17 years old, self-taught state control—looking for people who actually get this

Cornelius Caspian · 5 Oct 2025 18:02 UTC
−3 points
3 comments · 1 min read · LW link

Behavior Best-of-N achieves Near Human Performance on Computer Tasks

Baybar · 5 Oct 2025 16:53 UTC
6 points
0 comments · 3 min read · LW link

Accelerating AI Safety Progress via Technical Methods - Calling Researchers, Founders, and Funders

Martin Leitgab · 5 Oct 2025 16:40 UTC
1 point
0 comments · 1 min read · LW link

Mini-Symposium on Accelerating AI Safety Progress via Technical Methods—Hybrid In-Person and Virtual

Martin Leitgab · 5 Oct 2025 16:05 UTC
1 point
0 comments · 1 min read · LW link

[Question] How likely are “s-risks” (large-scale suffering outcomes) from unaligned AI compared to extinction risks?

CanYouFeelTheBenefits · 5 Oct 2025 14:38 UTC
14 points
1 comment · 1 min read · LW link

LLMs are badly misaligned

Joe Rogero · 5 Oct 2025 14:00 UTC
27 points
25 comments · 3 min read · LW link

The Counterfactual Quiet AGI Timeline

Davidmanheim · 5 Oct 2025 9:09 UTC
64 points
5 comments · 9 min read · LW link

AISafety.com Reading Group session 328

Søren Elverlin · 5 Oct 2025 7:51 UTC
5 points
0 comments · 1 min read · LW link

How the NanoGPT Speedrun WR dropped by 20% in 3 months

larry-dial · 5 Oct 2025 1:05 UTC
26 points
9 comments · 9 min read · LW link

a quick thought about AI alignment

foodforthought · 5 Oct 2025 0:51 UTC
10 points
4 comments · 1 min read · LW link

Making Your Pain Worse can Get You What You Want

Logan Riggs · 5 Oct 2025 0:19 UTC
76 points
4 comments · 3 min read · LW link

Markets in Democracy: What happens when you can sell your vote?

Mike Evron · 4 Oct 2025 23:59 UTC
4 points
20 comments · 3 min read · LW link

$250 bounties for the best short stories set in our near future world & Brooklyn event to select them

Ramon Gonzalez · 4 Oct 2025 22:49 UTC
10 points
0 comments · 2 min read · LW link

What I’ve Learnt About How to Sleep

Algon · 4 Oct 2025 20:52 UTC
25 points
7 comments · 2 min read · LW link

The ‘Magic’ of LLMs: The Function of Language

Joseph Banks · 4 Oct 2025 17:45 UTC
13 points
0 comments · 7 min read · LW link

Open Philanthropy’s Biosecurity and Pandemic Preparedness Team Is Hiring and Seeking New Grantees

miriam.hinthorn · 4 Oct 2025 17:42 UTC
3 points
0 comments · 1 min read · LW link

Consider Small Walks at Work

Morpheus · 4 Oct 2025 11:53 UTC
10 points
0 comments · 3 min read · LW link

Where does Sonnet 4.5’s desire to “not get too comfortable” come from?

Kaj_Sotala · 4 Oct 2025 10:19 UTC
91 points
16 comments · 64 min read · LW link

A Workflow for System Prompted Model Organisms

michaelwaves · 3 Oct 2025 21:39 UTC
1 point
0 comments · 3 min read · LW link

Goodness is harder to achieve than competence

Joe Rogero · 3 Oct 2025 21:32 UTC
22 points
0 comments · 3 min read · LW link

Memory Decoding Journal Club: Connectomic traces of Hebbian plasticity in the entorhinal-hippocampal system

Devin Ward · 3 Oct 2025 21:24 UTC
1 point
0 comments · 1 min read · LW link

Good is a smaller target than smart

Joe Rogero · 3 Oct 2025 21:04 UTC
21 points
0 comments · 2 min read · LW link

Making Sense of Consciousness Part 6: Perceptions of Disembodiment

sarahconstantin · 3 Oct 2025 20:40 UTC
27 points
0 comments · 8 min read · LW link
(sarahconstantin.substack.com)

Recent AI Experiences

abramdemski · 3 Oct 2025 19:32 UTC
54 points
1 comment · 6 min read · LW link

Our Experience Running Independent Evaluations on LLMs: What Have We Learned?

MAlvarado · 3 Oct 2025 18:26 UTC
7 points
1 comment · 5 min read · LW link

Do One New Thing A Day To Solve Your Problems

Algon · 3 Oct 2025 17:08 UTC
102 points
5 comments · 2 min read · LW link

ENAIS is looking for an Executive Director (apply by 20th October)

3 Oct 2025 15:29 UTC
11 points
0 comments · 2 min read · LW link