It’s Okay to Feel Bad for a Bit

moridinamael · 10 May 2025 23:24 UTC
141 points
34 comments · 3 min read · LW link

G.D. as Capitalist Evolution, and the claim for humanity’s (temporary) upper hand

Martin Vlach · 10 May 2025 21:18 UTC
8 points
3 comments · 1 min read · LW link

Book Review: “Encounters with Einstein” by Heisenberg

Baram Sosis · 10 May 2025 20:55 UTC
31 points
6 comments · 7 min read · LW link

Where is the YIMBY movement for healthcare?

jasoncrawford · 10 May 2025 20:36 UTC
20 points
10 comments · 2 min read · LW link
(newsletter.rootsofprogress.org)

Become a Superintelligence Yourself

Yaroslav Granowski · 10 May 2025 20:20 UTC
2 points
1 comment · 5 min read · LW link

A Look Inside a Frequentist

Eggs · 10 May 2025 15:18 UTC
5 points
10 comments · 3 min read · LW link

Open-source weaponry

samuelshadrach · 10 May 2025 13:11 UTC
3 points
0 comments · 3 min read · LW link
(samuelshadrach.com)

Glass box learners want to be black box

Cole Wyeth · 10 May 2025 11:05 UTC
49 points
10 comments · 4 min read · LW link

Takes and loose predictions on AI progress and some key problems

zef · 10 May 2025 10:11 UTC
5 points
0 comments · 5 min read · LW link
(halcyoncyborg.substack.com)

Corbent – A Master Plan for Next-Generation Direct Air Capture

Rudaiba · 10 May 2025 4:09 UTC
11 points
15 comments · 19 min read · LW link

What if we just…didn’t build AGI? An Argument Against Inevitability

Nate Sharpe · 10 May 2025 3:37 UTC
8 points
7 comments · 14 min read · LW link
(natezsharpe.substack.com)

Mind the Coherence Gap: Lessons from Steering Llama with Goodfire

eitan sprejer · 9 May 2025 21:29 UTC
4 points
1 comment · 6 min read · LW link

My Experience With EMDR

Sable · 9 May 2025 21:25 UTC
22 points
0 comments · 11 min read · LW link
(affablyevil.substack.com)

AI’s Hidden Game: Understanding Strategic Deception in AI and Why It Matters for Our Future

EmilyinAI · 9 May 2025 20:01 UTC
4 points
0 comments · 6 min read · LW link

Muddling Through Some Thoughts on the Nature of Historiography

E.G. Blee-Goldman · 9 May 2025 19:04 UTC
2 points
0 comments · 4 min read · LW link

A Guide to AI 2027

koenrane · 9 May 2025 17:14 UTC
0 points
1 comment · 28 min read · LW link

Let’s stop making “Intelligence scale” graphs with humans and AI

Expertium · 9 May 2025 16:01 UTC
3 points
15 comments · 1 min read · LW link

Slow corporations as an intuition pump for AI R&D automation

9 May 2025 14:49 UTC
91 points
23 comments · 9 min read · LW link

Cheaters Gonna Cheat Cheat Cheat Cheat Cheat

Zvi · 9 May 2025 14:30 UTC
55 points
4 comments · 22 min read · LW link
(thezvi.wordpress.com)

Humans vs LLM, memes as theorems

Yaroslav Granowski · 9 May 2025 13:26 UTC
1 point
0 comments · 1 min read · LW link

Moving towards a question-based planning framework, instead of task lists

casualphysicsenjoyer · 9 May 2025 12:18 UTC
4 points
1 comment · 8 min read · LW link
(substack.com)

Jim Babcock’s Mainline Doom Scenario: Human-Level AI Can’t Control Its Successor

9 May 2025 5:20 UTC
30 points
4 comments · 62 min read · LW link
(www.youtube.com)

Attend the 2025 Reproductive Frontiers Summit, June 10-12

9 May 2025 5:17 UTC
59 points
0 comments · 3 min read · LW link

Interest In Conflict Is Instrumentally Convergent

Screwtape · 9 May 2025 2:16 UTC
66 points
58 comments · 10 min read · LW link

Is ChatGPT actually fixed now?

sjadler · 8 May 2025 23:34 UTC
17 points
0 comments · 1 min read · LW link
(stevenadler.substack.com)

Post EAG London AI x-Safety Co-working Retreat

plex · 8 May 2025 23:00 UTC
10 points
0 comments · 1 min read · LW link

a brief critique of reduction

Vadim Golub · 8 May 2025 22:43 UTC
−17 points
4 comments · 2 min read · LW link

Video & transcript: Challenges for Safe & Beneficial Brain-Like AGI

Steven Byrnes · 8 May 2025 21:11 UTC
26 points
0 comments · 18 min read · LW link

Appendix: Interpretable by Design—Constraint Sets with Disjoint Limit Points

Ronak_Mehta · 8 May 2025 21:09 UTC
2 points
0 comments · 2 min read · LW link

Interpretable by Design—Constraint Sets with Disjoint Limit Points

Ronak_Mehta · 8 May 2025 21:08 UTC
24 points
2 comments · 9 min read · LW link
(ronakrm.github.io)

Is there a Half-Life for the Success Rates of AI Agents?

Matrice Jacobine · 8 May 2025 20:10 UTC
8 points
0 comments · 1 min read · LW link
(www.tobyord.com)

Misalignment and Strategic Underperformance: An Analysis of Sandbagging and Exploration Hacking

8 May 2025 19:06 UTC
77 points
3 comments · 15 min read · LW link

Behold the Pale Child (escaping Moloch’s Mad Maze)

rogersbacon · 8 May 2025 16:36 UTC
8 points
16 comments · 11 min read · LW link
(www.secretorum.life)

An alignment safety case sketch based on debate

8 May 2025 15:02 UTC
57 points
21 comments · 25 min read · LW link
(arxiv.org)

Mechanistic Interpretability Via Learning Differential Equations: AI Safety Camp Project Intermediate Report.

8 May 2025 14:45 UTC
8 points
0 comments · 7 min read · LW link

AI #115: The Evil Applications Division

Zvi · 8 May 2025 13:40 UTC
32 points
3 comments · 62 min read · LW link
(thezvi.wordpress.com)

The Steganographic Potentials of Language Models

8 May 2025 11:23 UTC
9 points
0 comments · 1 min read · LW link

Our bet on whether the AI market will crash

8 May 2025 9:56 UTC
23 points
2 comments · 1 min read · LW link

Concept-anchored representation engineering for alignment

Sandy Fraser · 8 May 2025 8:59 UTC
5 points
0 comments · 3 min read · LW link

Orthogonality Thesis in layman’s terms.

Michael (@lethal_ai) · 8 May 2025 8:31 UTC
1 point
0 comments · 2 min read · LW link

Arkose may be closing, but you can help

Victoria Brook · 8 May 2025 7:28 UTC
8 points
0 comments · 2 min read · LW link

Healing powers of meditation or the role of attention in humoral regulation.

Yaroslav Granowski · 8 May 2025 6:48 UTC
7 points
0 comments · 1 min read · LW link

Orienting Toward Wizard Power

johnswentworth · 8 May 2025 5:23 UTC
564 points
147 comments · 5 min read · LW link

Relational Alignment: Trust, Repair, and the Emotional Work of AI

Priyanka Bharadwaj · 8 May 2025 2:44 UTC
3 points
0 comments · 3 min read · LW link

There’s more low-hanging fruit in interdisciplinary work thanks to LLMs

ChristianKl · 7 May 2025 19:48 UTC
26 points
2 comments · 1 min read · LW link

OpenAI Claims Nonprofit Will Retain Nominal Control

Zvi · 7 May 2025 19:40 UTC
65 points
4 comments · 11 min read · LW link
(thezvi.wordpress.com)

Social status games might have “compute weight class” in the future

Raemon · 7 May 2025 18:56 UTC
34 points
7 comments · 2 min read · LW link

Events of Low Probability: Buridan’s Principle

Nikita Gladkov · 7 May 2025 18:46 UTC
12 points
0 comments · 10 min read · LW link

[Question] Which journalists would you give quotes to? [one journalist per comment, agree vote for trustworthy]

Nathan Young · 7 May 2025 18:39 UTC
12 points
26 comments · 1 min read · LW link

Please Donate to CAIP (Post 1 of 7 on AI Governance)

Mass_Driver · 7 May 2025 17:13 UTC
119 points
20 comments · 33 min read · LW link