Social Dilemmas — public goods, free riders, and exploitation

James Stephen Brown · Mar 5, 2025, 11:31 PM
6 points
0 comments · 3 min read · LW link
(nonzerosum.games)

Introducing MASK: A Benchmark for Measuring Honesty in AI Systems

Mar 5, 2025, 10:56 PM
35 points
5 comments · 2 min read · LW link
(www.mask-benchmark.ai)

The Hardware-Software Framework: A New Perspective on Economic Growth with AI

Jakub Growiec · Mar 5, 2025, 7:59 PM
3 points
0 comments · 3 min read · LW link

NY State Has a New Frontier Model Bill (+quick takes)

henryj · Mar 5, 2025, 7:29 PM
9 points
0 comments · 1 min read · LW link
(www.henryjosephson.com)

The old memories tree

Yair Halberstadt · Mar 5, 2025, 7:03 PM
7 points
1 comment · 1 min read · LW link

Reply to Vitalik on d/acc

samuelshadrach · Mar 5, 2025, 6:55 PM
8 points
0 comments · 3 min read · LW link
(samuelshadrach.com)

A Bear Case: My Predictions Regarding AI Progress

Thane Ruthenis · Mar 5, 2025, 4:41 PM
362 points
157 comments · 9 min read · LW link

On the Rationality of Deterring ASI

Dan H · Mar 5, 2025, 4:11 PM
166 points
34 comments · 4 min read · LW link
(nationalsecurity.ai)

On OpenAI’s Safety and Alignment Philosophy

Zvi · Mar 5, 2025, 2:00 PM
58 points
5 comments · 17 min read · LW link
(thezvi.wordpress.com)

The Alignment Imperative: Act Now or Lose Everything

racinkc1 · Mar 5, 2025, 5:49 AM
−14 points
0 comments · 1 min read · LW link

Contra Dance Pay and Inflation

jefftk · Mar 5, 2025, 2:40 AM
12 points
0 comments · 2 min read · LW link
(www.jefftk.com)

*NYT Op-Ed* The Government Knows A.G.I. Is Coming

worse · Mar 5, 2025, 1:53 AM
11 points
12 comments · 2 min read · LW link
(www.nytimes.com)

Could this be an unusually good time to Earn To Give?

TomGardiner · Mar 4, 2025, 9:51 PM
−1 points
0 comments · 3 min read · LW link
(forum.effectivealtruism.org)

What is the best / most proper definition of “Feeling the AGI” there is?

Annapurna · Mar 4, 2025, 8:13 PM
8 points
5 comments · 1 min read · LW link

Energy Markets Temporal Arbitrage with Batteries

NickyP · Mar 4, 2025, 5:37 PM
21 points
3 comments · 16 min read · LW link

Distillation of Meta’s Large Concept Models Paper

NickyP · Mar 4, 2025, 5:33 PM
19 points
3 comments · 4 min read · LW link

Top AI safety newsletters, books, podcasts, etc – new AISafety.com resource

Mar 4, 2025, 5:01 PM
32 points
2 comments · 1 min read · LW link

2028 Should Not Be AI Safety’s First Foray Into Politics

Jesse Richardson · Mar 4, 2025, 4:46 PM
5 points
0 comments · 2 min read · LW link

[Question] How Much Are LLMs Actually Boosting Real-World Programmer Productivity?

Thane Ruthenis · Mar 4, 2025, 4:23 PM
137 points
52 comments · 3 min read · LW link

Validating against a misalignment detector is very different to training against one

mattmacdermott · Mar 4, 2025, 3:41 PM
33 points
4 comments · 4 min read · LW link

For scheming, we should first focus on detection and then on prevention

Marius Hobbhahn · Mar 4, 2025, 3:22 PM
47 points
7 comments · 5 min read · LW link

Progress links and short notes, 2025-03-03

jasoncrawford · Mar 4, 2025, 3:20 PM
8 points
0 comments · 6 min read · LW link
(newsletter.rootsofprogress.org)

Formation Research: Organisation Overview

alamerton · Mar 4, 2025, 3:03 PM
5 points
0 comments · 11 min read · LW link

On Writing #1

Zvi · Mar 4, 2025, 1:30 PM
37 points
2 comments · 15 min read · LW link
(thezvi.wordpress.com)

The Semi-Rational Militar Firefighter

P. João · Mar 4, 2025, 12:23 PM
72 points
10 comments · 2 min read · LW link

Observations About LLM Inference Pricing

Aaron_Scher · Mar 4, 2025, 3:03 AM
28 points
2 comments · 9 min read · LW link
(techgov.intelligence.org)

[Question] How much should I worry about the Atlanta Fed’s GDP estimates?

Brendan Long · Mar 4, 2025, 2:03 AM
16 points
2 comments · 1 min read · LW link

[Question] shouldn’t we try to get media attention?

KvmanThinking · Mar 4, 2025, 1:39 AM
6 points
1 comment · 1 min read · LW link

The Milton Friedman Model of Policy Change

JohnofCharleston · Mar 4, 2025, 12:38 AM
136 points
17 comments · 4 min read · LW link

The Compliment Sandwich 🥪 aka: How to criticize a normie without making them upset.

keltan · Mar 3, 2025, 11:15 PM
13 points
10 comments · 1 min read · LW link

AI Safety at the Frontier: Paper Highlights, February ’25

gasteigerjo · Mar 3, 2025, 10:09 PM
7 points
0 comments · 7 min read · LW link
(aisafetyfrontier.substack.com)

What goals will AIs have? A list of hypotheses

Daniel Kokotajlo · Mar 3, 2025, 8:08 PM
87 points
19 comments · 18 min read · LW link

Takeaways From Our Recent Work on SAE Probing

Mar 3, 2025, 7:50 PM
30 points
0 comments · 5 min read · LW link

Why People Commit White Collar Fraud (Ozy linkpost)

sapphire · Mar 3, 2025, 7:33 PM
22 points
1 comment · 1 min read · LW link
(thingofthings.substack.com)

[Question] Ask Me Anything—Samuel

samuelshadrach · Mar 3, 2025, 7:24 PM
0 points
0 comments · 1 min read · LW link

Expanding HarmBench: Investigating Gaps & Extending Adversarial LLM Testing

racinkc1 · Mar 3, 2025, 7:23 PM
1 point
0 comments · 1 min read · LW link

Could Advanced AI Accelerate the Pace of AI Progress? Interviews with AI Researchers

Mar 3, 2025, 7:05 PM
43 points
1 comment · 1 min read · LW link
(papers.ssrn.com)

Middle School Choice

jefftk · Mar 3, 2025, 4:10 PM
27 points
10 comments · 4 min read · LW link
(www.jefftk.com)

On GPT-4.5

Zvi · Mar 3, 2025, 1:40 PM
44 points
12 comments · 22 min read · LW link
(thezvi.wordpress.com)

Coalescence—Determinism In Ways We Care About

vitaliya · Mar 3, 2025, 1:20 PM
12 points
0 comments · 11 min read · LW link

Methods for strong human germline engineering

TsviBT · Mar 3, 2025, 8:13 AM
149 points
28 comments · 108 min read · LW link

[Question] Examples of self-fulfilling prophecies in AI alignment?

Chris Lakin · Mar 3, 2025, 2:45 AM
22 points
6 comments · 1 min read · LW link

[Question] Request for Comments on AI-related Prediction Market Ideas

PeterMcCluskey · Mar 2, 2025, 8:52 PM
17 points
1 comment · 3 min read · LW link

Statistical Challenges with Making Super IQ babies

Jan Christian Refsgaard · Mar 2, 2025, 8:26 PM
154 points
26 comments · 9 min read · LW link

Cautions about LLMs in Human Cognitive Loops

Alice Blair · Mar 2, 2025, 7:53 PM
39 points
11 comments · 7 min read · LW link

Self-fulfilling misalignment data might be poisoning our AI models

TurnTrout · Mar 2, 2025, 7:51 PM
153 points
28 comments · 1 min read · LW link
(turntrout.com)

Spencer Greenberg hiring a personal/professional/research remote assistant for 5-10 hours per week

spencerg · Mar 2, 2025, 6:01 PM
13 points
0 comments · LW link

[Question] Will LLM agents become the first takeover-capable AGIs?

Seth Herd · Mar 2, 2025, 5:15 PM
36 points
10 comments · 1 min read · LW link

Not-yet-falsifiable beliefs?

Benjamin Hendricks · Mar 2, 2025, 2:11 PM
6 points
4 comments · 1 min read · LW link

Saving Zest

jefftk · Mar 2, 2025, 12:00 PM
24 points
1 comment · 1 min read · LW link
(www.jefftk.com)