Improved visualizations of METR Time Horizons paper.

LDJ · 19 Mar 2025 23:36 UTC
30 points
4 comments · 2 min read

The case against “The case against AI alignment”

KvmanThinking · 19 Mar 2025 22:40 UTC
1 point
0 comments · 1 min read

[Question] Superintelligence Strategy: A Pragmatic Path to… Doom?

Mr Beastly · 19 Mar 2025 22:30 UTC
8 points
0 comments · 3 min read

SHIFT relies on token-level features to de-bias Bias in Bios probes

Tim Hua · 19 Mar 2025 21:29 UTC
39 points
2 comments · 6 min read

Janet must die

Shmi · 19 Mar 2025 20:35 UTC
12 points
3 comments · 2 min read

[Question] Why am I getting downvoted on Lesswrong?

Oxidize · 19 Mar 2025 18:32 UTC
7 points
14 comments · 1 min read

Forecasting AI Futures Resource Hub

Alvin Ånestrand · 19 Mar 2025 17:26 UTC
2 points
0 comments · 2 min read
(forecastingaifutures.substack.com)

TBC episode w Dave Kasten from Control AI on AI Policy

Eneasz · 19 Mar 2025 17:09 UTC
14 points
0 comments · 1 min read
(www.thebayesianconspiracy.com)

Prioritizing threats for AI control

ryan_greenblatt · 19 Mar 2025 17:09 UTC
59 points
2 comments · 10 min read

The Illusion of Transparency as a Trust-Building Mechanism

Priyanka Bharadwaj · 19 Mar 2025 17:09 UTC
2 points
0 comments · 1 min read

How Do We Govern AI Well?

kaime · 19 Mar 2025 17:08 UTC
2 points
0 comments · 25 min read

METR: Measuring AI Ability to Complete Long Tasks

Zach Stein-Perlman · 19 Mar 2025 16:00 UTC
242 points
106 comments · 5 min read
(metr.org)

Why I think AI will go poorly for humanity

Alek Westover · 19 Mar 2025 15:52 UTC
14 points
0 comments · 30 min read

The principle of genomic liberty

TsviBT · 19 Mar 2025 14:27 UTC
76 points
51 comments · 17 min read

Going Nova

Zvi · 19 Mar 2025 13:30 UTC
69 points
27 comments · 15 min read
(thezvi.wordpress.com)

Equations Mean Things

abstractapplic · 19 Mar 2025 8:16 UTC
56 points
10 comments · 3 min read

Elite Coordination via the Consensus of Power

Richard_Ngo · 19 Mar 2025 6:56 UTC
92 points
15 comments · 12 min read
(www.mindthefuture.info)

What I am working on right now and why: representation engineering edition

Lukasz G Bartoszcze · 18 Mar 2025 22:37 UTC
3 points
0 comments · 3 min read

Boots theory and Sybil Ramkin

philh · 18 Mar 2025 22:10 UTC
37 points
18 comments · 11 min read
(reasonableapproximation.net)

Schmidt Sciences Technical AI Safety RFP on Inference-Time Compute – Deadline: April 30

Ryan Gajarawala · 18 Mar 2025 18:05 UTC
18 points
0 comments · 2 min read
(www.schmidtsciences.org)

PRISM: Perspective Reasoning for Integrated Synthesis and Mediation (Interactive Demo)

Anthony Diamond · 18 Mar 2025 18:03 UTC
10 points
2 comments · 1 min read

Subspace Rerouting: Using Mechanistic Interpretability to Craft Adversarial Attacks against Large Language Models

Le magicien quantique · 18 Mar 2025 17:55 UTC
6 points
1 comment · 10 min read

Progress links and short notes, 2025-03-18

jasoncrawford · 18 Mar 2025 17:14 UTC
8 points
0 comments · 3 min read
(newsletter.rootsofprogress.org)

The Convergent Path to the Stars

Maxime Riché · 18 Mar 2025 17:09 UTC
6 points
0 comments · 20 min read

Sapir-Whorf Ego Death

Jonathan Moregård · 18 Mar 2025 16:57 UTC
8 points
7 comments · 2 min read
(honestliving.substack.com)

Smelling Nice is Good, Actually

Gordon Seidoh Worley · 18 Mar 2025 16:54 UTC
29 points
8 comments · 3 min read
(uncertainupdates.substack.com)

A Taxonomy of Jobs Deeply Resistant to TAI Automation

Deric Cheng · 18 Mar 2025 16:25 UTC
9 points
0 comments · 12 min read
(www.convergenceanalysis.org)

Why Are The Human Sciences Hard? Two New Hypotheses

18 Mar 2025 15:45 UTC
39 points
14 comments · 9 min read

Go home GPT-4o, you’re drunk: emergent misalignment as lowered inhibitions

18 Mar 2025 14:48 UTC
81 points
12 comments · 5 min read

[Question] What is the theory of change behind writing papers about AI safety?

Kajus · 18 Mar 2025 12:51 UTC
7 points
1 comment · 1 min read

OpenAI #11: America Action Plan

Zvi · 18 Mar 2025 12:50 UTC
83 points
3 comments · 6 min read
(thezvi.wordpress.com)

I changed my mind about orca intelligence

Towards_Keeperhood · 18 Mar 2025 10:15 UTC
54 points
24 comments · 5 min read

[Question] Is Peano arithmetic trying to kill us? Do we care?

Q Home · 18 Mar 2025 8:22 UTC
17 points
2 comments · 2 min read

Do What the Mammals Do

CrimsonChin · 18 Mar 2025 3:57 UTC
2 points
6 comments · 4 min read

What Actually Matters Until We Reach the Singularity

Lexius · 18 Mar 2025 2:17 UTC
−1 points
0 comments · 9 min read

Meaning as a cognitive substitute for survival instincts: A thought experiment

Ovidijus Šimkus · 18 Mar 2025 1:53 UTC
0 points
0 comments · 2 min read

Against Yudkowsky’s evolution analogy for AI x-risk [unfinished]

Fiora Sunshine · 18 Mar 2025 1:41 UTC
52 points
18 comments · 11 min read

An “AI researcher” has written a paper on optimizing AI architecture and optimized a language model to several orders of magnitude more efficiency.

Y B · 18 Mar 2025 1:15 UTC
3 points
1 comment · 1 min read

LessOnline 2025: Early Bird Tickets On Sale

Ben Pace · 18 Mar 2025 0:22 UTC
37 points
5 comments · 5 min read

Feedback loops for exercise (VO2Max)

Elizabeth · 18 Mar 2025 0:10 UTC
65 points
13 comments · 8 min read
(acesounderglass.com)

FrontierMath Score of o3-mini Much Lower Than Claimed

YafahEdelman · 17 Mar 2025 22:41 UTC
61 points
7 comments · 1 min read

Proof-of-Concept Debugger for a Small LLM

17 Mar 2025 22:27 UTC
27 points
0 comments · 11 min read

Effectively Communicating with DC Policymakers

PolicyTakes · 17 Mar 2025 22:11 UTC
14 points
0 comments · 2 min read

EIS XV: A New Proof of Concept for Useful Interpretability

scasper · 17 Mar 2025 20:05 UTC
30 points
2 comments · 3 min read

Sentinel’s Global Risks Weekly Roundup #11/2025. Trump invokes Alien Enemies Act, Chinese invasion barges deployed in exercise.

NunoSempere · 17 Mar 2025 19:34 UTC
59 points
3 comments · 6 min read
(blog.sentinel-team.org)

Claude Sonnet 3.7 (often) knows when it’s in alignment evaluations

17 Mar 2025 19:11 UTC
189 points
9 comments · 6 min read

Three Types of Intelligence Explosion

17 Mar 2025 14:47 UTC
40 points
8 comments · 3 min read
(www.forethought.org)

An Advent of Thought

Kaarel · 17 Mar 2025 14:21 UTC
57 points
13 comments · 48 min read

Interested in working from a new Boston AI Safety Hub?

17 Mar 2025 13:42 UTC
17 points
0 comments · 2 min read

Other Civilizations Would Recover 84+% of Our Cosmic Resources—A Challenge to Extinction Risk Prioritization

Maxime Riché · 17 Mar 2025 13:12 UTC
5 points
0 comments · 12 min read