Vegans need to eat just enough Meat—empirically evaluate the minimum amount of meat that maximizes utility

Johannes C. Mayer · Dec 22, 2024, 10:08 PM
55 points
35 comments · 3 min read · LW link

We are in a New Paradigm of AI Progress—OpenAI’s o3 model makes huge gains on the toughest AI benchmarks in the world

garrison · Dec 22, 2024, 9:45 PM
17 points
3 comments · LW link
(garrisonlovely.substack.com)

My AI timelines

samuelshadrach · Dec 22, 2024, 9:06 PM
12 points
2 comments · 5 min read · LW link
(samuelshadrach.com)

A breakdown of AI capability levels focused on AI R&D labor acceleration

ryan_greenblatt · Dec 22, 2024, 8:56 PM
104 points
6 comments · 6 min read · LW link

How I saved 1 human life (in expectation) without overthinking it

Christopher King · Dec 22, 2024, 8:53 PM
19 points
0 comments · 4 min read · LW link

Towards mutually assured cooperation

mikko · Dec 22, 2024, 8:46 PM
5 points
0 comments · 1 min read · LW link

Checking in on Scott’s composition image bet with imagen 3

Dave Orr · Dec 22, 2024, 7:04 PM
65 points
0 comments · 1 min read · LW link

Woloch & Wosatan

JackOfAllTrades · Dec 22, 2024, 3:46 PM
−11 points
0 comments · 2 min read · LW link

A primer on machine learning in cryo-electron microscopy (cryo-EM)

Abhishaike Mahajan · Dec 22, 2024, 3:11 PM
18 points
0 comments · 25 min read · LW link
(www.owlposting.com)

Notes from Copenhagen Secular Solstice 2024

Søren Elverlin · Dec 22, 2024, 3:08 PM
9 points
0 comments · 3 min read · LW link

Proof Explained for “Robust Agents Learn Causal World Model”

Dalcy · Dec 22, 2024, 3:06 PM
25 points
0 comments · 15 min read · LW link

subfunctional overlaps in attentional selection history implies momentum for decision-trajectories

Emrik · Dec 22, 2024, 2:12 PM
19 points
1 comment · 2 min read · LW link

It looks like there are some good funding opportunities in AI safety right now

Benjamin_Todd · Dec 22, 2024, 12:41 PM
20 points
0 comments · 4 min read · LW link
(benjamintodd.substack.com)

What o3 Becomes by 2028

Vladimir_Nesov · Dec 22, 2024, 12:37 PM
147 points
15 comments · 5 min read · LW link

The Alignment Simulator

Yair Halberstadt · Dec 22, 2024, 11:45 AM
28 points
3 comments · 2 min read · LW link
(yairhalberstadt.github.io)

Theoretical Alignment’s Second Chance

lunatic_at_large · Dec 22, 2024, 5:03 AM
27 points
3 comments · 2 min read · LW link

Orienting to 3 year AGI timelines

Nikola Jurkovic · Dec 22, 2024, 1:15 AM
282 points
51 comments · 8 min read · LW link

ARC-AGI is a genuine AGI test but o3 cheated :(

Knight Lee · Dec 22, 2024, 12:58 AM
3 points
6 comments · 2 min read · LW link

When AI 10x’s AI R&D, What Do We Do?

Logan Riggs · Dec 21, 2024, 11:56 PM
72 points
16 comments · 4 min read · LW link

AI as systems, not just models

Andy Arditi · Dec 21, 2024, 11:19 PM
28 points
0 comments · 7 min read · LW link
(andyrdt.com)

Towards a Unified Interpretability of Artificial and Biological Neural Networks

jan_bauer · Dec 21, 2024, 11:10 PM
2 points
0 comments · 1 min read · LW link

Robbin’s Farm Sledding Route

jefftk · Dec 21, 2024, 10:10 PM
13 points
1 comment · 1 min read · LW link
(www.jefftk.com)

AGI with RL is Bad News for Safety

Nadav Brandes · Dec 21, 2024, 7:36 PM
19 points
22 comments · 2 min read · LW link

Better difference-making views

MichaelStJules · Dec 21, 2024, 6:27 PM
7 points
0 comments · LW link

Review: Good Strategy, Bad Strategy

L Rudolf L · Dec 21, 2024, 5:17 PM
43 points
0 comments · 23 min read · LW link
(nosetgauge.substack.com)

Last Line of Defense: Minimum Viable Shelters for Mirror Bacteria

Ulrik Horn · Dec 21, 2024, 8:28 AM
12 points
26 comments · 21 min read · LW link

Elon Musk and Solar Futurism

transhumanist_atom_understander · Dec 21, 2024, 2:55 AM
32 points
27 comments · 5 min read · LW link

Good Reasons for Alts

jefftk · Dec 21, 2024, 1:30 AM
24 points
2 comments · 1 min read · LW link
(www.jefftk.com)

Updating on Bad Arguments

Guive · Dec 21, 2024, 1:19 AM
11 points
2 comments · 2 min read · LW link
(guive.substack.com)

Bird’s eye view: An interactive representation to see large collection of text “from above”.

Alexandre Variengien · Dec 21, 2024, 12:15 AM
10 points
4 comments · 5 min read · LW link
(alexandrevariengien.com)

The nihilism of NeurIPS

charlieoneill · Dec 20, 2024, 11:58 PM
107 points
6 comments · 4 min read · LW link

Forecast 2025 With Vox’s Future Perfect Team — $2,500 Prize Pool

ChristianWilliams · Dec 20, 2024, 11:00 PM
19 points
0 comments · LW link
(www.metaculus.com)

[Question] How do we quantify non-philanthropic contributions from Buffett and Soros?

Philosophistry · Dec 20, 2024, 10:50 PM
3 points
0 comments · 1 min read · LW link

Anthropic leadership conversation

Zach Stein-Perlman · Dec 20, 2024, 10:00 PM
67 points
17 comments · 6 min read · LW link
(www.youtube.com)

As We May Align

Gilbert C · Dec 20, 2024, 7:02 PM
−1 points
0 comments · 6 min read · LW link

o3 is not being released to the public. First they are only giving access to external safety testers. You can apply to get early access to do safety testing

KatWoods · Dec 20, 2024, 6:30 PM
16 points
0 comments · 1 min read · LW link
(openai.com)

o3

Zach Stein-Perlman · Dec 20, 2024, 6:30 PM
154 points
164 comments · 1 min read · LW link

What Goes Without Saying

sarahconstantin · Dec 20, 2024, 6:00 PM
334 points
28 comments · 5 min read · LW link
(sarahconstantin.substack.com)

Retrospective: PIBBSS Fellowship 2024

Dec 20, 2024, 3:55 PM
64 points
1 comment · 4 min read · LW link

Compositionality and Ambiguity: Latent Co-occurrence and Interpretable Subspaces

Dec 20, 2024, 3:16 PM
32 points
0 comments · 37 min read · LW link

🇫🇷 Announcing CeSIA: The French Center for AI Safety

Charbel-Raphaël · Dec 20, 2024, 2:17 PM
90 points
2 comments · 8 min read · LW link

Moderately Skeptical of “Risks of Mirror Biology”

Davidmanheim · Dec 20, 2024, 12:57 PM
31 points
3 comments · 9 min read · LW link
(substack.com)

Doing Sport Reliably via Dancing

Johannes C. Mayer · Dec 20, 2024, 12:06 PM
16 points
0 comments · 2 min read · LW link

You can validly be seen and validated by a chatbot

Kaj_Sotala · Dec 20, 2024, 12:00 PM
30 points
3 comments · 8 min read · LW link
(kajsotala.fi)

What I expected from this site: A LessWrong review

Nathan Young · Dec 20, 2024, 11:27 AM
31 points
5 comments · 3 min read · LW link
(nathanpmyoung.substack.com)

Algophobes and Algoverses: The New Enemies of Progress

Wenitte Apiou · Dec 20, 2024, 10:01 AM
−24 points
0 comments · 2 min read · LW link

“Alignment Faking” frame is somewhat fake

Jan_Kulveit · Dec 20, 2024, 9:51 AM
153 points
13 comments · 6 min read · LW link

No Internally-Crispy Mac and Cheese

jefftk · Dec 20, 2024, 3:20 AM
12 points
5 comments · 1 min read · LW link
(www.jefftk.com)

Apply to be a TA for TARA

yanni kyriacos · Dec 20, 2024, 2:25 AM
10 points
0 comments · 1 min read · LW link

Announcing the Q1 2025 Long-Term Future Fund grant round

Dec 20, 2024, 2:20 AM
36 points
2 comments · 2 min read · LW link
(forum.effectivealtruism.org)