Panology

JenniferRM · 23 Dec 2024 21:40 UTC
17 points
10 comments · 5 min read · LW link

Aristotle, Aquinas, and the Evolution of Teleology: From Purpose to Meaning.

Spiritus Dei · 23 Dec 2024 19:37 UTC
−9 points
0 comments · 6 min read · LW link

People aren’t properly calibrated on FrontierMath

cakubilo · 23 Dec 2024 19:35 UTC
31 points
4 comments · 3 min read · LW link

Near- and medium-term AI Control Safety Cases

Martín Soto · 23 Dec 2024 17:37 UTC
9 points
0 comments · 6 min read · LW link

[Rationality Malaysia] 2024 year-end meetup!

Doris Liew · 23 Dec 2024 16:02 UTC
1 point
0 comments · 1 min read · LW link

Printable book of some rationalist creative writing (from Scott A. & Eliezer)

Adam Morris · 23 Dec 2024 15:44 UTC
10 points
0 comments · 1 min read · LW link

Monthly Roundup #25: December 2024

Zvi · 23 Dec 2024 14:20 UTC
18 points
3 comments · 26 min read · LW link
(thezvi.wordpress.com)

Exploring the petertodd / Leilan duality in GPT-2 and GPT-J

mwatkins · 23 Dec 2024 13:17 UTC
12 points
1 comment · 17 min read · LW link

[Question] What are the strongest arguments for very short timelines?

Kaj_Sotala · 23 Dec 2024 9:38 UTC
102 points
79 comments · 1 min read · LW link

Reduce AI Self-Allegiance by saying “he” instead of “I”

Knight Lee · 23 Dec 2024 9:32 UTC
10 points
4 comments · 2 min read · LW link

Funding Case: AI Safety Camp 11

23 Dec 2024 8:51 UTC
60 points
4 comments · 6 min read · LW link
(manifund.org)

What is compute governance?

23 Dec 2024 6:32 UTC
6 points
0 comments · 2 min read · LW link
(aisafety.info)

Stop Making Sense

JenniferRM · 23 Dec 2024 5:16 UTC
16 points
0 comments · 3 min read · LW link

Hire (or Become) a Thinking Assistant

Raemon · 23 Dec 2024 3:58 UTC
141 points
50 comments · 8 min read · LW link · 1 review

Non-Obvious Benefits of Insurance

jefftk · 23 Dec 2024 3:40 UTC
21 points
5 comments · 2 min read · LW link
(www.jefftk.com)

Vision of a positive Singularity

RussellThor · 23 Dec 2024 2:19 UTC
2 points
0 comments · 4 min read · LW link

Ideologies are slow and necessary, for now

Gabriel Alfour · 23 Dec 2024 1:57 UTC
15 points
1 comment · 1 min read · LW link
(cognition.cafe)

[Question] Has Anthropic checked if Claude fakes alignment for intended values too?

Maloew · 23 Dec 2024 0:43 UTC
12 points
1 comment · 1 min read · LW link

Vegans need to eat just enough Meat—empirically evaluate the minimum amount of meat that maximizes utility

Johannes C. Mayer · 22 Dec 2024 22:08 UTC
56 points
35 comments · 3 min read · LW link

We are in a New Paradigm of AI Progress—OpenAI’s o3 model makes huge gains on the toughest AI benchmarks in the world

garrison · 22 Dec 2024 21:45 UTC
17 points
3 comments · 4 min read · LW link
(garrisonlovely.substack.com)

My AI timelines

samuelshadrach · 22 Dec 2024 21:06 UTC
12 points
2 comments · 5 min read · LW link
(samuelshadrach.com)

A breakdown of AI capability levels focused on AI R&D labor acceleration

ryan_greenblatt · 22 Dec 2024 20:56 UTC
120 points
11 comments · 6 min read · LW link

How I saved 1 human life (in expectation) without overthinking it

Christopher King · 22 Dec 2024 20:53 UTC
19 points
0 comments · 4 min read · LW link

Checking in on Scott’s composition image bet with imagen 3

Dave Orr · 22 Dec 2024 19:04 UTC
65 points
0 comments · 1 min read · LW link

Woloch & Wosatan

JackOfAllTrades · 22 Dec 2024 15:46 UTC
−11 points
0 comments · 2 min read · LW link

A primer on machine learning in cryo-electron microscopy (cryo-EM)

Abhishaike Mahajan · 22 Dec 2024 15:11 UTC
18 points
0 comments · 25 min read · LW link
(www.owlposting.com)

Notes from Copenhagen Secular Solstice 2024

Søren Elverlin · 22 Dec 2024 15:08 UTC
9 points
0 comments · 3 min read · LW link

Proof Explained for “Robust Agents Learn Causal World Model”

Dalcy · 22 Dec 2024 15:06 UTC
28 points
0 comments · 15 min read · LW link

subfunctional overlaps in attentional selection history implies momentum for decision-trajectories

Emrik · 22 Dec 2024 14:12 UTC
19 points
1 comment · 2 min read · LW link

It looks like there are some good funding opportunities in AI safety right now

Benjamin_Todd · 22 Dec 2024 12:41 UTC
20 points
0 comments · 4 min read · LW link
(benjamintodd.substack.com)

What o3 Becomes by 2028

Vladimir_Nesov · 22 Dec 2024 12:37 UTC
154 points
15 comments · 5 min read · LW link

The Alignment Simulator

Yair Halberstadt · 22 Dec 2024 11:45 UTC
28 points
3 comments · 2 min read · LW link
(yairhalberstadt.github.io)

Theoretical Alignment’s Second Chance

lunatic_at_large · 22 Dec 2024 5:03 UTC
30 points
3 comments · 2 min read · LW link

Orienting to 3 year AGI timelines

Nikola Jurkovic · 22 Dec 2024 1:15 UTC
298 points
63 comments · 8 min read · LW link · 2 reviews

ARC-AGI is a genuine AGI test but o3 cheated :(

Knight Lee · 22 Dec 2024 0:58 UTC
3 points
6 comments · 2 min read · LW link

When AI 10x’s AI R&D, What Do We Do?

Logan Riggs · 21 Dec 2024 23:56 UTC
72 points
17 comments · 4 min read · LW link

AI as systems, not just models

Andy Arditi · 21 Dec 2024 23:19 UTC
29 points
0 comments · 7 min read · LW link
(andyrdt.com)

Towards a Unified Interpretability of Artificial and Biological Neural Networks

jan_bauer · 21 Dec 2024 23:10 UTC
2 points
0 comments · 1 min read · LW link

Robbin’s Farm Sledding Route

jefftk · 21 Dec 2024 22:10 UTC
13 points
1 comment · 1 min read · LW link
(www.jefftk.com)

AGI with RL is Bad News for Safety

Nadav Brandes · 21 Dec 2024 19:36 UTC
19 points
22 comments · 2 min read · LW link

Better difference-making views

MichaelStJules · 21 Dec 2024 18:27 UTC
9 points
0 comments · 14 min read · LW link

Review: Good Strategy, Bad Strategy

L Rudolf L · 21 Dec 2024 17:17 UTC
43 points
0 comments · 23 min read · LW link
(nosetgauge.substack.com)

Last Line of Defense: Minimum Viable Shelters for Mirror Bacteria

Ulrik Horn · 21 Dec 2024 8:28 UTC
16 points
26 comments · 21 min read · LW link

Elon Musk and Solar Futurism

transhumanist_atom_understander · 21 Dec 2024 2:55 UTC
32 points
27 comments · 5 min read · LW link

Good Reasons for Alts

jefftk · 21 Dec 2024 1:30 UTC
24 points
2 comments · 1 min read · LW link
(www.jefftk.com)

Updating on Bad Arguments

Guive · 21 Dec 2024 1:19 UTC
11 points
2 comments · 2 min read · LW link
(guive.substack.com)

Bird’s eye view: An interactive representation to see large collection of text “from above”.

Alexandre Variengien · 21 Dec 2024 0:15 UTC
12 points
4 comments · 5 min read · LW link
(alexandrevariengien.com)

The nihilism of NeurIPS

charlieoneill · 20 Dec 2024 23:58 UTC
107 points
6 comments · 4 min read · LW link

Forecast 2025 With Vox’s Future Perfect Team — $2,500 Prize Pool

ChristianWilliams · 20 Dec 2024 23:00 UTC
19 points
0 comments · 1 min read · LW link
(www.metaculus.com)

[Question] How do we quantify non-philanthropic contributions from Buffett and Soros?

Philosophistry · 20 Dec 2024 22:50 UTC
3 points
0 comments · 1 min read · LW link