Panology

JenniferRM · 23 Dec 2024 21:40 UTC
17 points
9 comments · 5 min read · LW link

Eulogy to the Obits

21 Apr 2025 14:10 UTC
2 points
1 comment · 10 min read · LW link

A short introduction to machine learning

Richard_Ngo · 30 Aug 2021 14:31 UTC
95 points
8 comments · 8 min read · LW link

Evaluating “What 2026 Looks Like” So Far

Jonny Spicer · 24 Feb 2025 18:55 UTC
77 points
4 comments · 7 min read · LW link

Is Gemini now better than Claude at Pokémon?

Julian Bradshaw · 19 Apr 2025 23:34 UTC
81 points
11 comments · 5 min read · LW link

Why Should I Assume CCP AGI is Worse Than USG AGI?

Tomás B. · 19 Apr 2025 14:47 UTC
180 points
59 comments · 1 min read · LW link

Training AGI in Secret would be Unsafe and Unethical

Daniel Kokotajlo · 18 Apr 2025 12:27 UTC
130 points
11 comments · 6 min read · LW link

Implications for the likelihood of human extinction from the recent discovery of possible microbial life

Mvolz · 21 Apr 2025 19:15 UTC
0 points
2 comments · 1 min read · LW link

Research Notes: Running Claude 3.7, Gemini 2.5 Pro, and o3 on Pokémon Red

Julian Bradshaw · 21 Apr 2025 3:52 UTC
88 points
10 comments · 14 min read · LW link

Downstream applications as validation of interpretability progress

Sam Marks · 31 Mar 2025 1:35 UTC
111 points
2 comments · 7 min read · LW link

The Uses of Complacency

sarahconstantin · 21 Apr 2025 18:50 UTC
50 points
3 comments · 8 min read · LW link
(sarahconstantin.substack.com)

[Question] To what ethics is an AGI actually safely alignable?

StanislavKrym · 20 Apr 2025 17:09 UTC
1 point
6 comments · 4 min read · LW link

$500 Bounty Problem: Are (Approximately) Deterministic Natural Latents All You Need?

21 Apr 2025 20:19 UTC
67 points
1 comment · 3 min read · LW link

AI 2027 is a Bet Against Amdahl’s Law

snewman · 21 Apr 2025 3:09 UTC
101 points
26 comments · 9 min read · LW link

SIA > SSA, part 1: Learning from the fact that you exist

Joe Carlsmith · 1 Oct 2021 5:43 UTC
46 points
17 comments · 16 min read · LW link

Why Have Sentence Lengths Decreased?

Arjun Panickssery · 3 Apr 2025 17:50 UTC
240 points
62 comments · 4 min read · LW link

Existing Safety Frameworks Imply Unreasonable Confidence

10 Apr 2025 16:31 UTC
37 points
2 comments · 15 min read · LW link
(intelligence.org)

Alignment Faking Revisited: Improved Classifiers and Open Source Extensions

8 Apr 2025 17:32 UTC
144 points
19 comments · 12 min read · LW link

Crime and Punishment #1

Zvi · 21 Apr 2025 15:30 UTC
37 points
4 comments · 39 min read · LW link
(thezvi.wordpress.com)

AI 2027: What Superintelligence Looks Like

3 Apr 2025 16:23 UTC
615 points
194 comments · 41 min read · LW link
(ai-2027.com)