
ryan_greenblatt

Karma: 22,531

I’m the chief scientist at Redwood Research.

The inaugural Redwood Research podcast

4 Jan 2026 22:11 UTC
137 points
9 comments · 142 min read · LW link

Recent LLMs can do 2-hop and 3-hop latent (no-CoT) reasoning on natural facts

ryan_greenblatt · 1 Jan 2026 13:36 UTC
124 points
11 comments · 3 min read · LW link

Measuring no-CoT math time horizon (single forward pass)

ryan_greenblatt · 26 Dec 2025 16:37 UTC
212 points
18 comments · 3 min read · LW link

Recent LLMs can use filler tokens or problem repeats to improve (no-CoT) math performance

ryan_greenblatt · 22 Dec 2025 17:21 UTC
152 points
18 comments · 7 min read · LW link

What’s up with Anthropic predicting AGI by early 2027?

ryan_greenblatt · 3 Nov 2025 16:45 UTC
159 points
16 comments · 20 min read · LW link

Sonnet 4.5’s eval gaming seriously undermines alignment evals, and this seems caused by training on alignment evals

30 Oct 2025 15:34 UTC
144 points
21 comments · 14 min read · LW link

Is 90% of code at Anthropic being written by AIs?

ryan_greenblatt · 22 Oct 2025 14:50 UTC
92 points
14 comments · 5 min read · LW link

Reducing risk from scheming by studying trained-in scheming behavior

ryan_greenblatt · 16 Oct 2025 16:16 UTC
32 points
0 comments · 11 min read · LW link

Iterated Development and Study of Schemers (IDSS)

ryan_greenblatt · 10 Oct 2025 14:17 UTC
41 points
1 comment · 8 min read · LW link

Plans A, B, C, and D for misalignment risk

ryan_greenblatt · 8 Oct 2025 17:18 UTC
131 points
75 comments · 6 min read · LW link

Reasons to sell frontier lab equity to donate now rather than later

26 Sep 2025 23:07 UTC
245 points
34 comments · 12 min read · LW link

Notes on fatalities from AI takeover

ryan_greenblatt · 23 Sep 2025 17:18 UTC
56 points
61 comments · 8 min read · LW link

Focus transparency on risk reports, not safety cases

ryan_greenblatt · 22 Sep 2025 15:27 UTC
48 points
3 comments · 6 min read · LW link

Prospects for studying actual schemers

19 Sep 2025 14:11 UTC
40 points
2 comments · 58 min read · LW link

AIs will greatly change engineering in AI companies well before AGI

ryan_greenblatt · 9 Sep 2025 16:58 UTC
52 points
9 comments · 11 min read · LW link

Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro

ryan_greenblatt · 3 Sep 2025 13:21 UTC
155 points
32 comments · 8 min read · LW link

Attaching requirements to model releases has serious downsides (relative to a different deadline for these requirements)

ryan_greenblatt · 27 Aug 2025 17:04 UTC
99 points
2 comments · 3 min read · LW link

My AGI timeline updates from GPT-5 (and 2025 so far)

ryan_greenblatt · 20 Aug 2025 16:11 UTC
169 points
14 comments · 4 min read · LW link

Recent Redwood Research project proposals

14 Jul 2025 22:27 UTC
91 points
0 comments · 3 min read · LW link

Jankily controlling superintelligence

ryan_greenblatt · 27 Jun 2025 14:05 UTC
70 points
4 comments · 7 min read · LW link