ryan_greenblatt (Karma: 22,531)
I'm the chief scientist at Redwood Research.
The inaugural Redwood Research podcast
Buck and ryan_greenblatt · 4 Jan 2026 22:11 UTC · 137 points · 9 comments · 142 min read
Recent LLMs can do 2-hop and 3-hop latent (no-CoT) reasoning on natural facts
ryan_greenblatt · 1 Jan 2026 13:36 UTC · 124 points · 11 comments · 3 min read
Measuring no-CoT math time horizon (single forward pass)
ryan_greenblatt · 26 Dec 2025 16:37 UTC · 212 points · 18 comments · 3 min read
Recent LLMs can use filler tokens or problem repeats to improve (no-CoT) math performance
ryan_greenblatt · 22 Dec 2025 17:21 UTC · 152 points · 18 comments · 7 min read
What's up with Anthropic predicting AGI by early 2027?
ryan_greenblatt · 3 Nov 2025 16:45 UTC · 159 points · 16 comments · 20 min read
Sonnet 4.5's eval gaming seriously undermines alignment evals, and this seems caused by training on alignment evals
Alexa Pan and ryan_greenblatt · 30 Oct 2025 15:34 UTC · 144 points · 21 comments · 14 min read
Is 90% of code at Anthropic being written by AIs?
ryan_greenblatt · 22 Oct 2025 14:50 UTC · 92 points · 14 comments · 5 min read
Reducing risk from scheming by studying trained-in scheming behavior
ryan_greenblatt · 16 Oct 2025 16:16 UTC · 32 points · 0 comments · 11 min read
Iterated Development and Study of Schemers (IDSS)
ryan_greenblatt · 10 Oct 2025 14:17 UTC · 41 points · 1 comment · 8 min read
Plans A, B, C, and D for misalignment risk
ryan_greenblatt · 8 Oct 2025 17:18 UTC · 131 points · 75 comments · 6 min read
Reasons to sell frontier lab equity to donate now rather than later
Daniel_Eth, Ethan Perez and ryan_greenblatt · 26 Sep 2025 23:07 UTC · 245 points · 34 comments · 12 min read
Notes on fatalities from AI takeover
ryan_greenblatt · 23 Sep 2025 17:18 UTC · 56 points · 61 comments · 8 min read
Focus transparency on risk reports, not safety cases
ryan_greenblatt · 22 Sep 2025 15:27 UTC · 48 points · 3 comments · 6 min read
Prospects for studying actual schemers
ryan_greenblatt and Julian Stastny · 19 Sep 2025 14:11 UTC · 40 points · 2 comments · 58 min read
AIs will greatly change engineering in AI companies well before AGI
ryan_greenblatt · 9 Sep 2025 16:58 UTC · 52 points · 9 comments · 11 min read
Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro
ryan_greenblatt · 3 Sep 2025 13:21 UTC · 155 points · 32 comments · 8 min read
Attaching requirements to model releases has serious downsides (relative to a different deadline for these requirements)
ryan_greenblatt · 27 Aug 2025 17:04 UTC · 99 points · 2 comments · 3 min read
My AGI timeline updates from GPT-5 (and 2025 so far)
ryan_greenblatt · 20 Aug 2025 16:11 UTC · 169 points · 14 comments · 4 min read
Recent Redwood Research project proposals
ryan_greenblatt, Buck, Julian Stastny, joshc, Alex Mallen, Adam Kaufman, Tyler Tracy, Aryan Bhatt and Joey Yudelson · 14 Jul 2025 22:27 UTC · 91 points · 0 comments · 3 min read
Jankily controlling superintelligence
ryan_greenblatt · 27 Jun 2025 14:05 UTC · 70 points · 4 comments · 7 min read