ryan_greenblatt (Karma: 23,275)
I’m the chief scientist at Redwood Research.
How do we (more) safely defer to AIs?
ryan_greenblatt and Julian Stastny · 12 Feb 2026 16:55 UTC · 81 points · 5 comments · 72 min read · LW link
Distinguish between inference scaling and “larger tasks use more compute”
ryan_greenblatt · 11 Feb 2026 18:37 UTC · 87 points · 5 comments · 2 min read · LW link
The inaugural Redwood Research podcast
Buck and ryan_greenblatt · 4 Jan 2026 22:11 UTC · 140 points · 10 comments · 142 min read · LW link
Recent LLMs can do 2-hop and 3-hop latent (no-CoT) reasoning on natural facts
ryan_greenblatt · 1 Jan 2026 13:36 UTC · 127 points · 11 comments · 3 min read · LW link
Measuring no CoT math time horizon (single forward pass)
ryan_greenblatt · 26 Dec 2025 16:37 UTC · 213 points · 18 comments · 3 min read · LW link
Recent LLMs can use filler tokens or problem repeats to improve (no-CoT) math performance
ryan_greenblatt · 22 Dec 2025 17:21 UTC · 153 points · 19 comments · 7 min read · LW link
What’s up with Anthropic predicting AGI by early 2027?
ryan_greenblatt · 3 Nov 2025 16:45 UTC · 160 points · 16 comments · 20 min read · LW link
Sonnet 4.5’s eval gaming seriously undermines alignment evals, and this seems caused by training on alignment evals
Alexa Pan and ryan_greenblatt · 30 Oct 2025 15:34 UTC · 144 points · 22 comments · 14 min read · LW link
Is 90% of code at Anthropic being written by AIs?
ryan_greenblatt · 22 Oct 2025 14:50 UTC · 92 points · 15 comments · 5 min read · LW link
Reducing risk from scheming by studying trained-in scheming behavior
ryan_greenblatt · 16 Oct 2025 16:16 UTC · 32 points · 0 comments · 11 min read · LW link
Iterated Development and Study of Schemers (IDSS)
ryan_greenblatt · 10 Oct 2025 14:17 UTC · 41 points · 1 comment · 8 min read · LW link
Plans A, B, C, and D for misalignment risk
ryan_greenblatt · 8 Oct 2025 17:18 UTC · 137 points · 77 comments · 6 min read · LW link
Reasons to sell frontier lab equity to donate now rather than later
Daniel_Eth, Ethan Perez and ryan_greenblatt · 26 Sep 2025 23:07 UTC · 246 points · 34 comments · 12 min read · LW link
Notes on fatalities from AI takeover
ryan_greenblatt · 23 Sep 2025 17:18 UTC · 56 points · 61 comments · 8 min read · LW link
Focus transparency on risk reports, not safety cases
ryan_greenblatt · 22 Sep 2025 15:27 UTC · 48 points · 3 comments · 6 min read · LW link
Prospects for studying actual schemers
ryan_greenblatt and Julian Stastny · 19 Sep 2025 14:11 UTC · 40 points · 2 comments · 58 min read · LW link
AIs will greatly change engineering in AI companies well before AGI
ryan_greenblatt · 9 Sep 2025 16:58 UTC · 52 points · 9 comments · 11 min read · LW link
Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro
ryan_greenblatt · 3 Sep 2025 13:21 UTC · 156 points · 32 comments · 8 min read · LW link
Attaching requirements to model releases has serious downsides (relative to a different deadline for these requirements)
ryan_greenblatt · 27 Aug 2025 17:04 UTC · 99 points · 2 comments · 3 min read · LW link
My AGI timeline updates from GPT-5 (and 2025 so far)
ryan_greenblatt · 20 Aug 2025 16:11 UTC · 169 points · 14 comments · 4 min read · LW link