ryan_greenblatt
Karma: 19,553
I’m the chief scientist at Redwood Research.
Posts
AIs will greatly change engineering in AI companies well before AGI
ryan_greenblatt · 9 Sep 2025 16:58 UTC · 45 points · 9 comments · 11 min read · LW link
Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro
ryan_greenblatt · 3 Sep 2025 13:21 UTC · 151 points · 30 comments · 8 min read · LW link
Attaching requirements to model releases has serious downsides (relative to a different deadline for these requirements)
ryan_greenblatt · 27 Aug 2025 17:04 UTC · 98 points · 2 comments · 3 min read · LW link
My AGI timeline updates from GPT-5 (and 2025 so far)
ryan_greenblatt · 20 Aug 2025 16:11 UTC · 162 points · 14 comments · 4 min read · LW link
Recent Redwood Research project proposals
ryan_greenblatt, Buck, Julian Stastny, joshc, Alex Mallen, Adam Kaufman, Tyler Tracy, Aryan Bhatt, and Joey Yudelson · 14 Jul 2025 22:27 UTC · 91 points · 0 comments · 3 min read · LW link
Jankily controlling superintelligence
ryan_greenblatt · 27 Jun 2025 14:05 UTC · 69 points · 4 comments · 7 min read · LW link
What does 10x-ing effective compute get you?
ryan_greenblatt · 24 Jun 2025 18:33 UTC · 55 points · 10 comments · 12 min read · LW link
Prefix cache untrusted monitors: a method to apply after you catch your AI
ryan_greenblatt · 20 Jun 2025 15:56 UTC · 32 points · 1 comment · 7 min read · LW link
AI safety techniques leveraging distillation
ryan_greenblatt · 19 Jun 2025 14:31 UTC · 61 points · 0 comments · 12 min read · LW link
When does training a model change its goals?
Vivek Hebbar and ryan_greenblatt · 12 Jun 2025 18:43 UTC · 71 points · 2 comments · 15 min read · LW link
OpenAI now has an RL API which is broadly accessible
ryan_greenblatt · 11 Jun 2025 23:39 UTC · 43 points · 1 comment · 5 min read · LW link
When is it important that open-weight models aren’t released? My thoughts on the benefits and dangers of open-weight models in response to developments in CBRN capabilities.
ryan_greenblatt · 9 Jun 2025 19:19 UTC · 63 points · 11 comments · 9 min read · LW link
The best approaches for mitigating “the intelligence curse” (or gradual disempowerment); my quick guesses at the best object-level interventions
ryan_greenblatt · 31 May 2025 18:20 UTC · 71 points · 19 comments · 5 min read · LW link
AIs at the current capability level may be important for future safety work
ryan_greenblatt · 12 May 2025 14:06 UTC · 82 points · 2 comments · 4 min read · LW link
Slow corporations as an intuition pump for AI R&D automation
ryan_greenblatt and elifland · 9 May 2025 14:49 UTC · 91 points · 23 comments · 9 min read · LW link
What’s going on with AI progress and trends? (As of 5/2025)
ryan_greenblatt · 2 May 2025 19:00 UTC · 75 points · 8 comments · 8 min read · LW link
7+ tractable directions in AI control
Julian Stastny and ryan_greenblatt · 28 Apr 2025 17:12 UTC · 93 points · 1 comment · 13 min read · LW link
To be legible, evidence of misalignment probably has to be behavioral
ryan_greenblatt · 15 Apr 2025 18:14 UTC · 57 points · 19 comments · 3 min read · LW link
Why do misalignment risks increase as AIs get more capable?
ryan_greenblatt · 11 Apr 2025 3:06 UTC · 33 points · 6 comments · 3 min read · LW link
An overview of areas of control work
ryan_greenblatt · 25 Mar 2025 22:02 UTC · 32 points · 0 comments · 28 min read · LW link