axioman (Karma: 138)
Proposal: Scaling laws for RL generalization
axioman · 1 Oct 2021 21:32 UTC · 14 points · 12 comments · 11 min read · LW link
Forecasting AI Progress: A Research Agenda
rossg and axioman · 10 Aug 2020 1:04 UTC · 39 points · 4 comments · 1 min read · LW link
How can Interpretability help Alignment?
RobertKirk, Tomáš Gavenčiak and axioman · 23 May 2020 16:16 UTC · 37 points · 3 comments · 9 min read · LW link