AI alignment researchers don’t (seem to) stack
by So8res · 21 Feb 2023 0:48 UTC · 194 points · 40 comments · 3 min read · LW link
Tags: AI
What links here?
The Learning-Theoretic Agenda: Status 2023 by Vanessa Kosoy (19 Apr 2023 5:21 UTC; 144 points)
AI Safety − 7 months of discussion in 17 minutes by Zoe Williams (EA Forum; 15 Mar 2023 23:41 UTC; 90 points)
How I learned to stop worrying and love skill trees by junk heap homotopy (23 May 2023 4:08 UTC; 83 points)
AI #2 by Zvi (2 Mar 2023 14:50 UTC; 66 points)
Try to solve the hard parts of the alignment problem by Mikhail Samin (18 Mar 2023 14:55 UTC; 54 points)
PonPonPon’s comment on A rough and incomplete review of some of John Wentworth’s research by So8res (28 Mar 2023 22:31 UTC; 43 points)
United We Align: Harnessing Collective Human Intelligence for AI Alignment Progress by Shoshannah Tekofsky (20 Apr 2023 23:19 UTC; 41 points)
EA & LW Forum Weekly Summary (20th − 26th Feb 2023) by Zoe Williams (EA Forum; 27 Feb 2023 3:46 UTC; 29 points)
An open letter to SERI MATS program organisers by Roman Leventov (20 Apr 2023 16:34 UTC; 26 points)
AI Safety − 7 months of discussion in 17 minutes by Zoe Williams (15 Mar 2023 23:41 UTC; 25 points)
How I learned to stop worrying and love skill trees by Clark Urzo (EA Forum; 23 May 2023 8:03 UTC; 22 points)
Part 3: A Proposed Approach for AI Safety Movement Building: Projects, Professions, Skills, and Ideas for the Future [long post][bounty for feedback] by PeterSlattery (EA Forum; 22 Mar 2023 0:54 UTC; 22 points)
Why is LW not about winning? by azergante (13 Jul 2025 22:36 UTC; 21 points)
trevor’s comment on The Kids are Not Okay by Zvi (8 Mar 2023 18:24 UTC; 20 points)
A Proposed Approach for AI Safety Movement Building: Projects, Professions, Skills, and Ideas for the Future [long post][bounty for feedback] by peterslattery (22 Mar 2023 1:11 UTC; 14 points)
Try to solve the hard parts of the alignment problem by MikhailSamin (EA Forum; 11 Jul 2023 17:02 UTC; 8 points)
Should you increase AI alignment funding, or increase AI regulation? by Knight Lee (26 Nov 2024 9:17 UTC; 7 points)
trevor’s comment on On “Geeks, MOPs, and Sociopaths” by alkjash (20 Jan 2024 0:47 UTC; 7 points)
Contrapositive Natural Abstraction—Project Intro by Elliot Callender (24 Jun 2024 18:37 UTC; 4 points)
EA & LW Forum Weekly Summary (20th − 26th Feb 2023) by Zoe Williams (27 Feb 2023 3:46 UTC; 4 points)
Nicholas Kross’s comment on Why Not Just… Build Weak AI Tools For AI Alignment Research? by johnswentworth (5 Mar 2023 17:05 UTC; 3 points)
Adam Zerner’s comment on adamzerner’s Shortform by Adam Zerner (3 Mar 2023 6:24 UTC; 2 points)
The humanity’s biggest mistake by RomanS (10 Mar 2023 16:30 UTC; 0 points)
Crossposted to EA Forum (47 points, 3 comments)