Vael Gates
Karma: 682
Retrospective on the AI Safety Field Building Hub
Vael Gates · 2 Feb 2023 2:06 UTC · 30 points · 0 comments · 1 min read · LW link

Interviews with 97 AI Researchers: Quantitative Analysis
Maheen Shermohammed and Vael Gates · 2 Feb 2023 1:01 UTC · 22 points · 0 comments · 7 min read · LW link

“AI Risk Discussions” website: Exploring interviews from 97 AI Researchers
Vael Gates, Lukas Trötzmüller, Maheen Shermohammed, michaelkeenan and zchuang · 2 Feb 2023 1:00 UTC · 43 points · 1 comment · 1 min read · LW link

Predicting researcher interest in AI alignment
Vael Gates · 2 Feb 2023 0:58 UTC · 25 points · 0 comments · 1 min read · LW link

What AI Safety Materials Do ML Researchers Find Compelling?
Vael Gates and Collin · 28 Dec 2022 2:03 UTC · 174 points · 34 comments · 2 min read · LW link

Announcing the AI Safety Field Building Hub, a new effort to provide AISFB projects, mentorship, and funding
Vael Gates · 28 Jul 2022 21:29 UTC · 49 points · 3 comments · 6 min read · LW link

Resources I send to AI researchers about AI safety
Vael Gates · 14 Jun 2022 2:24 UTC · 69 points · 12 comments · 11 min read · LW link

Vael Gates: Risks from Advanced AI (June 2022)
Vael Gates · 14 Jun 2022 0:54 UTC · 38 points · 2 comments · 30 min read · LW link

Transcripts of interviews with AI researchers
Vael Gates · 9 May 2022 5:57 UTC · 169 points · 9 comments · 2 min read · LW link

Self-studying to develop an inside-view model of AI alignment; co-studiers welcome!
Vael Gates · 30 Nov 2021 9:25 UTC · 13 points · 0 comments · 4 min read · LW link