
Vael Gates

Karma: 743

Offering AI safety support calls for ML professionals

Vael Gates · 15 Feb 2024 23:48 UTC
61 points
1 comment · 1 min read · LW link

Retrospective on the AI Safety Field Building Hub

Vael Gates · 2 Feb 2023 2:06 UTC
30 points
0 comments · 1 min read · LW link

Interviews with 97 AI Researchers: Quantitative Analysis

2 Feb 2023 1:01 UTC
23 points
0 comments · 7 min read · LW link

“AI Risk Discussions” website: Exploring interviews from 97 AI Researchers

2 Feb 2023 1:00 UTC
43 points
1 comment · 1 min read · LW link

Predicting researcher interest in AI alignment

Vael Gates · 2 Feb 2023 0:58 UTC
25 points
0 comments · 1 min read · LW link

What AI Safety Materials Do ML Researchers Find Compelling?

28 Dec 2022 2:03 UTC
175 points
34 comments · 2 min read · LW link

Announcing the AI Safety Field Building Hub, a new effort to provide AISFB projects, mentorship, and funding

Vael Gates · 28 Jul 2022 21:29 UTC
49 points
3 comments · 6 min read · LW link

Resources I send to AI researchers about AI safety

Vael Gates · 14 Jun 2022 2:24 UTC
69 points
12 comments · 1 min read · LW link

Vael Gates: Risks from Advanced AI (June 2022)

Vael Gates · 14 Jun 2022 0:54 UTC
38 points
2 comments · 30 min read · LW link

Transcripts of interviews with AI researchers

Vael Gates · 9 May 2022 5:57 UTC
169 points
9 comments · 2 min read · LW link

Self-studying to develop an inside-view model of AI alignment; co-studiers welcome!

Vael Gates · 30 Nov 2021 9:25 UTC
13 points
0 comments · 4 min read · LW link