AI Alignment Intro Materials

Last edit: 4 Nov 2022 21:50 UTC by Raemon

AI Alignment Intro Materials are posts that help someone get oriented and skill up. They are distinct from AI Public Materials in that they are more "inward facing" than "outward facing" — i.e., aimed at people who are already sold that AI risk is a problem and want to upskill.

A newcomer's guide to the technical AI safety field

zeshen · 4 Nov 2022 14:29 UTC
34 points
3 comments · 10 min read · LW link

12 career-related questions that may (or may not) be helpful for people interested in alignment research

Akash · 12 Dec 2022 22:36 UTC
20 points
0 comments · 2 min read · LW link

My first year in AI alignment

Alex_Altair · 2 Jan 2023 1:28 UTC
56 points
10 comments · 7 min read · LW link

List of links for getting into AI safety

zef · 4 Jan 2023 19:45 UTC
5 points
0 comments · 1 min read · LW link

Alignment Org Cheat Sheet

20 Sep 2022 17:36 UTC
63 points
6 comments · 4 min read · LW link

How to pursue a career in technical AI alignment

charlie.rs · 4 Jun 2022 21:11 UTC
64 points
0 comments · 39 min read · LW link

Levelling Up in AI Safety Research Engineering

Gabriel Mukobi · 2 Sep 2022 4:59 UTC
44 points
9 comments · 17 min read · LW link

The Alignment Problem from a Deep Learning Perspective (major rewrite)

10 Jan 2023 16:06 UTC
75 points
8 comments · 39 min read · LW link
(arxiv.org)

AGI doesn't need understanding, intention, or consciousness in order to kill us, only intelligence

James Blaha · 20 Feb 2023 0:55 UTC
10 points
2 comments · 18 min read · LW link

[Question] Best resources to learn philosophy of mind and AI?

Sky Moo · 27 Mar 2023 18:22 UTC
1 point
0 comments · 1 min read · LW link