Orpheus16

Karma: 6,666

“Status” can be corrosive; here’s how I handle it

Orpheus16 · Jan 24, 2023, 1:25 AM
71 points
8 comments · 6 min read · LW link

[Linkpost] TIME article: DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution

Orpheus16 · Jan 21, 2023, 4:51 PM
58 points
2 comments · 3 min read · LW link
(time.com)

Wentworth and Larsen on buying time

Jan 9, 2023, 9:31 PM
74 points
6 comments · 12 min read · LW link

[Linkpost] Jan Leike on three kinds of alignment taxes

Orpheus16 · Jan 6, 2023, 11:57 PM
27 points
2 comments · 3 min read · LW link
(aligned.substack.com)

My thoughts on OpenAI’s alignment plan

Orpheus16 · Dec 30, 2022, 7:33 PM
55 points
3 comments · 20 min read · LW link

An overview of some promising work by junior alignment researchers

Orpheus16 · Dec 26, 2022, 5:23 PM
34 points
0 comments · 4 min read · LW link

Podcast: Tamera Lanham on AI risk, threat models, alignment proposals, externalized reasoning oversight, and working at Anthropic

Orpheus16 · Dec 20, 2022, 9:39 PM
18 points
2 comments · 11 min read · LW link

12 career-related questions that may (or may not) be helpful for people interested in alignment research

Orpheus16 · Dec 12, 2022, 10:36 PM
20 points
0 comments · 2 min read · LW link

Podcast: Shoshannah Tekofsky on skilling up in AI safety, visiting Berkeley, and developing novel research ideas

Orpheus16 · Nov 25, 2022, 8:47 PM
37 points
2 comments · 9 min read · LW link

Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility

Nov 22, 2022, 10:19 PM
74 points
20 comments · 4 min read · LW link

Ways to buy time

Nov 12, 2022, 7:31 PM
34 points
23 comments · 12 min read · LW link

Instead of technical research, more people should focus on buying time

Nov 5, 2022, 8:43 PM
101 points
45 comments · 14 min read · LW link

Resources that (I think) new alignment researchers should know about

Orpheus16 · Oct 28, 2022, 10:13 PM
70 points
9 comments · 4 min read · LW link

Consider trying Vivek Hebbar’s alignment exercises

Orpheus16 · Oct 24, 2022, 7:46 PM
38 points
1 comment · 4 min read · LW link

Possible miracles

Oct 9, 2022, 6:17 PM
64 points
34 comments · 8 min read · LW link

7 traps that (we think) new alignment researchers often fall into

Sep 27, 2022, 11:13 PM
177 points
10 comments · 4 min read · LW link

Alignment Org Cheat Sheet

Sep 20, 2022, 5:36 PM
70 points
8 comments · 4 min read · LW link

Apply for mentorship in AI Safety field-building

Orpheus16 · Sep 17, 2022, 7:06 PM
9 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Understanding Conjecture: Notes from Connor Leahy interview

Orpheus16 · Sep 15, 2022, 6:37 PM
107 points
23 comments · 15 min read · LW link

AI Safety field-building projects I’d like to see

Orpheus16 · Sep 11, 2022, 11:43 PM
46 points
8 comments · 6 min read · LW link