Akash

Karma: 4,486

Time-Time Tradeoffs

Akash · 10 Apr 2022 2:33 UTC
17 points
1 comment · 3 min read · LW link
(forum.effectivealtruism.org)

Lifeguards

Akash · 15 Jun 2022 23:03 UTC
12 points
3 comments · 2 min read · LW link
(forum.effectivealtruism.org)

Conversation with Eliezer: What do you want the system to do?

Akash · 25 Jun 2022 17:36 UTC
120 points
38 comments · 2 min read · LW link

A summary of every Replacing Guilt post

Akash · 30 Jun 2022 0:46 UTC
31 points
3 comments · 10 min read · LW link
(forum.effectivealtruism.org)

$500 bounty for alignment contest ideas

Akash · 30 Jun 2022 1:56 UTC
29 points
5 comments · 2 min read · LW link

A summary of every “Highlights from the Sequences” post

Akash · 15 Jul 2022 23:01 UTC
94 points
7 comments · 17 min read · LW link

Four questions I ask AI safety researchers

Akash · 17 Jul 2022 17:25 UTC
17 points
0 comments · 1 min read · LW link

An unofficial “Highlights from the Sequences” tier list

Akash · 5 Sep 2022 14:07 UTC
29 points
1 comment · 5 min read · LW link

AI Safety field-building projects I’d like to see

Akash · 11 Sep 2022 23:43 UTC
44 points
7 comments · 6 min read · LW link

Understanding Conjecture: Notes from Connor Leahy interview

Akash · 15 Sep 2022 18:37 UTC
107 points
23 comments · 15 min read · LW link

Apply for mentorship in AI Safety field-building

Akash · 17 Sep 2022 19:06 UTC
9 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Alignment Org Cheat Sheet

20 Sep 2022 17:36 UTC
69 points
8 comments · 4 min read · LW link

7 traps that (we think) new alignment researchers often fall into

27 Sep 2022 23:13 UTC
174 points
10 comments · 4 min read · LW link

Possible miracles

9 Oct 2022 18:17 UTC
64 points
33 comments · 8 min read · LW link

Consider trying Vivek Hebbar’s alignment exercises

Akash · 24 Oct 2022 19:46 UTC
38 points
1 comment · 4 min read · LW link

Resources that (I think) new alignment researchers should know about

Akash · 28 Oct 2022 22:13 UTC
77 points
9 comments · 4 min read · LW link

Instead of technical research, more people should focus on buying time

5 Nov 2022 20:43 UTC
100 points
45 comments · 14 min read · LW link

Ways to buy time

12 Nov 2022 19:31 UTC
34 points
23 comments · 12 min read · LW link

Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility

22 Nov 2022 22:19 UTC
73 points
20 comments · 4 min read · LW link

Podcast: Shoshannah Tekofsky on skilling up in AI safety, visiting Berkeley, and developing novel research ideas

Akash · 25 Nov 2022 20:47 UTC
37 points
2 comments · 9 min read · LW link