AGI safety from first principles: Control

Richard_Ngo · 2 Oct 2020 21:51 UTC
60 points
6 comments · 9 min read · LW link

Muting on Group Calls

jefftk · 2 Oct 2020 20:30 UTC
12 points
2 comments · 1 min read · LW link
(www.jefftk.com)

Attention to snakes not fear of snakes: evolution encoding environmental knowledge in peripheral systems

Kaj_Sotala · 2 Oct 2020 11:50 UTC
46 points
1 comment · 3 min read · LW link
(kajsotala.fi)

Math That Clicks: Look for Two-Way Correspondences

TurnTrout · 2 Oct 2020 1:22 UTC
35 points
4 comments · 3 min read · LW link

A simple device for indoor air management

Richard Korzekwa · 2 Oct 2020 1:02 UTC
49 points
10 comments · 3 min read · LW link

Sunday Meetup: Workshop on Online Surveys with Spencer Greenberg

Raemon · 2 Oct 2020 0:34 UTC
27 points
5 comments · 1 min read · LW link

Linkpost: Choice Explains Positivity and Confirmation Bias

Gunnar_Zarncke · 1 Oct 2020 21:46 UTC
8 points
0 comments · 1 min read · LW link

Open & Welcome Thread – October 2020

Ben Pace · 1 Oct 2020 19:06 UTC
14 points
54 comments · 1 min read · LW link

Hiring engineers and researchers to help align GPT-3

paulfchristiano · 1 Oct 2020 18:54 UTC
206 points
13 comments · 3 min read · LW link

Covid 10/1: The Long Haul

Zvi · 1 Oct 2020 18:00 UTC
95 points
22 comments · 9 min read · LW link
(thezvi.wordpress.com)

Words and Implications

johnswentworth · 1 Oct 2020 17:37 UTC
61 points
25 comments · 8 min read · LW link

Your Standards are Too High

Neel Nanda · 1 Oct 2020 17:03 UTC
23 points
2 comments · 14 min read · LW link
(neelnanda.io)

Three car seats?

jefftk · 1 Oct 2020 14:30 UTC
18 points
9 comments · 1 min read · LW link
(www.jefftk.com)

Forecasting Newsletter: September 2020.

NunoSempere · 1 Oct 2020 11:00 UTC
21 points
3 comments · 11 min read · LW link

[Question] Babble challenge: 50 ways of sending something to the moon

jacobjacob · 1 Oct 2020 4:20 UTC
94 points
114 comments · 2 min read · LW link · 1 review

AGI safety from first principles: Alignment

Richard_Ngo · 1 Oct 2020 3:13 UTC
59 points
3 comments · 13 min read · LW link

How to not be an alarmist

DirectedEvolution · 30 Sep 2020 21:35 UTC
8 points
2 comments · 2 min read · LW link

[Question] Competence vs Alignment

Ariel Kwiatkowski · 30 Sep 2020 21:03 UTC
7 points
4 comments · 1 min read · LW link

“Zero Sum” is a misnomer.

abramdemski · 30 Sep 2020 18:25 UTC
113 points
34 comments · 6 min read · LW link

Evaluating Life Extension Advocacy Foundation

emanuele ascani · 30 Sep 2020 18:04 UTC
7 points
7 comments · 5 min read · LW link

[AN #119]: AI safety when agents are shaped by environments, not rewards

Rohin Shah · 30 Sep 2020 17:10 UTC
11 points
0 comments · 11 min read · LW link
(mailchi.mp)

Learning how to learn

Neel Nanda · 30 Sep 2020 16:50 UTC
38 points
0 comments · 15 min read · LW link
(www.neelnanda.io)

Industrial literacy

jasoncrawford · 30 Sep 2020 16:39 UTC
301 points
130 comments · 3 min read · LW link
(rootsofprogress.org)

Jason Crawford on the non-linear model of innovation: SSC Online Meetup

JoshuaFox · 30 Sep 2020 10:13 UTC
7 points
1 comment · 1 min read · LW link

Holy Grails of Chemistry

chemslug · 30 Sep 2020 2:03 UTC
34 points
2 comments · 1 min read · LW link

“Unsupervised” translation as an (intent) alignment problem

paulfchristiano · 30 Sep 2020 0:50 UTC
61 points
15 comments · 4 min read · LW link
(ai-alignment.com)

[Question] Examples of self-governance to reduce technology risk?

Jia · 29 Sep 2020 19:31 UTC
10 points
4 comments · 1 min read · LW link

AGI safety from first principles: Goals and Agency

Richard_Ngo · 29 Sep 2020 19:06 UTC
76 points
15 comments · 15 min read · LW link

Seek Upside Risk

Neel Nanda · 29 Sep 2020 16:47 UTC
20 points
6 comments · 9 min read · LW link
(www.neelnanda.io)

Doing discourse better: Stuff I wish I knew

dynomight · 29 Sep 2020 14:34 UTC
27 points
11 comments · 1 min read · LW link
(dyno-might.github.io)

David Friedman on Legal Systems Very Different from Ours: SlateStarCodex Online Meetup

JoshuaFox · 29 Sep 2020 11:18 UTC
10 points
1 comment · 1 min read · LW link

Reading Discussion Group

NoSignalNoNoise · 29 Sep 2020 3:59 UTC
6 points
0 comments · 1 min read · LW link

Cambridge Virtual LW/SSC Meetup

NoSignalNoNoise · 29 Sep 2020 3:42 UTC
6 points
0 comments · 1 min read · LW link

AGI safety from first principles: Superintelligence

Richard_Ngo · 28 Sep 2020 19:53 UTC
86 points
8 comments · 9 min read · LW link

AGI safety from first principles: Introduction

Richard_Ngo · 28 Sep 2020 19:53 UTC
121 points
18 comments · 2 min read · LW link · 1 review

[Question] is scope insensitivity really a brain error?

Kaarlo Tuomi · 28 Sep 2020 18:37 UTC
4 points
15 comments · 1 min read · LW link

[Question] What Decision Theory is Implied By Predictive Processing?

johnswentworth · 28 Sep 2020 17:20 UTC
56 points
17 comments · 1 min read · LW link

[Question] What are examples of Rationalist fable-like stories?

Mati_Roy · 28 Sep 2020 16:52 UTC
19 points
42 comments · 1 min read · LW link

Macro-Procrastination

Neel Nanda · 28 Sep 2020 16:07 UTC
9 points
0 comments · 9 min read · LW link
(www.neelnanda.io)

[Question] What are good ice breaker questions for meeting people in this community?

Mati_Roy · 28 Sep 2020 15:07 UTC
9 points
2 comments · 1 min read · LW link

On Destroying the World

Chris_Leong · 28 Sep 2020 7:38 UTC
78 points
86 comments · 5 min read · LW link

“Win First” vs “Chill First”

lionhearted (Sebastian Marshall) · 28 Sep 2020 6:48 UTC
101 points
20 comments · 3 min read · LW link

On “Not Screwing Up Ritual Candles”

Raemon · 27 Sep 2020 21:55 UTC
48 points
7 comments · 3 min read · LW link

[Question] What to do with imitation humans, other than asking them what the right thing to do is?

Charlie Steiner · 27 Sep 2020 21:51 UTC
10 points
6 comments · 1 min read · LW link

[Question] What are good rationality exercises?

Ben Pace · 27 Sep 2020 21:25 UTC
54 points
25 comments · 1 min read · LW link · 1 review

Puzzle Games

Scott Garrabrant · 27 Sep 2020 21:14 UTC
56 points
69 comments · 7 min read · LW link

[Question] What hard science fiction stories also got the social sciences right?

Mati_Roy · 27 Sep 2020 20:37 UTC
15 points
30 comments · 1 min read · LW link

Tips for the most immersive video calls

benkuhn · 27 Sep 2020 20:36 UTC
60 points
9 comments · 15 min read · LW link
(www.benkuhn.net)

A long reply to Ben Garfinkel on Scrutinizing Classic AI Risk Arguments

Søren Elverlin · 27 Sep 2020 17:51 UTC
17 points
6 comments · 1 min read · LW link

Not all communication is manipulation: Chaperones don’t manipulate proteins

ChristianKl · 27 Sep 2020 16:45 UTC
35 points
14 comments · 2 min read · LW link