
A Primer on Matrix Calculus, Part 3: The Chain Rule

Matthew Barnett
17 Aug 2019 1:50 UTC
5 points
2 comments · 6 min read · LW link

Beliefs Are For True Things

Davis_Kingsley
15 Aug 2019 23:23 UTC
7 points
4 comments · 3 min read · LW link

[Question] What experiments would demonstrate “upper limits of augmented working memory?”

Raemon
15 Aug 2019 22:09 UTC
29 points
1 comment · 2 min read · LW link

Clarifying some key hypotheses in AI alignment

Ben Cottier
15 Aug 2019 21:29 UTC
57 points
1 comment · 9 min read · LW link

A Primer on Matrix Calculus, Part 2: Jacobians and other fun

Matthew Barnett
15 Aug 2019 1:13 UTC
16 points
3 comments · 6 min read · LW link

Predicted AI alignment event/meeting calendar

rmoehn
14 Aug 2019 7:14 UTC
22 points
6 comments · 1 min read · LW link

“Designing agent incentives to avoid reward tampering”, DeepMind

gwern
14 Aug 2019 16:57 UTC
22 points
7 comments · 1 min read · LW link
(medium.com)

Subagents, trauma and rationality

Kaj_Sotala
14 Aug 2019 13:14 UTC
48 points
1 comment · 19 min read · LW link

Natural laws should be explicit constraints on strategy space

ryan_b
13 Aug 2019 20:22 UTC
7 points
2 comments · 1 min read · LW link

Distance Functions are Hard

Grue_Slinky
13 Aug 2019 17:33 UTC
40 points
13 comments · 6 min read · LW link

Book Review: Secular Cycles

Scott Alexander
13 Aug 2019 4:10 UTC
51 points
1 comment · 16 min read · LW link
(slatestarcodex.com)

A Primer on Matrix Calculus, Part 1: Basic review

Matthew Barnett
12 Aug 2019 23:44 UTC
18 points
1 comment · 7 min read · LW link

[Question] What explanatory power does Kahneman’s System 2 possess?

ricraz
12 Aug 2019 15:23 UTC
34 points
3 comments · 1 min read · LW link

Adjectives from the Future: The Dangers of Result-based Descriptions

Pradeep_Kumar
11 Aug 2019 19:19 UTC
16 points
6 comments · 11 min read · LW link

[Question] Could we solve this email mess if we all moved to paid emails?

jacobjacob
11 Aug 2019 16:31 UTC
31 points
46 comments · 4 min read · LW link

[Question] Does human choice have to be transitive in order to be rational/consistent?

jmh
11 Aug 2019 1:49 UTC
9 points
6 comments · 1 min read · LW link

AI Safety Reading Group

Søren Elverlin
11 Aug 2019 9:01 UTC
16 points
6 comments · 1 min read · LW link

Diana Fleischman and Geoffrey Miller - Audience Q&A

Jacobian
10 Aug 2019 22:37 UTC
37 points
14 comments · 9 min read · LW link

Intransitive Preferences You Can’t Pump

zulupineapple
9 Aug 2019 23:10 UTC
2 points
2 comments · 1 min read · LW link

Categorial preferences and utility functions

DavidHolmes
9 Aug 2019 21:36 UTC
7 points
4 comments · 4 min read · LW link