
Dutch-Booking CDT: Revised Argument

abramdemski · 27 Oct 2020 4:31 UTC
46 points
8 comments · 16 min read · LW link

Security Mindset and Takeoff Speeds

DanielFilan · 27 Oct 2020 3:20 UTC
43 points
4 comments · 8 min read · LW link
(danielfilan.com)

A Correspondence Theorem

johnswentworth · 26 Oct 2020 23:28 UTC
18 points
2 comments · 9 min read · LW link

Additive Operations on Cartesian Frames

Scott Garrabrant · 26 Oct 2020 15:12 UTC
51 points
3 comments · 12 min read · LW link

Supervised learning of outputs in the brain

steve2152 · 26 Oct 2020 14:32 UTC
19 points
0 comments · 10 min read · LW link

Reply to Jebari and Lundborg on Artificial Superintelligence

Richard_Ngo · 25 Oct 2020 13:50 UTC
26 points
1 comment · 5 min read · LW link
(thinkingcomplete.blogspot.com)

Humans are stunningly rational and stunningly irrational

Stuart_Armstrong · 23 Oct 2020 14:13 UTC
21 points
4 comments · 2 min read · LW link

Introduction to Cartesian Frames

Scott Garrabrant · 22 Oct 2020 13:00 UTC
117 points
15 comments · 22 min read · LW link

The date of AI Takeover is not the day the AI takes over

Daniel Kokotajlo · 22 Oct 2020 10:41 UTC
86 points
18 comments · 2 min read · LW link

[AN #122]: Arguing for AGI-driven existential risk from first principles

rohinmshah · 21 Oct 2020 17:10 UTC
28 points
0 comments · 9 min read · LW link
(mailchi.mp)

[Question] Problems Involving Abstraction?

johnswentworth · 20 Oct 2020 16:49 UTC
31 points
12 comments · 1 min read · LW link

Box inversion hypothesis

Jan Kulveit · 20 Oct 2020 16:20 UTC
51 points
4 comments · 3 min read · LW link

[AN #121]: Forecasting transformative AI timelines using biological anchors

rohinmshah · 14 Oct 2020 17:20 UTC
22 points
5 comments · 14 min read · LW link
(mailchi.mp)

The Solomonoff Prior is Malign

Mark Xu · 14 Oct 2020 1:33 UTC
120 points
34 comments · 16 min read · LW link

Toy Problem: Detective Story Alignment

johnswentworth · 13 Oct 2020 21:02 UTC
32 points
4 comments · 2 min read · LW link

Knowledge, manipulation, and free will

Stuart_Armstrong · 13 Oct 2020 17:47 UTC
31 points
15 comments · 3 min read · LW link

Online AI Safety Discussion Day

Linda Linsefors · 8 Oct 2020 12:11 UTC
5 points
0 comments · 1 min read · LW link

[AN #120]: Tracing the intellectual roots of AI and AI alignment

rohinmshah · 7 Oct 2020 17:10 UTC
13 points
4 comments · 10 min read · LW link
(mailchi.mp)

The Alignment Problem: Machine Learning and Human Values

rohinmshah · 6 Oct 2020 17:41 UTC
92 points
5 comments · 6 min read · LW link
(www.amazon.com)

AGI safety from first principles: Conclusion

Richard_Ngo · 4 Oct 2020 23:06 UTC
44 points
1 comment · 3 min read · LW link