
Stuart_Armstrong

Karma: 20,222

In SIA, reference classes (almost) don’t matter

Stuart_Armstrong
17 Jan 2019 11:29 UTC
17 points
12 comments · 1 min read · LW link

Anthropics is pretty normal

Stuart_Armstrong
17 Jan 2019 13:26 UTC
15 points
3 comments · 8 min read · LW link

The questions and classes of SSA

Stuart_Armstrong
17 Jan 2019 11:50 UTC
9 points
0 comments · 3 min read · LW link

Solving the Doomsday argument

Stuart_Armstrong
17 Jan 2019 12:32 UTC
8 points
6 comments · 1 min read · LW link

Anthropics: Full Non-indexical Conditioning (FNC) is inconsistent

Stuart_Armstrong
14 Jan 2019 15:03 UTC
22 points
4 comments · 2 min read · LW link

Anthropic probabilities: answering different questions

Stuart_Armstrong
14 Jan 2019 18:50 UTC
16 points
1 comment · 2 min read · LW link

Hierarchical system preferences and subagent preferences

Stuart_Armstrong
11 Jan 2019 18:47 UTC
19 points
2 comments · 4 min read · LW link

What emotions would AIs need to feel?

Stuart_Armstrong
8 Jan 2019 15:09 UTC
15 points
6 comments · 2 min read · LW link

No surjection onto function space for manifold X

Stuart_Armstrong
9 Jan 2019 18:07 UTC
19 points
0 comments · 6 min read · LW link

[Question] Latex rendering

Stuart_Armstrong
9 Jan 2019 22:32 UTC
10 points
6 comments · 1 min read · LW link

Why we need a *theory* of human values

Stuart_Armstrong
5 Dec 2018 16:00 UTC
53 points
7 comments · 4 min read · LW link

Assuming we’ve solved X, could we do Y...

Stuart_Armstrong
11 Dec 2018 18:13 UTC
34 points
15 comments · 2 min read · LW link

A hundred Shakespeares

Stuart_Armstrong
11 Dec 2018 23:11 UTC
31 points
4 comments · 2 min read · LW link

Anthropic paradoxes transposed into Anthropic Decision Theory

Stuart_Armstrong
19 Dec 2018 18:07 UTC
19 points
23 comments · 4 min read · LW link

Anthropic probabilities and cost functions

Stuart_Armstrong
21 Dec 2018 17:54 UTC
16 points
1 comment · 1 min read · LW link

Humans can be assigned any values whatsoever…

Stuart_Armstrong
5 Nov 2018 14:26 UTC
43 points
8 comments · 4 min read · LW link

Bounded rationality abounds in models, not explicitly defined

Stuart_Armstrong
11 Dec 2018 19:34 UTC
12 points
9 comments · 1 min read · LW link

Figuring out what Alice wants: non-human Alice

Stuart_Armstrong
11 Dec 2018 19:31 UTC
12 points
16 comments · 2 min read · LW link

Disagreement with Paul: alignment induction

Stuart_Armstrong
10 Sep 2018 13:54 UTC
33 points
6 comments · 1 min read · LW link

Using expected utility for Good(hart)

Stuart_Armstrong
27 Aug 2018 3:32 UTC
39 points
5 comments · 7 min read · LW link