
Charlie Steiner

Karma: 6,678

If you want to chat, message me!

LW 1.0 username: Manfred. PhD in condensed matter physics. I think and write independently about value learning.

Limited agents need approximate induction

Charlie Steiner · 24 Apr 2015 21:22 UTC · 2 points · 0 comments · 1 min read · LW link (lesswrong.com)

Philosophy of Numbers (part 1)

Charlie Steiner · 2 Dec 2017 18:20 UTC · 11 points · 14 comments · 3 min read · LW link

Philosophy of Numbers (part 2)

Charlie Steiner · 19 Dec 2017 13:57 UTC · 3 points · 10 comments · 5 min read · LW link

Dan Dennett on Stances

Charlie Steiner · 27 Dec 2017 8:15 UTC · 5 points · 0 comments · 1 min read · LW link (ase.tufts.edu)

Empirical philosophy and inversions

Charlie Steiner · 29 Dec 2017 12:12 UTC · 3 points · 0 comments · 2 min read · LW link

Explanations: Ignorance vs. Confusion

Charlie Steiner · 16 Jan 2018 10:44 UTC · 7 points · 2 comments · 2 min read · LW link

A useful level distinction

Charlie Steiner · 24 Feb 2018 6:39 UTC · 8 points · 4 comments · 2 min read · LW link

Book Review: Consciousness Explained

Charlie Steiner · 6 Mar 2018 3:32 UTC · 48 points · 20 comments · 21 min read · LW link

Is this what FAI outreach success looks like?

Charlie Steiner · 9 Mar 2018 13:12 UTC · 17 points · 3 comments · 1 min read · LW link (www.youtube.com)

Boltzmann Brains and Within-model vs. Between-models Probability

Charlie Steiner · 14 Jul 2018 9:52 UTC · 15 points · 12 comments · 3 min read · LW link

Can few-shot learning teach AI right from wrong?

Charlie Steiner · 20 Jul 2018 7:45 UTC · 13 points · 3 comments · 6 min read · LW link

Philosophy as low-energy approximation

Charlie Steiner · 5 Feb 2019 19:34 UTC · 40 points · 20 comments · 3 min read · LW link

How to get value learning and reference wrong

Charlie Steiner · 26 Feb 2019 20:22 UTC · 37 points · 2 comments · 6 min read · LW link

Humans aren’t agents—what then for value learning?

Charlie Steiner · 15 Mar 2019 22:01 UTC · 21 points · 14 comments · 3 min read · LW link

Value learning for moral essentialists

Charlie Steiner · 6 May 2019 9:05 UTC · 11 points · 3 comments · 3 min read · LW link

Training human models is an unsolved problem

Charlie Steiner · 10 May 2019 7:17 UTC · 13 points · 3 comments · 4 min read · LW link

Some Comments on Stuart Armstrong’s “Research Agenda v0.9”

Charlie Steiner · 8 Jul 2019 19:03 UTC · 21 points · 12 comments · 4 min read · LW link

The Artificial Intentional Stance

Charlie Steiner · 27 Jul 2019 7:00 UTC · 12 points · 0 comments · 4 min read · LW link

Can we make peace with moral indeterminacy?

Charlie Steiner · 3 Oct 2019 12:56 UTC · 16 points · 8 comments · 3 min read · LW link

The AI is the model

Charlie Steiner · 4 Oct 2019 8:11 UTC · 14 points · 1 comment · 3 min read · LW link