
Charlie Steiner

Karma: 8,003

If you want to chat, message me!

LW1.0 username: Manfred. PhD in condensed matter physics. I think and write independently about value learning.

Modeling humans: what’s the point?

Charlie Steiner · Nov 10, 2020, 1:30 AM
10 points
1 comment · 3 min read · LW link

[Question] What to do with imitation humans, other than asking them what the right thing to do is?

Charlie Steiner · Sep 27, 2020, 9:51 PM
10 points
6 comments · 1 min read · LW link

Charlie Steiner’s Shortform

Charlie Steiner · Aug 4, 2020, 6:28 AM
6 points
54 comments · LW link

Constraints from naturalized ethics.

Charlie Steiner · Jul 25, 2020, 2:54 PM
21 points
0 comments · 3 min read · LW link

Meta-preferences are weird

Jul 16, 2020, 11:03 PM
13 points
2 comments · 5 min read · LW link

Down with Solomonoff Induction, up with the Presumptuous Philosopher

Charlie Steiner · Jun 12, 2020, 9:44 AM
13 points
10 comments · 2 min read · LW link

The Presumptuous Philosopher, self-locating information, and Solomonoff induction

Charlie Steiner · May 31, 2020, 4:35 PM
58 points
28 comments · 3 min read · LW link

Life as metaphor for everything else.

Charlie Steiner · Apr 5, 2020, 7:21 AM
29 points
11 comments · 4 min read · LW link

Meta-preferences two ways: generator vs. patch

Charlie Steiner · Apr 1, 2020, 12:51 AM
18 points
0 comments · 2 min read · LW link

Gricean communication and meta-preferences

Charlie Steiner · Feb 10, 2020, 5:05 AM
24 points
0 comments · 3 min read · LW link

Impossible moral problems and moral authority

Charlie Steiner · Nov 18, 2019, 9:28 AM
22 points
8 comments · 3 min read · LW link

What’s the dream for giving natural language commands to AI?

Charlie Steiner · Oct 8, 2019, 1:42 PM
14 points
8 comments · 7 min read · LW link

The AI is the model

Charlie Steiner · Oct 4, 2019, 8:11 AM
14 points
1 comment · 3 min read · LW link

Can we make peace with moral indeterminacy?

Charlie Steiner · Oct 3, 2019, 12:56 PM
16 points
8 comments · 4 min read · LW link

The Artificial Intentional Stance

Charlie Steiner · Jul 27, 2019, 7:00 AM
12 points
0 comments · 4 min read · LW link

Some Comments on Stuart Armstrong’s “Research Agenda v0.9”

Charlie Steiner · Jul 8, 2019, 7:03 PM
21 points
12 comments · 4 min read · LW link

Training human models is an unsolved problem

Charlie Steiner · May 10, 2019, 7:17 AM
13 points
3 comments · 4 min read · LW link

Value learning for moral essentialists

Charlie Steiner · May 6, 2019, 9:05 AM
11 points
3 comments · 3 min read · LW link

Humans aren’t agents—what then for value learning?

Charlie Steiner · Mar 15, 2019, 10:01 PM
28 points
16 comments · 3 min read · LW link

How to get value learning and reference wrong

Charlie Steiner · Feb 26, 2019, 8:22 PM
40 points
2 comments · 6 min read · LW link