Ben Smith (Karma: 192)
Who Aligns the Alignment Researchers?
Ben Smith · 5 Mar 2023 23:22 UTC · 40 points · 0 comments · 11 min read · LW link
Grant-making in EA should consider peer-reviewing grant applications along the public-sector model
Ben Smith · 24 Jan 2023 15:01 UTC · 0 points · 3 comments · 1 min read · LW link
Sets of objectives for a multi-objective RL agent to optimize
Ben Smith and Roland Pihlakas · 23 Nov 2022 6:49 UTC · 11 points · 0 comments · 8 min read · LW link
AMC’s animated series “Pantheon” is relevant to our interests
Ben Smith · 10 Oct 2022 5:59 UTC · 13 points · 3 comments · 1 min read · LW link
That-time-of-year Astral Codex Ten Meetup
Ben Smith · 17 Aug 2022 0:02 UTC · 3 points · 2 comments · 1 min read · LW link
Can we achieve AGI Alignment by balancing multiple human objectives?
Ben Smith · 3 Jul 2022 2:51 UTC · 11 points · 1 comment · 4 min read · LW link
A brief review of the reasons multi-objective RL could be important in AI Safety Research
Ben Smith · 29 Sep 2021 17:09 UTC · 30 points · 7 comments · 10 min read · LW link
Signaling Virtuous Victimhood as Indicators of Dark Triad Personalities
Ben Smith · 26 Aug 2021 19:18 UTC · 18 points · 3 comments · 1 min read · LW link (mlpol.net)