Ben Smith

Karma: 192

Who Aligns the Alignment Researchers?

Ben Smith · 5 Mar 2023 23:22 UTC · 40 points · 0 comments · 11 min read · LW link

Grant-making in EA should consider peer-reviewing grant applications along the public-sector model

Ben Smith · 24 Jan 2023 15:01 UTC · 0 points · 3 comments · 1 min read · LW link

Sets of objectives for a multi-objective RL agent to optimize

23 Nov 2022 6:49 UTC · 11 points · 0 comments · 8 min read · LW link

AMC’s animated series “Pantheon” is relevant to our interests

Ben Smith · 10 Oct 2022 5:59 UTC · 13 points · 3 comments · 1 min read · LW link

That-time-of-year Astral Codex Ten Meetup

Ben Smith · 17 Aug 2022 0:02 UTC · 3 points · 2 comments · 1 min read · LW link

Can we achieve AGI Alignment by balancing multiple human objectives?

Ben Smith · 3 Jul 2022 2:51 UTC · 11 points · 1 comment · 4 min read · LW link

A brief review of the reasons multi-objective RL could be important in AI Safety Research

Ben Smith · 29 Sep 2021 17:09 UTC · 30 points · 7 comments · 10 min read · LW link

Signaling Virtuous Victimhood as Indicators of Dark Triad Personalities

Ben Smith · 26 Aug 2021 19:18 UTC · 18 points · 3 comments · 1 min read · LW link