
Ben Smith

Karma: 314

Who Aligns the Alignment Researchers?

Ben Smith · 5 Mar 2023 23:22 UTC
46 points · 0 comments · 11 min read · LW link

Nature: “Stop talking about tomorrow’s AI doomsday when AI poses risks today”

Ben Smith · 28 Jun 2023 5:59 UTC
40 points · 8 comments · 2 min read · LW link
(www.nature.com)

A brief review of the reasons multi-objective RL could be important in AI Safety Research

Ben Smith · 29 Sep 2021 17:09 UTC
30 points · 7 comments · 10 min read · LW link

Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety

Ben Smith · 9 Feb 2024 6:40 UTC
22 points · 0 comments · 1 min read · LW link
(www.nist.gov)

Signaling Virtuous Victimhood as Indicators of Dark Triad Personalities

Ben Smith · 26 Aug 2021 19:18 UTC
18 points · 3 comments · 1 min read · LW link
(mlpol.net)

The intelligence-sentience orthogonality thesis

Ben Smith · 13 Jul 2023 6:55 UTC
18 points · 9 comments · 9 min read · LW link

AMC’s animated series “Pantheon” is relevant to our interests

Ben Smith · 10 Oct 2022 5:59 UTC
14 points · 3 comments · 1 min read · LW link

Sets of objectives for a multi-objective RL agent to optimize

23 Nov 2022 6:49 UTC
11 points · 0 comments · 8 min read · LW link

Can we achieve AGI Alignment by balancing multiple human objectives?

Ben Smith · 3 Jul 2022 2:51 UTC
11 points · 1 comment · 4 min read · LW link

That-time-of-year Astral Codex Ten Meetup

Ben Smith · 17 Aug 2022 0:02 UTC
3 points · 2 comments · 1 min read · LW link

Grant-making in EA should consider peer-reviewing grant applications along the public-sector model

Ben Smith · 24 Jan 2023 15:01 UTC
0 points · 3 comments · 1 min read · LW link