Anthony DiGiovanni

Karma: 540

(Formerly “antimonyanthony.”) I’m an s-risk-focused AI safety researcher at the Center on Long-Term Risk. I (occasionally) write about altruism-relevant topics on my Substack. All opinions my own.

In defense of anthropically updating EDT

Anthony DiGiovanni · 5 Mar 2024 6:21 UTC
17 points
16 comments · 15 min read · LW link

Making AIs less likely to be spiteful

26 Sep 2023 14:12 UTC
89 points
2 comments · 10 min read · LW link

Responses to apparent rationalist confusions about game / decision theory

Anthony DiGiovanni · 30 Aug 2023 22:02 UTC
140 points
14 comments · 12 min read · LW link

antimonyanthony’s Shortform

Anthony DiGiovanni · 11 Apr 2023 13:10 UTC
3 points
3 comments · 1 min read · LW link

When is intent alignment sufficient or necessary to reduce AGI conflict?

14 Sep 2022 19:39 UTC
40 points
0 comments · 9 min read · LW link

When would AGIs engage in conflict?

14 Sep 2022 19:38 UTC
52 points
5 comments · 13 min read · LW link

When does technical work to reduce AGI conflict make a difference?: Introduction

14 Sep 2022 19:38 UTC
52 points
3 comments · 6 min read · LW link