Anthony DiGiovanni
Karma: 1,061
Researcher at the Center on Long-Term Risk. All opinions my own.
Posts
Resource guide: Unawareness, indeterminacy, and cluelessness · Anthony DiGiovanni · 7 Jul 2025 9:54 UTC · 20 points · 0 comments · 7 min read · LW link
Clarifying “wisdom”: Foundational topics for aligned AIs to prioritize before irreversible decisions · Anthony DiGiovanni · 20 Jun 2025 21:55 UTC · 37 points · 2 comments · 12 min read · LW link
4. Why existing approaches to cause prioritization are not robust to unawareness · Anthony DiGiovanni · 13 Jun 2025 8:55 UTC · 26 points · 0 comments · 17 min read · LW link
3. Why impartial altruists should suspend judgment under unawareness · Anthony DiGiovanni · 8 Jun 2025 15:06 UTC · 24 points · 0 comments · 16 min read · LW link
2. Why intuitive comparisons of large-scale impact are unjustified · Anthony DiGiovanni · 4 Jun 2025 20:30 UTC · 25 points · 0 comments · 16 min read · LW link
1. The challenge of unawareness for impartial altruist action guidance: Introduction · Anthony DiGiovanni · 2 Jun 2025 8:54 UTC · 47 points · 6 comments · 17 min read · LW link
Should you go with your best guess?: Against precise Bayesianism and related views · Anthony DiGiovanni · 27 Jan 2025 20:25 UTC · 65 points · 15 comments · 22 min read · LW link
Winning isn’t enough · JesseClifton and Anthony DiGiovanni · 5 Nov 2024 11:37 UTC · 44 points · 30 comments · 9 min read · LW link
[Question] What are your cruxes for imprecise probabilities / decision rules? · Anthony DiGiovanni · 31 Jul 2024 15:42 UTC · 36 points · 33 comments · 1 min read · LW link
Individually incentivized safe Pareto improvements in open-source bargaining · Nicolas Macé, Anthony DiGiovanni and JesseClifton · 17 Jul 2024 18:26 UTC · 41 points · 2 comments · 17 min read · LW link
In defense of anthropically updating EDT · Anthony DiGiovanni · 5 Mar 2024 6:21 UTC · 17 points · 17 comments · 13 min read · LW link
Making AIs less likely to be spiteful · Nicolas Macé, Anthony DiGiovanni and JesseClifton · 26 Sep 2023 14:12 UTC · 118 points · 7 comments · 10 min read · LW link
Responses to apparent rationalist confusions about game / decision theory · Anthony DiGiovanni · 30 Aug 2023 22:02 UTC · 142 points · 20 comments · 12 min read · LW link · 1 review
Anthony DiGiovanni’s Shortform · Anthony DiGiovanni · 11 Apr 2023 13:10 UTC · 3 points · 31 comments · 1 min read · LW link
When is intent alignment sufficient or necessary to reduce AGI conflict? · JesseClifton, Sammy Martin and Anthony DiGiovanni · 14 Sep 2022 19:39 UTC · 40 points · 0 comments · 9 min read · LW link
When would AGIs engage in conflict? · JesseClifton, Sammy Martin and Anthony DiGiovanni · 14 Sep 2022 19:38 UTC · 52 points · 5 comments · 13 min read · LW link
When does technical work to reduce AGI conflict make a difference?: Introduction · JesseClifton, Sammy Martin and Anthony DiGiovanni · 14 Sep 2022 19:38 UTC · 52 points · 3 comments · 6 min read · LW link