Anthony DiGiovanni

Karma: 1,061

Researcher at the Center on Long-Term Risk. All opinions my own.

Resource guide: Unawareness, indeterminacy, and cluelessness

Anthony DiGiovanni7 Jul 2025 9:54 UTC
20 points
0 comments · 7 min read · LW link

Clarifying “wisdom”: Foundational topics for aligned AIs to prioritize before irreversible decisions

Anthony DiGiovanni20 Jun 2025 21:55 UTC
37 points
2 comments · 12 min read · LW link

4. Why existing approaches to cause prioritization are not robust to unawareness

Anthony DiGiovanni13 Jun 2025 8:55 UTC
26 points
0 comments · 17 min read · LW link

3. Why impartial altruists should suspend judgment under unawareness

Anthony DiGiovanni8 Jun 2025 15:06 UTC
24 points
0 comments · 16 min read · LW link

2. Why intuitive comparisons of large-scale impact are unjustified

Anthony DiGiovanni4 Jun 2025 20:30 UTC
25 points
0 comments · 16 min read · LW link

1. The challenge of unawareness for impartial altruist action guidance: Introduction

Anthony DiGiovanni2 Jun 2025 8:54 UTC
47 points
6 comments · 17 min read · LW link

Should you go with your best guess?: Against precise Bayesianism and related views

Anthony DiGiovanni27 Jan 2025 20:25 UTC
65 points
15 comments · 22 min read · LW link

Winning isn’t enough

5 Nov 2024 11:37 UTC
44 points
30 comments · 9 min read · LW link

[Question] What are your cruxes for imprecise probabilities / decision rules?

Anthony DiGiovanni31 Jul 2024 15:42 UTC
36 points
33 comments · 1 min read · LW link

Individually incentivized safe Pareto improvements in open-source bargaining

17 Jul 2024 18:26 UTC
41 points
2 comments · 17 min read · LW link

In defense of anthropically updating EDT

Anthony DiGiovanni5 Mar 2024 6:21 UTC
17 points
17 comments · 13 min read · LW link

Making AIs less likely to be spiteful

26 Sep 2023 14:12 UTC
118 points
7 comments · 10 min read · LW link

Responses to apparent rationalist confusions about game / decision theory

Anthony DiGiovanni30 Aug 2023 22:02 UTC
142 points
20 comments · 12 min read · LW link · 1 review

Anthony DiGiovanni’s Shortform

Anthony DiGiovanni11 Apr 2023 13:10 UTC
3 points
31 comments · 1 min read · LW link

When is intent alignment sufficient or necessary to reduce AGI conflict?

14 Sep 2022 19:39 UTC
40 points
0 comments · 9 min read · LW link

When would AGIs engage in conflict?

14 Sep 2022 19:38 UTC
52 points
5 comments · 13 min read · LW link

When does technical work to reduce AGI conflict make a difference?: Introduction

14 Sep 2022 19:38 UTC
52 points
3 comments · 6 min read · LW link