
Dan H

Karma: 2,848

newsletter.safe.ai

newsletter.mlsafety.org

Statement on AI Extinction—Signed by AGI Labs, Top Academics, and Many Other Notable Figures

Dan H · 30 May 2023 9:05 UTC
372 points
77 comments · 1 min read · LW link
(www.safe.ai)

$20 Million in NSF Grants for Safety Research

Dan H · 28 Feb 2023 4:44 UTC
165 points
12 comments · 1 min read · LW link

A Bird’s Eye View of the ML Field [Pragmatic AI Safety #2]

9 May 2022 17:18 UTC
163 points
6 comments · 35 min read · LW link

There are no coherence theorems

20 Feb 2023 21:25 UTC
121 points
114 comments · 19 min read · LW link

Introduction to Pragmatic AI Safety [Pragmatic AI Safety #1]

9 May 2022 17:06 UTC
80 points
3 comments · 6 min read · LW link

[$20K in Prizes] AI Safety Arguments Competition

26 Apr 2022 16:13 UTC
75 points
518 comments · 3 min read · LW link

Introducing the ML Safety Scholars Program

4 May 2022 16:01 UTC
74 points
3 comments · 3 min read · LW link

Announcing the Introduction to ML Safety course

6 Aug 2022 2:46 UTC
73 points
6 comments · 7 min read · LW link

NeurIPS ML Safety Workshop 2022

Dan H · 26 Jul 2022 15:28 UTC
72 points
2 comments · 1 min read · LW link
(neurips2022.mlsafety.org)

$20K In Bounties for AI Safety Public Materials

5 Aug 2022 2:52 UTC
71 points
9 comments · 6 min read · LW link

[MLSN #1]: ICLR Safety Paper Roundup

Dan H · 18 Oct 2021 15:19 UTC
59 points
1 comment · 2 min read · LW link

Open Problems in AI X-Risk [PAIS #5]

10 Jun 2022 2:08 UTC
59 points
6 comments · 36 min read · LW link

Complex Systems for AI Safety [Pragmatic AI Safety #3]

24 May 2022 0:00 UTC
57 points
2 comments · 21 min read · LW link

Environments for Measuring Deception, Resource Acquisition, and Ethical Violations

Dan H · 7 Apr 2023 18:40 UTC
51 points
2 comments · 2 min read · LW link
(arxiv.org)

Perform Tractable Research While Avoiding Capabilities Externalities [Pragmatic AI Safety #4]

30 May 2022 20:25 UTC
51 points
3 comments · 25 min read · LW link