
mattmacdermott

Karma: 647

Bengio's Alignment Proposal: "Towards a Cautious Scientist AI with Convergent Safety Bounds"

mattmacdermott · 29 Feb 2024 13:59 UTC
75 points
19 comments · 14 min read · LW link
(yoshuabengio.org)

mattmacdermott's Shortform

mattmacdermott · 3 Jan 2024 9:08 UTC
4 points
13 comments · 1 min read · LW link

What's next for the field of Agent Foundations?

30 Nov 2023 17:55 UTC
59 points
21 comments · 10 min read · LW link

Optimisation Measures: Desiderata, Impossibility, Proposals

7 Aug 2023 15:52 UTC
35 points
9 comments · 1 min read · LW link

Reward Hacking from a Causal Perspective

21 Jul 2023 18:27 UTC
29 points
5 comments · 7 min read · LW link

Incentives from a causal perspective

10 Jul 2023 17:16 UTC
27 points
0 comments · 6 min read · LW link

Agency from a causal perspective

30 Jun 2023 17:37 UTC
38 points
5 comments · 6 min read · LW link

Introduction to Towards Causal Foundations of Safe AGI

12 Jun 2023 17:55 UTC
67 points
6 comments · 4 min read · LW link

Some Summaries of Agent Foundations Work

mattmacdermott · 15 May 2023 16:09 UTC
56 points
1 comment · 13 min read · LW link

Towards Measures of Optimisation

12 May 2023 15:29 UTC
53 points
37 comments · 4 min read · LW link

Normative vs Descriptive Models of Agency

mattmacdermott · 2 Feb 2023 20:28 UTC
26 points
5 comments · 4 min read · LW link