Alex Flint

Karma: 3,804

Independent AI alignment researcher

The ground of optimization

Alex Flint · 20 Jun 2020 0:38 UTC
245 points
80 comments · 27 min read · LW link · 1 review

Logical induction for software engineers

Alex Flint · 3 Dec 2022 19:55 UTC
160 points
8 comments · 27 min read · LW link · 1 review

Our take on CHAI’s research agenda in under 1500 words

Alex Flint · 17 Jun 2020 12:24 UTC
112 points
18 comments · 5 min read · LW link

Agency in Conway’s Game of Life

Alex Flint · 13 May 2021 1:07 UTC
110 points
93 comments · 9 min read · LW link · 2 reviews

Search versus design

Alex Flint · 16 Aug 2020 16:53 UTC
100 points
40 comments · 36 min read · LW link · 1 review

Alignment versus AI Alignment

Alex Flint · 4 Feb 2022 22:59 UTC
87 points
15 comments · 22 min read · LW link

Reflections on Larks’ 2020 AI alignment literature review

Alex Flint · 1 Jan 2021 22:53 UTC
79 points
7 comments · 6 min read · LW link

Reply to Paul Christiano on Inaccessible Information

Alex Flint · 5 Jun 2020 9:10 UTC
77 points
15 comments · 6 min read · LW link

Implications of automated ontology identification

18 Feb 2022 3:30 UTC
69 points
27 comments · 23 min read · LW link

Parsing Chris Mingard on Neural Networks

Alex Flint · 6 May 2021 22:16 UTC
68 points
26 comments · 6 min read · LW link

My take on Michael Littman on “The HCI of HAI”

Alex Flint · 2 Apr 2021 19:51 UTC
59 points
4 comments · 7 min read · LW link

AI Risk for Epistemic Minimalists

Alex Flint · 22 Aug 2021 15:39 UTC
58 points
12 comments · 13 min read · LW link · 1 review

Three enigmas at the heart of our reasoning

Alex Flint · 21 Sep 2021 16:52 UTC
56 points
66 comments · 9 min read · LW link · 1 review

Concerning not getting lost

Alex Flint · 14 May 2021 19:38 UTC
50 points
9 comments · 4 min read · LW link

Understanding the Lottery Ticket Hypothesis

Alex Flint · 14 May 2021 0:25 UTC
50 points
9 comments · 8 min read · LW link

Life and expanding steerable consequences

Alex Flint · 7 May 2021 18:33 UTC
46 points
3 comments · 4 min read · LW link

Where are intentions to be found?

Alex Flint · 21 Apr 2021 0:51 UTC
44 points
12 comments · 9 min read · LW link

Probability theory and logical induction as lenses

Alex Flint · 23 Apr 2021 2:41 UTC
43 points
7 comments · 6 min read · LW link

The Blackwell order as a formalization of knowledge

Alex Flint · 10 Sep 2021 2:51 UTC
41 points
10 comments · 11 min read · LW link