
alexflint

Karma: 2,238

Independent AI safety researcher

Knowledge is not just precipitation of action

alexflint · 18 Jun 2021 23:26 UTC
15 points
1 comment · 7 min read · LW link

Knowledge is not just digital abstraction layers

alexflint · 15 Jun 2021 3:49 UTC
16 points
4 comments · 5 min read · LW link

Knowledge is not just mutual information

alexflint · 10 Jun 2021 1:01 UTC
16 points
2 comments · 4 min read · LW link

Knowledge is not just map/territory resemblance

alexflint · 25 May 2021 17:58 UTC
28 points
4 comments · 3 min read · LW link

Problems facing a correspondence theory of knowledge

alexflint · 24 May 2021 16:02 UTC
25 points
12 comments · 6 min read · LW link

Concerning not getting lost

alexflint · 14 May 2021 19:38 UTC
49 points
9 comments · 4 min read · LW link

Understanding the Lottery Ticket Hypothesis

alexflint · 14 May 2021 0:25 UTC
47 points
9 comments · 8 min read · LW link

Agency in Conway’s Game of Life

alexflint · 13 May 2021 1:07 UTC
61 points
66 comments · 9 min read · LW link

Life and expanding steerable consequences

alexflint · 7 May 2021 18:33 UTC
46 points
2 comments · 4 min read · LW link

Parsing Chris Mingard on Neural Networks

alexflint · 6 May 2021 22:16 UTC
62 points
26 comments · 6 min read · LW link

Parsing Abram on Gradations of Inner Alignment Obstacles

alexflint · 4 May 2021 17:44 UTC
19 points
4 comments · 6 min read · LW link

Follow-up to Julia Wise on “Don’t Shoot The Dog”

alexflint · 1 May 2021 19:07 UTC
18 points
4 comments · 8 min read · LW link

Pitfalls of the agent model

alexflint · 27 Apr 2021 22:19 UTC
17 points
4 comments · 20 min read · LW link

Beware over-use of the agent model

alexflint · 25 Apr 2021 22:19 UTC
28 points
9 comments · 5 min read · LW link

Probability theory and logical induction as lenses

alexflint · 23 Apr 2021 2:41 UTC
37 points
7 comments · 6 min read · LW link

Where are intentions to be found?

alexflint · 21 Apr 2021 0:51 UTC
44 points
12 comments · 9 min read · LW link

My take on Michael Littman on “The HCI of HAI”

alexflint · 2 Apr 2021 19:51 UTC
57 points
4 comments · 7 min read · LW link

Thoughts on Iason Gabriel’s Artificial Intelligence, Values, and Alignment

alexflint · 14 Jan 2021 12:58 UTC
35 points
14 comments · 4 min read · LW link

Reflections on Larks’ 2020 AI alignment literature review

alexflint · 1 Jan 2021 22:53 UTC
77 points
8 comments · 6 min read · LW link

Search versus design

alexflint · 16 Aug 2020 16:53 UTC
83 points
39 comments · 36 min read · LW link