EJT

Karma: 571

I’m a Postdoctoral Research Fellow at Oxford University’s Global Priorities Institute.

Previously, I was a Philosophy Fellow at the Center for AI Safety.

So far, my work has mostly been about the moral importance of future generations. Going forward, it will mostly be about AI.

You can email me at elliott.thornley@philosophy.ox.ac.uk.

The Shutdown Problem: Incomplete Preferences as a Solution

EJT · 23 Feb 2024 16:01 UTC
50 points
21 comments · 41 min read · LW link

The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists

EJT · 23 Oct 2023 21:00 UTC
71 points
22 comments · 1 min read · LW link
(philpapers.org)

The price is right

EJT · 16 Oct 2023 16:34 UTC
39 points
3 comments · 1 min read · LW link
(openairopensea.substack.com)

[Question] What are some examples of AIs instantiating the ‘nearest unblocked strategy problem’?

EJT · 4 Oct 2023 11:05 UTC
6 points
4 comments · 1 min read · LW link

EJT’s Shortform

EJT · 26 Sep 2023 15:19 UTC
4 points
16 comments · 1 min read · LW link

There are no coherence theorems

20 Feb 2023 21:25 UTC
121 points
114 comments · 19 min read · LW link