Erik Jenner

Karma: 1,415

PhD student in AI safety at CHAI (UC Berkeley)

ARC paper: Formalizing the presumption of independence

Erik Jenner · 20 Nov 2022 1:22 UTC
97 points
2 comments · 2 min read · LW link
(arxiv.org)

Research agenda: Formalizing abstractions of computations

Erik Jenner · 2 Feb 2023 4:29 UTC
91 points
10 comments · 31 min read · LW link

Response to Katja Grace’s AI x-risk counterarguments

19 Oct 2022 1:17 UTC
76 points
18 comments · 15 min read · LW link

A comparison of causal scrubbing, causal abstractions, and related methods

8 Jun 2023 23:40 UTC
72 points
3 comments · 22 min read · LW link

Sydney can play chess and kind of keep track of the board state

Erik Jenner · 3 Mar 2023 9:39 UTC
62 points
19 comments · 6 min read · LW link

Good ontologies induce commutative diagrams

Erik Jenner · 9 Oct 2022 0:06 UTC
49 points
5 comments · 14 min read · LW link

How are you dealing with ontology identification?

Erik Jenner · 4 Oct 2022 23:28 UTC
34 points
10 comments · 3 min read · LW link

CHAI internship applications are open (due Nov 13)

Erik Jenner · 26 Oct 2023 0:53 UTC
34 points
0 comments · 3 min read · LW link

Breaking down the training/deployment dichotomy

Erik Jenner · 28 Aug 2022 21:45 UTC
30 points
3 comments · 3 min read · LW link

[Question] What is a decision theory as a mathematical object?

Erik Jenner · 25 May 2020 13:44 UTC
26 points
3 comments · 1 min read · LW link

Subsets and quotients in interpretability

Erik Jenner · 2 Dec 2022 23:13 UTC
26 points
1 comment · 7 min read · LW link

Reward model hacking as a challenge for reward learning

Erik Jenner · 12 Apr 2022 9:39 UTC
25 points
1 comment · 9 min read · LW link

The (not so) paradoxical asymmetry between position and momentum

Erik Jenner · 28 Mar 2021 13:31 UTC
21 points
10 comments · 4 min read · LW link

Disentangling inner alignment failures

Erik Jenner · 10 Oct 2022 18:50 UTC
20 points
5 comments · 4 min read · LW link

Abstractions as morphisms between (co)algebras

Erik Jenner · 14 Jan 2023 1:51 UTC
17 points
1 comment · 8 min read · LW link