Josh Engels

Karma: 542

Brief Explorations in LLM Value Rankings

12 Jan 2026 18:16 UTC
36 points
1 comment · 11 min read · LW link

Steering RL Training: Benchmarking Interventions Against Reward Hacking

29 Dec 2025 21:55 UTC
47 points
10 comments · 19 min read · LW link

Can we interpret latent reasoning using current mechanistic interpretability tools?

22 Dec 2025 16:56 UTC
33 points
0 comments · 9 min read · LW link

Prompting Models to Obfuscate Their CoT

8 Dec 2025 21:00 UTC
15 points
4 comments · 7 min read · LW link

How Can Interpretability Researchers Help AGI Go Well?

1 Dec 2025 13:05 UTC
66 points
1 comment · 14 min read · LW link

A Pragmatic Vision for Interpretability

1 Dec 2025 13:05 UTC
131 points
39 comments · 27 min read · LW link

Current LLMs seem to rarely detect CoT tampering

19 Nov 2025 15:27 UTC
53 points
0 comments · 20 min read · LW link

Negative Results on Group SAEs

Josh Engels · 6 May 2025 21:49 UTC
76 points
3 comments · 8 min read · LW link

Interim Research Report: Mechanisms of Awareness

2 May 2025 20:29 UTC
43 points
6 comments · 8 min read · LW link

Scaling Laws for Scalable Oversight

30 Apr 2025 12:13 UTC
37 points
1 comment · 9 min read · LW link

Josh Engels’s Shortform

Josh Engels · 30 Apr 2025 10:58 UTC
4 points
4 comments · 1 min read · LW link

Takeaways From Our Recent Work on SAE Probing

3 Mar 2025 19:50 UTC
30 points
4 comments · 5 min read · LW link