Josh Engels

Karma: 695

Research scientist on the DeepMind interp team. Thoughts are my own and do not reflect views of my employer.

Test your best methods on our hard CoT interp tasks

26 Mar 2026 19:24 UTC
42 points
1 comment · 19 min read · LW link

Training on Documents About Monitoring Leads To CoT Obfuscation

18 Mar 2026 20:37 UTC
63 points
5 comments · 16 min read · LW link

Thought Editing: Steering Models by Editing Their Chain of Thought

3 Feb 2026 9:51 UTC
20 points
0 comments · 5 min read · LW link

Brief Explorations in LLM Value Rankings

12 Jan 2026 18:16 UTC
37 points
1 comment · 11 min read · LW link

Steering RL Training: Benchmarking Interventions Against Reward Hacking

29 Dec 2025 21:55 UTC
58 points
10 comments · 19 min read · LW link

Can we interpret latent reasoning using current mechanistic interpretability tools?

22 Dec 2025 16:56 UTC
37 points
0 comments · 9 min read · LW link

Prompting Models to Obfuscate Their CoT

8 Dec 2025 21:00 UTC
16 points
4 comments · 7 min read · LW link

How Can Interpretability Researchers Help AGI Go Well?

1 Dec 2025 13:05 UTC
66 points
1 comment · 14 min read · LW link

A Pragmatic Vision for Interpretability

1 Dec 2025 13:05 UTC
131 points
39 comments · 27 min read · LW link

Current LLMs seem to rarely detect CoT tampering

19 Nov 2025 15:27 UTC
56 points
0 comments · 20 min read · LW link

Negative Results on Group SAEs

Josh Engels · 6 May 2025 21:49 UTC
76 points
3 comments · 8 min read · LW link

Interim Research Report: Mechanisms of Awareness

2 May 2025 20:29 UTC
43 points
6 comments · 8 min read · LW link

Scaling Laws for Scalable Oversight

30 Apr 2025 12:13 UTC
38 points
1 comment · 9 min read · LW link

Josh Engels’s Shortform

Josh Engels · 30 Apr 2025 10:58 UTC
4 points
5 comments · 1 min read · LW link