
Richard Juggins

Karma: 100

I am building a research agenda tackling catastrophic risks from AI, which I am documenting on my Substack, Working Through AI. Where it feels suited, I'm crossposting my work to LessWrong.

All technical alignment plans are steps in the dark

Richard Juggins · 12 Mar 2026 22:22 UTC
13 points · 5 comments · 8 min read · LW link
(www.workingthroughai.com)

What Success Might Look Like

Richard Juggins · 17 Oct 2025 14:17 UTC
23 points · 6 comments · 15 min read · LW link

The Iceberg Theory of Meaning

Richard Juggins · 26 Jun 2025 12:13 UTC
10 points · 9 comments · 5 min read · LW link

How to specify an alignment target

Richard Juggins · 1 May 2025 21:11 UTC
14 points · 2 comments · 12 min read · LW link