Richard Juggins

Karma: 57

I am building a research agenda tackling catastrophic risks from AI, which I am documenting on my Substack, Working Through AI. Where it feels suited, I crosspost my work to LessWrong.

The Iceberg Theory of Meaning

Richard Juggins · 26 Jun 2025 12:13 UTC
10 points
9 comments · 5 min read · LW link

How to specify an alignment target

Richard Juggins · 1 May 2025 21:11 UTC
14 points
2 comments · 12 min read · LW link

Making alignment a law of the universe

Richard Juggins · 25 Feb 2025 10:44 UTC
0 points
3 comments · 15 min read · LW link

Brighton Rationalish—Jun ’23 Meetup

Richard Juggins · 13 Jun 2023 17:36 UTC
1 point
0 comments · 1 min read · LW link