
Alfie Lamerton

Karma: 94

Let your ideas die so you don’t have to. Navigating parochialism. Fond of fallibilism.

I’m currently working as founder and research lead at Formation Research on technical interventions for lock-in risks.

As a researcher, I am interested in AI-enabled totalitarianism, authoritarianism, coups, and power concentration, and in governance-informed solutions to these problems that leverage technical methods of verification and cooperation.

My website is here.

Submit anonymous feedback here.

Narrow Secret Loyalty Dodges Black-Box Audits

22 Apr 2026 9:41 UTC
48 points
1 comment · 13 min read · LW link

Digital Error Correction and Lock-In

Alfie Lamerton · 8 Apr 2025 15:46 UTC
1 point
0 comments · 5 min read · LW link
(alfielamerton.substack.com)

Organisation-Level Lock-In Risk Interventions

Alfie Lamerton · 1 Apr 2025 12:42 UTC
5 points
0 comments · 8 min read · LW link

Recommender Alignment for Lock-In Risk

Alfie Lamerton · 24 Mar 2025 12:56 UTC
8 points
0 comments · 7 min read · LW link

How a Benchmark for Lock-In Might Look

Alfie Lamerton · 13 Mar 2025 12:08 UTC
4 points
0 comments · 1 min read · LW link
(huggingface.co)

Lock-In Threat Models

Alfie Lamerton · 10 Mar 2025 10:22 UTC
5 points
0 comments · 8 min read · LW link

What is Lock-In?

Alfie Lamerton · 6 Mar 2025 11:09 UTC
5 points
1 comment · 9 min read · LW link

Formation Research: Organisation Overview

Alfie Lamerton · 4 Mar 2025 15:03 UTC
6 points
0 comments · 11 min read · LW link

In-Context Learning: An Alignment Survey

Alfie Lamerton · 30 Sep 2024 18:44 UTC
8 points
0 comments · 20 min read · LW link
(docs.google.com)

A Review of In-Context Learning Hypotheses for Automated AI Alignment Research

Alfie Lamerton · 18 Apr 2024 18:29 UTC
25 points
4 comments · 16 min read · LW link