
LawrenceC (Lawrence Chan)

Karma: 4,820

I do AI Alignment research. Currently independent, but previously at: METR, Redwood, UC Berkeley, Good Judgment Project.

I’m also a part-time fund manager for the LTFF.

Obligatory research billboard website: https://chanlawrence.me/

Mechanistic Interpretability Workshop Happening at ICML 2024!

3 May 2024 1:18 UTC
47 points
4 comments · 1 min read · LW link

Superposition is not “just” neuron polysemanticity

LawrenceC · 26 Apr 2024 23:22 UTC
52 points
4 comments · 13 min read · LW link

Anthropic release Claude 3, claims >GPT-4 Performance

LawrenceC · 4 Mar 2024 18:23 UTC
114 points
40 comments · 2 min read · LW link
(www.anthropic.com)

Sam Altman fired from OpenAI

LawrenceC · 17 Nov 2023 20:42 UTC
192 points
75 comments · 1 min read · LW link
(openai.com)

Open Phil releases RFPs on LLM Benchmarks and Forecasting

LawrenceC · 11 Nov 2023 3:01 UTC
53 points
0 comments · 2 min read · LW link
(www.openphilanthropy.org)

What I would do if I wasn’t at ARC Evals

LawrenceC · 5 Sep 2023 19:19 UTC
212 points
8 comments · 13 min read · LW link

Long-Term Future Fund Ask Us Anything (September 2023)

31 Aug 2023 0:28 UTC
33 points
6 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Meta announces Llama 2; “open sources” it for commercial use

LawrenceC · 18 Jul 2023 19:28 UTC
46 points
12 comments · 1 min read · LW link
(about.fb.com)

Should we publish mechanistic interpretability research?

21 Apr 2023 16:19 UTC
105 points
40 comments · 13 min read · LW link

[Appendix] Natural Abstractions: Key Claims, Theorems, and Critiques

16 Mar 2023 16:38 UTC
46 points
0 comments · 13 min read · LW link

Natural Abstractions: Key Claims, Theorems, and Critiques

16 Mar 2023 16:37 UTC
206 points
20 comments · 45 min read · LW link

Sam Altman: “Planning for AGI and beyond”

LawrenceC · 24 Feb 2023 20:28 UTC
104 points
54 comments · 6 min read · LW link
(openai.com)

Meta “open sources” LMs competitive with Chinchilla, PaLM, and code-davinci-002 (Paper)

LawrenceC · 24 Feb 2023 19:57 UTC
38 points
19 comments · 1 min read · LW link
(research.facebook.com)

Behavioral and mechanistic definitions (often confuse AI alignment discussions)

LawrenceC · 20 Feb 2023 21:33 UTC
33 points
5 comments · 6 min read · LW link

Paper: The Capacity for Moral Self-Correction in Large Language Models (Anthropic)

LawrenceC · 16 Feb 2023 19:47 UTC
65 points
9 comments · 1 min read · LW link
(arxiv.org)

GPT-175bee

8 Feb 2023 18:58 UTC
119 points
13 comments · 1 min read · LW link

OpenAI/Microsoft announce “next generation language model” integrated into Bing/Edge

LawrenceC · 7 Feb 2023 20:38 UTC
79 points
4 comments · 1 min read · LW link
(blogs.microsoft.com)

Evaluations (of new AI Safety researchers) can be noisy

LawrenceC · 5 Feb 2023 4:15 UTC
130 points
10 comments · 16 min read · LW link

The Alignment Problem from a Deep Learning Perspective (major rewrite)

10 Jan 2023 16:06 UTC
83 points
8 comments · 39 min read · LW link
(arxiv.org)

Paper: Superposition, Memorization, and Double Descent (Anthropic)

LawrenceC · 5 Jan 2023 17:54 UTC
53 points
11 comments · 1 min read · LW link
(transformer-circuits.pub)