
Steven Byrnes

Karma: 22,276

I’m an AGI safety / AI alignment researcher in Boston with a particular focus on brain algorithms. Research Fellow at Astera. See https://sjbyrnes.com/agi.html for a summary of my research and sorted list of writing. Physicist by training. Email: steven.byrnes@gmail.com. Leave me anonymous feedback here. I’m also at: RSS feed, X/Twitter, Bluesky, Substack, LinkedIn, and more at my website.

Thoughts on “AI is easy to control” by Pope & Belrose

Steven Byrnes · Dec 1, 2023, 5:30 PM
197 points
63 comments · 14 min read · LW link · 1 review

I’m confused about innate smell neuroanatomy

Steven Byrnes · Nov 28, 2023, 8:49 PM
40 points
2 comments · 9 min read · LW link

8 examples informing my pessimism on uploading without reverse engineering

Steven Byrnes · Nov 3, 2023, 8:03 PM
118 points
12 comments · 12 min read · LW link

Late-talking kid part 3: gestalt language learning

Steven Byrnes · Oct 17, 2023, 2:00 AM
33 points
5 comments · 3 min read · LW link

“X distracts from Y” as a thinly-disguised fight over group status / politics

Steven Byrnes · Sep 25, 2023, 3:18 PM
112 points
14 comments · 8 min read · LW link

A Theory of Laughter—Follow-Up

Steven Byrnes · Sep 14, 2023, 3:35 PM
37 points
3 comments · 8 min read · LW link

A Theory of Laughter

Steven Byrnes · Aug 23, 2023, 3:05 PM
102 points
14 comments · 28 min read · LW link

Model of psychosis, take 2

Steven Byrnes · Aug 17, 2023, 7:11 PM
34 points
13 comments · 4 min read · LW link

My checklist for publishing a blog post

Steven Byrnes · Aug 15, 2023, 3:04 PM
87 points
6 comments · 3 min read · LW link

Lisa Feldman Barrett versus Paul Ekman on facial expressions & basic emotions

Steven Byrnes · Jul 19, 2023, 2:26 PM
31 points
15 comments · 15 min read · LW link

Thoughts on “Process-Based Supervision”

Steven Byrnes · Jul 17, 2023, 2:08 PM
74 points
4 comments · 23 min read · LW link

Munk AI debate: confusions and possible cruxes

Steven Byrnes · Jun 27, 2023, 2:18 PM
244 points
21 comments · 8 min read · LW link

My side of an argument with Jacob Cannell about chip interconnect losses

Steven Byrnes · Jun 21, 2023, 1:33 PM
144 points
11 comments · 11 min read · LW link

LeCun’s “A Path Towards Autonomous Machine Intelligence” has an unsolved technical alignment problem

Steven Byrnes · May 8, 2023, 7:35 PM
140 points
37 comments · 15 min read · LW link

Connectomics seems great from an AI x-risk perspective

Steven Byrnes · Apr 30, 2023, 2:38 PM
101 points
7 comments · 10 min read · LW link · 1 review

AI doom from an LLM-plateau-ist perspective

Steven Byrnes · Apr 27, 2023, 1:58 PM
161 points
24 comments · 6 min read · LW link

Is “FOXP2 speech & language disorder” really “FOXP2 forebrain fine-motor crappiness”?

Steven Byrnes · Mar 23, 2023, 4:09 PM
22 points
8 comments · 6 min read · LW link

EAI Alignment Speaker Series #1: Challenges for Safe & Beneficial Brain-Like Artificial General Intelligence with Steve Byrnes

Mar 23, 2023, 2:32 PM
28 points
0 comments · 27 min read · LW link
(youtu.be)

Plan for mediocre alignment of brain-like [model-based RL] AGI

Steven Byrnes · Mar 13, 2023, 2:11 PM
68 points
25 comments · 12 min read · LW link

Why I’m not into the Free Energy Principle

Steven Byrnes · Mar 2, 2023, 7:27 PM
150 points
50 comments · 9 min read · LW link · 1 review