
Joe Rogero

Karma: 472

We won’t get docile, brilliant AIs before we solve alignment

Joe Rogero · 10 Oct 2025 4:11 UTC
7 points
3 comments · 3 min read · LW link

Labs lack the tools to course-correct

Joe Rogero · 10 Oct 2025 4:10 UTC
4 points
0 comments · 3 min read · LW link

Alignment progress doesn’t compensate for higher capabilities

Joe Rogero · 9 Oct 2025 16:06 UTC
2 points
0 comments · 6 min read · LW link

Intent alignment seems incoherent

Joe Rogero · 7 Oct 2025 23:01 UTC
22 points
2 comments · 6 min read · LW link

We won’t get AIs smart enough to solve alignment but too dumb to rebel

Joe Rogero · 6 Oct 2025 21:49 UTC
28 points
16 comments · 5 min read · LW link

LLMs are badly misaligned

Joe Rogero · 5 Oct 2025 14:00 UTC
26 points
25 comments · 3 min read · LW link

Goodness is harder to achieve than competence

Joe Rogero · 3 Oct 2025 21:32 UTC
22 points
0 comments · 3 min read · LW link

Good is a smaller target than smart

Joe Rogero · 3 Oct 2025 21:04 UTC
21 points
0 comments · 2 min read · LW link

So You Want to Work at a Frontier AI Lab

Joe Rogero · 11 Jun 2025 23:11 UTC
39 points
14 comments · 7 min read · LW link
(intelligence.org)

Existing Safety Frameworks Imply Unreasonable Confidence

10 Apr 2025 16:31 UTC
46 points
3 comments · 15 min read · LW link
(intelligence.org)

[Question] How much do frontier LLMs code and browse while in training?

Joe Rogero · 10 Mar 2025 19:34 UTC
7 points
0 comments · 1 min read · LW link

What We Can Do to Prevent Extinction by AI

Joe Rogero · 24 Feb 2025 17:15 UTC
12 points
0 comments · 11 min read · LW link

Cost, Not Sacrifice

Joe Rogero · 20 Nov 2024 21:32 UTC
75 points
13 comments · 4 min read · LW link
(subatomicarticles.com)

Flipping Out: The Cosmic Coinflip Thought Experiment Is Bad Philosophy

Joe Rogero · 12 Nov 2024 23:55 UTC
34 points
17 comments · 4 min read · LW link

Registrations Open for 2024 NYC Secular Solstice & Megameetup

12 Nov 2024 17:50 UTC
13 points
0 comments · 1 min read · LW link

2024 NYC Secular Solstice & Megameetup

12 Nov 2024 17:46 UTC
18 points
0 comments · 1 min read · LW link

Mentorship in AGI Safety: Applications for mentorship are open!

28 Jun 2024 14:49 UTC
5 points
0 comments · 1 min read · LW link

Situational Awareness Summarized—Part 2

Joe Rogero · 7 Jun 2024 17:20 UTC
12 points
2 comments · 4 min read · LW link

Situational Awareness Summarized—Part 1

Joe Rogero · 6 Jun 2024 18:59 UTC
21 points
0 comments · 5 min read · LW link

Mentorship in AGI Safety (MAGIS) call for mentors

23 May 2024 18:28 UTC
32 points
3 comments · 2 min read · LW link