Joe Rogero
Karma: 552
We do not live by course alone
Joe Rogero · 11 Mar 2026 21:12 UTC · 33 points · 1 comment · 2 min read · LW link
Two ways non-U.S. folks can contribute to AI going well
Joe Rogero · 7 Jan 2026 19:37 UTC · 21 points · 1 comment · 2 min read · LW link (subatomicarticles.com)
We won’t get docile, brilliant AIs before we solve alignment
Joe Rogero · 10 Oct 2025 4:11 UTC · 7 points · 3 comments · 3 min read · LW link
Labs lack the tools to course-correct
Joe Rogero · 10 Oct 2025 4:10 UTC · 4 points · 0 comments · 3 min read · LW link
Alignment progress doesn’t compensate for higher capabilities
Joe Rogero · 9 Oct 2025 16:06 UTC · 4 points · 0 comments · 6 min read · LW link
Intent alignment seems incoherent
Joe Rogero · 7 Oct 2025 23:01 UTC · 24 points · 2 comments · 6 min read · LW link
We won’t get AIs smart enough to solve alignment but too dumb to rebel
Joe Rogero · 6 Oct 2025 21:49 UTC · 28 points · 16 comments · 5 min read · LW link
LLMs are badly misaligned
Joe Rogero · 5 Oct 2025 14:00 UTC · 27 points · 25 comments · 3 min read · LW link
Goodness is harder to achieve than competence
Joe Rogero · 3 Oct 2025 21:32 UTC · 22 points · 0 comments · 3 min read · LW link
Good is a smaller target than smart
Joe Rogero · 3 Oct 2025 21:04 UTC · 21 points · 0 comments · 2 min read · LW link
So You Want to Work at a Frontier AI Lab
Joe Rogero · 11 Jun 2025 23:11 UTC · 54 points · 14 comments · 7 min read · LW link (intelligence.org)
Existing Safety Frameworks Imply Unreasonable Confidence
Joe Rogero, yams and Joe Collman · 10 Apr 2025 16:31 UTC · 46 points · 3 comments · 15 min read · LW link (intelligence.org)
[Question] How much do frontier LLMs code and browse while in training?
Joe Rogero · 10 Mar 2025 19:34 UTC · 7 points · 0 comments · 1 min read · LW link
What We Can Do to Prevent Extinction by AI
Joe Rogero · 24 Feb 2025 17:15 UTC · 13 points · 0 comments · 11 min read · LW link
Cost, Not Sacrifice
Joe Rogero · 20 Nov 2024 21:32 UTC · 77 points · 13 comments · 4 min read · LW link (subatomicarticles.com)
Flipping Out: The Cosmic Coinflip Thought Experiment Is Bad Philosophy
Joe Rogero · 12 Nov 2024 23:55 UTC · 34 points · 17 comments · 4 min read · LW link
Registrations Open for 2024 NYC Secular Solstice & Megameetup
Joe Rogero and Screwtape · 12 Nov 2024 17:50 UTC · 13 points · 0 comments · 1 min read · LW link
2024 NYC Secular Solstice & Megameetup
Joe Rogero and Screwtape · 12 Nov 2024 17:46 UTC · 18 points · 0 comments · 1 min read · LW link
Mentorship in AGI Safety: Applications for mentorship are open!
Valentin2026 and Joe Rogero · 28 Jun 2024 14:49 UTC · 5 points · 0 comments · 1 min read · LW link
Situational Awareness Summarized—Part 2
Joe Rogero · 7 Jun 2024 17:20 UTC · 12 points · 2 comments · 4 min read · LW link