Max_He-Ho
Karma: 28
Doing a PhD in Philosophy of AI; working on conceptual AI safety.
Against racing to AGI: Cooperation, deterrence, and catastrophic risks
Max_He-Ho · Jul 29, 2025, 10:23 PM · 4 points · 0 comments · 1 min read · LW link (philpapers.org)
Misalignment or misuse? The AGI alignment tradeoff
Max_He-Ho · Jun 20, 2025, 10:43 AM · 3 points · 0 comments · 1 min read · LW link (forum.effectivealtruism.org)
EA ErFiN Project work
Max_He-Ho · Mar 17, 2024, 8:42 PM · 2 points · 0 comments · 1 min read · LW link
EA ErFiN Project work
Max_He-Ho · Mar 17, 2024, 8:37 PM · 2 points · 0 comments · 1 min read · LW link
Unpredictability and the Increasing Difficulty of AI Alignment for Increasingly Intelligent AI
Max_He-Ho · May 31, 2023, 10:25 PM · 5 points · 2 comments · 20 min read · LW link
Pessimism about AI Safety
Max_He-Ho and Peter Kuhn · Apr 2, 2023, 7:43 AM · 4 points · 1 comment · 25 min read · LW link