Max_He-Ho
Karma: 28
Doing a PhD in Philosophy of AI. Working on conceptual AI Safety things.
Against racing to AGI: Cooperation, deterrence, and catastrophic risks
Max_He-Ho · 29 Jul 2025 22:23 UTC · 4 points · 0 comments · 1 min read · LW link (philpapers.org)

Misalignment or misuse? The AGI alignment tradeoff
Max_He-Ho · 20 Jun 2025 10:43 UTC · 3 points · 0 comments · 1 min read · LW link (forum.effectivealtruism.org)

EA ErFiN Project work
Max_He-Ho · 17 Mar 2024 20:42 UTC · 2 points · 0 comments · 1 min read · LW link

EA ErFiN Project work
Max_He-Ho · 17 Mar 2024 20:37 UTC · 2 points · 0 comments · 1 min read · LW link

Unpredictability and the Increasing Difficulty of AI Alignment for Increasingly Intelligent AI
Max_He-Ho · 31 May 2023 22:25 UTC · 5 points · 2 comments · 20 min read · LW link

Pessimism about AI Safety
Max_He-Ho and Peter Kuhn · 2 Apr 2023 7:43 UTC · 4 points · 1 comment · 25 min read · LW link