Jim Buhler
Karma: 97
www.jimbuhler.site
Why I think ECL shouldn’t make you update your cause prio
Jim Buhler · 6 Oct 2025 13:01 UTC · 1 point · 0 comments · 11 min read · LW link
Strategic Moderation Goals (a Plan B to AI alignment)
Jim Buhler · 8 Aug 2025 8:08 UTC · 2 points · 0 comments · 3 min read · LW link
The Clueless Sniper and the Principle of Indifference
Jim Buhler · 27 Jan 2025 11:52 UTC · 11 points · 26 comments · 2 min read · LW link
[Question] Would a scope-insensitive AGI be less likely to incapacitate humanity?
Jim Buhler · 21 Jul 2024 14:15 UTC · 2 points · 3 comments · 1 min read · LW link
[Question] How bad would AI progress need to be for us to think general technological progress is also bad?
Jim Buhler · 9 Jul 2024 10:43 UTC · 9 points · 5 comments · 1 min read · LW link
The (short) case for predicting what Aliens value
Jim Buhler · 20 Jul 2023 15:25 UTC · 14 points · 5 comments · 3 min read · LW link
[Question] Is the fact that we don’t observe any obvious glitch evidence that we’re not in a simulation?
Jim Buhler · 26 Apr 2023 14:57 UTC · 8 points · 16 comments · 1 min read · LW link
Conditions for Superrationality-motivated Cooperation in a one-shot Prisoner’s Dilemma
Jim Buhler · 19 Dec 2022 15:00 UTC · 24 points · 4 comments · 5 min read · LW link