Jim Buhler

Karma: 68

My main focuses at the moment:
▪ S-risk macrostrategy (e.g., evaluating how promising it is to slow down AI development and/or reduce malevolent influence over AI).
▪ Improving the exchange of knowledge in the s-risk community, and other s-risk field-building projects.

Previously, I worked at organizations such as EA Cambridge and EA France (as community director), the Existential Risk Alliance (as a research fellow), and the Center on Long-Term Risk (as an events and community associate).

I’ve conducted research on various longtermist topics (some of it posted on the EA Forum and here) and recently finished a Master’s in moral philosophy.

You can give me anonymous feedback here. :)


The (short) case for predicting what Aliens value

Jim Buhler · 20 Jul 2023 15:25 UTC
11 points
5 comments · 3 min read · LW link

[Question] Is the fact that we don’t observe any obvious glitch evidence that we’re not in a simulation?

Jim Buhler · 26 Apr 2023 14:57 UTC
8 points
16 comments · 1 min read · LW link

Conditions for Superrationality-motivated Cooperation in a one-shot Prisoner’s Dilemma

Jim Buhler · 19 Dec 2022 15:00 UTC
24 points
4 comments · 5 min read · LW link