
Jim Buhler

Karma: 81

My main focuses at the moment:
▪ S-risk macrostrategy (e.g., what AI safety proposals decrease rather than increase s-risks?)
▪ How to improve the exchange of knowledge in the s-risk community, and other s-risk field-building projects.

Previously, I worked at organizations such as EA Cambridge and EA France (community director), the Existential Risk Alliance (research fellow), and the Center on Long-Term Risk (events and community associate).

I’ve conducted research on various longtermist topics (some of it posted on the EA Forum and here) and recently finished a Master’s in moral philosophy.

You can give me anonymous feedback here. :)


[Question] Would a scope-insensitive AGI be less likely to incapacitate humanity?

Jim Buhler · 21 Jul 2024 14:15 UTC
2 points
3 comments · 1 min read · LW link

[Question] How bad would AI progress need to be for us to think general technological progress is also bad?

Jim Buhler · 9 Jul 2024 10:43 UTC
9 points
5 comments · 1 min read · LW link

The (short) case for predicting what Aliens value

Jim Buhler · 20 Jul 2023 15:25 UTC
14 points
5 comments · 3 min read · LW link

[Question] Is the fact that we don’t observe any obvious glitch evidence that we’re not in a simulation?

Jim Buhler · 26 Apr 2023 14:57 UTC
8 points
16 comments · 1 min read · LW link

Conditions for Superrationality-motivated Cooperation in a one-shot Prisoner’s Dilemma

Jim Buhler · 19 Dec 2022 15:00 UTC
24 points
4 comments · 5 min read · LW link