Thanks for writing this, Buck. I’m not going to try to reply to your whole post, because I think some of it is stuff I should chew on for longer and see whether I agree with it. But going through some of your points:
I definitely apologize for making it sound like my criticism of (the relevant parts of) EA was harsher than I intended. My tweet was originally written as a quick follow-up comment to someone who asked why I thought EA’s impact on AI x-risk was only ~55% likely to be positive. I turned it into a top-level tweet because I didn’t want to hide it deep in an existing discussion, but this was an error, given that I didn’t add any extra context.
I also apologize for anything I said that made it sound like I was universally criticizing past or present Open Phil / CG staff (or centrally basing my views on first-hand conversations, for that matter). I already believed that tons of past and present rank-and-file OP/CG staff have very reasonable views, and I happily update further in that direction based on your and Oliver’s statements to that effect (e.g., Oliver’s “I have since updated that more people who are a level below Alexander, Dustin and Dario have more reasonable beliefs”).
I agree that my characterization of “Dario and a cluster of Open-Phil-ish people” was needlessly confusing and sloppy. I wanted to talk about a mix of ‘present-day views that seem to be endorsed by Dario and some other key figures’ and ‘general tendencies and memes that seem pretty widespread and that seem suspiciously related to choices EA leadership made many years ago’, but blurring the two together just muddied things. It also didn’t help that I was sarcastically embedding my criticisms into my summaries of the views.
Insofar as my broad criticism of EA cultural trends/memes is correct (and I think it substantially is), I still feel a fair bit of uncertainty about how to divvy up responsibility between more Open-Phil-ish people, more Oxford-ish people, MIRI / the rats, etc. And of course, some of the problem may stem from broader social or demographic factors that no EA leaders tried to engineer, and that even run counter to what leadership tried to optimize for. (I too remember the early speeches themed around “Keep EA Weird”, the early EA-leader conversations fretting about overly naive EA consequentialism, etc.)