Thanks, this is helpful and I basically accept most of what you’re saying. Some more specific comments on the part about me:
I don’t really think of Rob or MIRI as having a comms strategy of undermining EAs. I think Rob and Eliezer just say a bunch of false things about EAs because they’re mad at them, mostly for reasons downstream of the EAs not agreeing with Eliezer as much as Eliezer and Rob think would be reasonable, along with a few other things.
I accept this criticism and take back my claim. I noticed that some people who worked for MIRI comms seemed to do this, and I assumed that anything said by enough MIRI comms people in a serious-sounding voice was on some level a MIRI communique. Eliezer has clarified that this isn’t true, so I apologize for saying it was.
I think Dario (like various other Anthropic people) does not believe that AI takeover is a very plausible outcome, and I think his position is indefensible on the merits, as are some of his other AI positions (e.g. his skepticism that there are substantial returns to intelligence above the human level, his skepticism that ASI could lead to 2x manufacturing capacity per year). He moderately disagrees with the OP people about this.
I basically agree with this (while wanting to clarify that I think he assigns a pretty high risk to permanent dictatorship or something along those lines), but I think he’s done an okay job of navigating uncertainty, realizing that even a low chance of human extinction is very bad, and being willing to (somewhat) cooperate and collect gains-from-trade with people who are doomier than he is. I see him as living in a consistent worldview next door to our movement’s (sort of like Vitalik or Dean Ball), and I think that, like those two, he’s potentially somewhere between a friend, an ally-of-convenience, and a negotiating partner: convertible into a full ally if future events prove us right, or into a true enemy if we pre-emptively alienate him. Having someone like this in charge of a frontier lab is better than I expected (Demis might also be in this category, but I’m not sure, and I worry that Larry and Sergey have final say).
I think Scott is blaming MIRI much too much here. Dario’s main difficulty when arguing that AI will pose huge catastrophic risk in the next few years is that lots of people find this implausible on priors, not that those people were specifically turned off by MIRI making related arguments earlier. His core audience has never heard of MIRI.
I agree that Dario is being slightly a jerk here, but people have lots of stereotypes of “doomers” which derive from some real behavior of MIRI and PauseAI, and which wouldn’t exist if the median PauseAI person were e.g. the median Constellation person, and I think Dario feels some understandable incentive to distance himself from this.
I disagree with a lot of the claims here about how various aspects of the current situation are good. (E.g. why does he think that Ilya is doing an alignment effort?)
I have no useful knowledge here, but Ilya seems genuinely alignment-pilled and terrified; the fact that he did the very courageous and self-sacrificing thing of blowing up OpenAI to try to get rid of Altman, for what were mostly safety-related reasons, speaks well of him; and, IDK, he’s calling it “safe superintelligence” and saying he won’t release anything at all until he’s sure. I don’t claim any secret expertise in Ilya-ology, but overall all of this seems encouraging, and I’m surprised this part of my tweet attracted so much dissent.
It’s unclear what “you guys” means. I think PauseAI is making a variety of bad strategic choices. Knifing other safety advocates is one of them, but it’s more a bad choice downstream of my main problems with them than my core concern. I think Rob is totally unreasonable and I wish he would stop working on AI safety, and I think he’s much worse than, e.g., MIRI is overall. I think MIRI spends very little of their effort on knifing AI safety advocates; they spend almost all of it on advocating for people to be scared about misalignment risk and advocating for AI pauses (which I am generally in favor of). Eliezer totally does have a hobby of saying ridiculously strawmanny stuff about OP AI people, which I find pretty annoying, but I don’t think it’s a big part of his effect on the world.
I mostly accept your criticism that I should narrow my objections from “MIRI & Co” to “PauseAI, Rob, maybe sort of Eliezer, & a slightly different co”. I don’t really know how to do this, or what one word would cover all of them without inflicting different forms of collateral damage (I don’t want to say “PauseAIers” because that also covers some people I like, and it feels extra-aggressive to name specific names), but I’m open to suggestions.
Some helpful points, thanks. I responded in more depth on Twitter; I don’t want to duplicate every conversation from there here, so I’m just signposting that people should check the thread there for most of my opinions.