I think you’re right, and also it seems misleading / like a bad clustering to lump “the EAs” in with “Anthropic’s leadership”. I think those groups have some memetic connections, but they’re not the same group!
More than 50% of the talent-weighted safety people in EA are literally employees of Anthropic! The ex-CEO of Open Phil now works at Anthropic, and is married to one of its founders. These groups have enormous overlap.
Like, there is such enormous overlap, and the overlap results in such an enormous amount of de-facto deference (being an employee of a company is approximately the strongest common deference relationship we have) that it makes sense to think of these as closely intertwined.
Yes, there are people who attach the EA label to themselves who are different here, sometimes even quite substantial clusters. But it’s also IMO clear from Scott’s response that he himself is also majorly deferring and is majorly supportive of Anthropic as a representative of EA, so this clearly isn’t just a split between “everyone who works at Anthropic and everyone who doesn’t”.
Rob used “Open Phil” exactly two times. One time saying “a cluster of Dario and Open-Phil-ish people” and another time “EAs / Open Phil” in reference to the broader community that includes all of these things. These seem like totally reasonable ways of using these pointers and words. I don’t have anything better. It’s definitely not “just Anthropic”, as I think Scott very unambiguously demonstrates, and it would of course be extremely confusing to refer to Scott as “Anthropic”.
Imagine re Open Phil and hardcore rationalists: “the ex-CEO of MIRI now works at Open Phil, and the CEO of Lightcone is dating an Open Phil employee. These groups have enormous overlap.”
Yes. People can have a lot of social overlap, yet have very different views from one another, especially in the broader Bay Area intellectual ecosystem. My sense is that Anthropic leadership has very different views from most AI safety EAs.
More than 50% of the talent-weighted safety people in EA are literally employees of Anthropic!
Why do you think this? I’m skeptical this is true, especially if you’re including non-technical talent.
Why do you think this? I’m skeptical this is true, especially if you’re including non-technical talent.
IDK, I counted them? I made some spreadsheets over the years, and ran this number by a bunch of other people, and my current guess is that it’s around 55%? When I list organizations with full-time employees working in safety I actually end up at substantially above 50% of people working at Anthropic, but I think that’s overcounting.
My sense is that Anthropic leadership has very different views from most AI safety EAs.
I think there are differences and overlaps. I think Rob points to a thing that is shared across a cluster that spans both of them, and has historically had a lot of influence.