You are obviously not in the AGI uniparty (e.g. you chose to leave despite great financial cost).
Basically I think it’s pretty accurate at describing the part of the community that inhabits, and is closely entangled with, the AI companies, but inaccurate at describing e.g. MIRI or AIFP or most of the orgs in Constellation, or FLI or … etc.
I agree with most of these, though my vague sense is some Constellation orgs are quite entangled with Anthropic (e.g. sending people to Anthropic, Anthropic safety teams coworking there, etc.), and Anthropic seems like the cultural core of the AGI uniparty.
FWIW, I disagree that Anthropic is the cultural core of the AGI uniparty. I think you think that because “Being EA” is one of the listed traits of the AGI uniparty, but that’s maybe one of the places I disagree with the author. “Being EA” is maybe a common trait in AI safety (though a decreasingly common one, unfortunately, IMO), but it’s certainly not a common trait in the AI companies, and I think the AGI uniparty should be a description of the culture of the companies rather than of the culture of AI safety more generally (otherwise, it’s just false). I’d describe the AGI uniparty as the people for whom this is true:
One cannot believe that AI development should stop entirely. One cannot believe that the risks are so severe that no level of benefit justifies them. One cannot believe that the people currently working on AI are not the right people to be making these decisions. One cannot believe that traditional political processes might be better equipped to govern AI development than the informal governance of the research community.
...and I’m pretty sure that, while this is true for Anthropic, OpenAI, xAI, GDM, etc., it’s probably somewhat less true for Anthropic than for the others, or at least than for OpenAI.
OK, cool.