The title is confusing and maybe misleading. When I see “accelerationists” I think of either e/acc or the idea that we should hasten the collapse of society in order to bring about a communist, white supremacist, or other extremist utopia. This is different from accelerating AI progress and, as far as I know, not the motivation of most people at Anthropic.
I get that seeing “accelerationists” gives that association.
I wrote “moderate accelerationists” to try to make the distinction. I’m not saying that Dario’s circle of researchers who scaled up GPT were gung-ho in their intentions to scale, like many e/acc people are. They obviously had safety concerns, tried to delay releases, etc.
I’m just saying they acted as moderate accelerationists.
The title is not perfect, but I had to make a decision here. Hope you understand.
I have a better idea now what you intend. At the risk of violating the “Not worth getting into?” react, I still don’t think the title is as informative as it could be; summarizing on the object level would be clearer than saying their actions were similar to those of “moderate accelerationists”, a term you don’t define in the post or clarify the connotations of.
Who is a “moderate communist”? Hu Jintao, who ran the CCP but in a state capitalism way? Zohran Mamdani, because democratic socialism is sort of halfway to communism? It’s an inherently vague term until defined, and so is “moderate accelerationists”.
I would be fine with the title if you explained it somewhere, with a sentence in the intro and/or conclusion like “Anthropic have disappointingly acted as ‘moderate accelerationists’ who put at least as much resource into accelerating the development of AGI as into ensuring it is safe”, or whatever version of this you endorse. As it is, some readers, or at least I, have to think:
Does Remmelt think that Anthropic’s actions would also be taken by people who believe extinction by entropy-maximizing robots is only sort of bad?
Or is it that Remmelt thinks Anthropic is acting like a company that thinks the social benefits of speeding up AI could outweigh the costs?
Or is the post trying to claim that ~half of Anthropic’s actions sped up AI against their informal commitments?
This kind of triply recursive intention guessing is why I think the existing title is confusing.
Alternatively, the title could be something different, like “Anthropic founders sped up AI and abandoned many safety commitments” or even “Anthropic was not consistently candid about its priorities”. In any case, it’s not clear to me whether it’s worth changing versus making some kind of minor clarification.
Thanks, you’re right that I left that undefined. I edited the introduction. How does this read to you?
“From the get-go, these researchers acted in effect as moderate accelerationists. They picked courses of action that significantly sped up and/or locked in AI developments, while offering flawed rationales of improving safety.”
“acted as” vs “intended” seems to me to be the distinction here. There’s a phrase that was common a few years back which I still like: intent isn’t magic.