I have a better idea now what you intend. At risk of violating the “Not worth getting into?” react, I still don’t think the title is as informative as it could be; summarizing on the object level would be clearer than saying their actions were similar to actions of “moderate accelerationists”, which isn’t a term you define in the post or try to clarify the connotations of.
Who is a “moderate communist”? Hu Jintao, who ran the CCP but in a state capitalism way? Zohran Mamdani, because democratic socialism is sort of halfway to communism? It’s an inherently vague term until defined, and so is “moderate accelerationists”.
I would be fine with the title if you explained it somewhere, with a sentence in the intro and/or conclusion like “Anthropic have disappointingly acted as ‘moderate accelerationists’ who put at least as many resources into accelerating the development of AGI as into ensuring it is safe”, or whatever version of this you endorse. As it is, some readers, or at least I, have to think:
does Remmelt think that Anthropic’s actions would also be taken by people who believe extinction by entropy-maximizing robots is only sort of bad?
Or is it that Remmelt thinks that Anthropic is acting like a company that thinks the social benefits of speeding up AI could outweigh the costs?
Or is the post trying to claim that ~half of Anthropic’s actions sped up AI against their informal commitments?
This kind of triply recursive intention guessing is why I think the existing title is confusing.
Alternatively, the title could be something different, like “Anthropic founders sped up AI and abandoned many safety commitments” or even “Anthropic was not consistently candid about its priorities”. In any case, it’s not clear to me whether it’s worth changing the title versus making some kind of minor clarification.
Thanks, you’re right that I left that undefined. I edited the introduction. How does this read to you?
“From the get-go, these researchers acted in effect as moderate accelerationists. They picked courses of action that significantly sped up and/or locked in AI developments, while offering flawed rationales of improving safety.”