This quote from Anthropic's report on the large scale Claude code cyberattack seems utterly comical to me:

"This raises an important question: if AI models can be misused for cyberattacks at this scale, why continue to develop and release them? The answer is that the very abilities that allow Claude to be used in these attacks also make it crucial for cyber defense."

Instead of trying to present any kind of utopian vision of the benefits of AI, someone at Anthropic decided to sell us the image of an internet dominated by endless cyberwar, trapped in a perverse feedback loop of escalating speed and incomprehensibility.

Good. If this is what the authors believe the future holds, it's much better that they say it than search for a rosy-sounding justification.

You are probably right. For someone arguing for the benefits of AI, I certainly can't accuse this writer of being misleadingly optimistic.

But personally I've recently found it quite disconcerting how bleak an image of the future people who work in AI (on both sides of the capabilities/safety divide) seem to be willing to work towards building.

Overcoming this kind of reflexive defeatism seems to me much harder than simply convincing people that, as a matter of fact, we are going in a bad direction.