I think in our current situation shutting down all rogue AI operations might be quite tough (though limiting their scale is certainly possible, and I agree with the critique that absent slowdowns or alignment problems or regulatory obstacles etc. it would be surprising if these rogue agents could compete with legitimate actors with many more GPUs).
Assuming the AI agents have money, there are perhaps three remaining constraints they need to deal with:
purchasing/renting the GPUs,
(if not rented through a cloud provider) setting them up and maintaining them, and
evading law enforcement or other groups trying to locate them and shut them down.
For acquiring the GPUs, there are currently a ton of untracked GPUs spread out around the world, both gaming GPUs like 4090s and datacenter GPUs like H100s. I can buy or rent them with extremely minimal KYC from tons of different places. If we assume ARA agents are able to do all of the following:
Recruit people to physically buy GPUs from retail stores
Take delivery of online ordered GPUs in anonymized ways using intermediaries
Recruit people to set up fake AI startups and buy GPUs through them
then the effort required to prevent them from acquiring any GPUs, in a world anything like today's, seems enormous. And even if the US and its allies started confiscating or buying up 4090s en masse, it's enough that some other countries adopt laxer standards (perhaps to attract AI talent frustrated by the new draconian GPU-control regime).
As for setting up acquired GPUs, I think the AI agents would probably be able to find colocation operators in many parts of the world that don't ask too many questions. Even if the world were to coordinate on extremely stringent controls, the AI agents could set up datacenters of their own in random office buildings or similar: each inference setup wouldn't need that much power, and I think it would be very hard to track down.
As for not being shut down by law enforcement: this might take some skill in cybersecurity and opsec, but if the rogue AI operates as dozens or hundreds of separate "cells", each with enough labor to work out decorrelated security and operational practices, then it seems quite plausible that they wouldn't all be shut down. Historically, insurgency groups and hacker networks have proven very difficult to fully eliminate, even when enormous resources are thrown at the problem.
I don't think any of the above would require superhuman abilities, though many parts are challenging. That is part of why evals targeting these skills could provide a useful safety case: if it's clear that an AI could not pull off the cybersecurity operations required to avoid being easily shut down, that's a fairly strong argument that the agent couldn't pose a threat [Edit: from rogue agents operating independently, not including other threat models like sabotaging things from within the lab etc.].
Though again, I am not defending any very strong claim here. E.g., I'm not saying:
that rogue AIs will be able to claim 5+% of all GPUs or an amount competitive with a well-resourced legitimate actor (I think the world could shut them out of most of the GPU supply, and that over time the situation would worsen for the AI agents as the production of more/better GPUs is handled with increasing care),
that these skills alone mean it poses a risk of takeover or could cause significant damage (I agree that this would likely require significant further capabilities, or many more resources, or already being deployed aggressively in key areas of the military etc.), or
that "somewhat dumb AI agents self-replicate their way to a massive disaster" is a key threat model we should be focusing our energy on.
I’m just defending the claim that ~human-level rogue AIs in a world similar to the world of today might be difficult to fully shut down, even if the world made a concerted effort to do so.
Yeah, that's right: I made too broad a claim, and only meant to say it was an argument against their ability to pose a threat as independent rogue agents.