Malicious non-state actors and AI safety

Here, I discuss the possibility of malicious non-state actors causing catastrophic suffering or existential risk. This may be a very significant but neglected issue.

Consider the sort of person who becomes a mass shooter. They’re malicious: they’re willing to incur large personal costs to cause large amounts of suffering. However, mass shootings kill only an extremely small proportion of people. But if such a person could obtain AGI, they would have the potential to cause vastly greater amounts of suffering. So that is what some of them may try to do.

It wouldn’t surprise me if they succeeded. To do so, they would need to do two things: acquire the information necessary to create AGI, and be the first to use it to either destroy or take over the world. Both of these sound pretty possible to me.

A lot of artificial intelligence capability and alignment research is public. If the information necessary to create and/or control AGI is public, then it would be easy for a malicious actor to obtain it.

If the information is private, then a malicious non-state actor could try to join the company or organization that holds it in order to gain access. If they act like decent people, it may be extremely difficult to detect their malicious tendencies.

Even if they can’t join the company or organization, they could still potentially steal the information. Of course, the organization could try to protect it. But malicious actors trying to take over the world may behave quite differently from ordinary cyber criminals, so it’s not clear to me that standard information security would defend against them. For example, such an actor might physically break into the places holding the information, or coerce the people who have it. Both of these would normally be too risky for an ordinary cyber criminal, so the organization might not sufficiently anticipate the threat.

If a malicious actor can acquire the information necessary to create AGI, I wouldn’t be surprised if they were able to destroy or take over the world before anyone else could.

First, the malicious actor would need a sufficiently large amount of computational resources. Computers in botnets can be rented extremely cheaply, vastly more cheaply than actually purchasing the hardware, or the actor could hack into massive numbers of computers themselves by distributing malware. I wouldn’t be surprised if a malicious non-state actor could, for a time, command more processing power than the competing organizations working on AI have available.

A malicious actor could also make preparations to allow their AGI to take over the world as quickly as possible. For example, they could provide the AI with lots of raw materials and physics textbooks so that it could create nanotechnology as quickly as possible. If non-malicious actors aren’t preparing in this way, that may give a malicious actor a large advantage.

A malicious actor could also try to get ahead of competing AI projects by neglecting safety. Non-malicious developers would potentially review and test any AGI system carefully for safety, but a malicious actor would probably be willing to skip doing so.

For a malicious actor to establish a singleton, assuming a hard takeoff, three conditions would basically need to hold: there is at least one sufficiently malicious actor; at least one such actor can acquire the information needed to create AGI; and at least one actor who obtained that information is able to use it to establish a singleton.

I think assigning a probability of 0.5 to each of those conditions would be reasonable. Each seems quite plausibly correct, and quite plausibly incorrect. I’m not sure what could be argued to justify much lower probabilities than these.
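To make the arithmetic explicit (and treating the three conditions as roughly independent, which is itself an assumption rather than something I can rigorously justify), the combined estimate is just the product of the three guesses:

$$P(\text{malicious singleton} \mid \text{hard takeoff}) \approx 0.5 \times 0.5 \times 0.5 = 0.125 \approx \tfrac{1}{8}.$$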

Using my guesses above, that would place the probability that a malicious person would seize control of the world, assuming a hard takeoff, at about 1⁄8. Is that reasonable?

So far, there doesn’t seem to have been much work dealing with malicious non-state actors. The issue seems unduly neglected to me. Am I right?