Levels of safety for AI and other technologies

What does it mean for AI to be “safe”?

Right now there is a lot of debate about AI safety. But people often end up talking past each other because they’re not using the same definitions or standards.

For the sake of productive debates, let me propose some distinctions to add clarity:

A scale of technology safety

Here are four levels of safety for any given technology:

  1. So dangerous that no one can use it safely

  2. Safe only if used very carefully

  3. Safe unless used recklessly or maliciously

  4. So safe that no one can cause serious harm with it

Another way to think about this is, roughly:

  • Level 1 is generally banned

  • Level 2 is generally restricted to trained professionals

  • Level 3 can be used by anyone, perhaps with a basic license/permit

  • Level 4 requires no special safety measures

All of this is oversimplified, but hopefully useful.

Examples

The most harmful drugs and other chemicals, and arguably the most dangerous pathogens and most destructive weapons of war, are level 1.

Operating a power plant, or flying a commercial airplane, is level 2: only for trained professionals.

Driving a car, or taking prescription drugs, is level 3: we make this generally accessible, perhaps with a modest amount of instruction, and perhaps requiring a license or some other kind of permit. (Note that prescribing drugs is level 2.)

Many everyday or household technologies are level 4. Anything you are allowed to take on an airplane is certainly level 4.

Caveats

Again, all of this is oversimplified. Just to indicate some of the complexities:

  • There are more than four levels you could identify; maybe it’s a continuous spectrum.

  • “Safe” doesn’t mean absolutely or perfectly safe, but rather reasonably or acceptably safe: it depends on the scope and magnitude of potential harm, and on a society’s general standards for safety.

  • Safety is not an inherent property of a technology, but of a technology as embedded in a social system, including law and culture.

  • How tightly we regulate a technology is, in general, not determined by safety alone but by a tradeoff between safety and the technology’s value.

  • Accidental harm and deliberate misuse are arguably different things that might require different scales. Whether we have special security measures in place to prevent criminals or terrorists from accessing a technology may not correlate perfectly with the safety level we would assign it when considering accidents alone.

  • Relatedly, weapons are something of a special case, since they are designed to cause harm. (But to add to the complexity, some items are dual-purpose, such as knives and arguably guns.)

Applications to AI

The strongest AI “doom” position argues that AI is level 1: even the most carefully designed system would take over the world and kill us all. And therefore, AI development should be stopped (or “paused” indefinitely).

If AI is level 2, then it is reasonably safe to develop, but arguably it should be carefully controlled by a few companies that give access only through an online service or API. (This seems to be the position of leading AI companies such as OpenAI.)

If AI is level 3, then the biggest risk is a terrorist group or mad scientist who uses an AI to wreak havoc—perhaps much more than they intended.

AI at level 4 would be great, but this seems hard to achieve as a property of the technology itself—rather, the security systems of the entire world would need to be upgraded to better protect against threats.

The “genie” metaphor for AI implies that any superintelligent AI is either level 1 or 4, but nothing in between.

How this creates confusion

People talk past each other when they are thinking about different levels of the scale:

“AI is safe!” (because trained professionals can give it carefully balanced rewards, and avoid known pitfalls)

“No, AI is dangerous!” (because a malicious actor could cause a lot of harm with it if they tried)

If AI is at level 2 or 3, then both of these positions are correct, and the debate will be fruitless and frustrating.

Bottom line: When thinking about safety, it helps to draw a line somewhere on this scale and ask whether AI (or any technology in question) is above or below the line.


The ideas above were initially explored in this Twitter thread.
