[Linkpost] A Chinese AI optimized for killing

The AI depicted in the Terminator movies is rather stupid: there are much more efficient ways to kill all humans than robots with guns.

We can safely ignore the unrealistic Terminator-like scenario of AI X-risk.

...Or can we?

Tsinghua University, a top university in Beijing, is heavily involved in research for the Chinese military. One of its military-affiliated labs is called “The State Key Laboratory of Intelligent Technology and Systems”.

In 2021, two of the university’s researchers released a paper called “Counter-Strike Deathmatch with Large-Scale Behavioural Cloning”.

Some highlights:

  • The rewards are calculated as r = 1.0K − 0.5D − 0.02F, where K counts kills, D the agent’s own deaths, and F shots fired (a minimal sketch of this computation follows the list). One could interpret it as follows: 1) the agent must kill, 2) the agent must protect its own existence, as long as such protection does not conflict with the first rule, 3) the agent must spare ammunition, as long as doing so does not conflict with the first and second rules.

  • “To determine when to stop training, we evaluated the agent after each epoch, measuring kills-per-minute”

  • “Kill/death ratio (K/D) is the number of times a player kills an enemy compared to how many times they die. Whilst useful as one measure of an agent’s performance, more information is needed – avoiding all but the most favourable firefights would score a high K/D ratio, but may be undesirable. We therefore also report kills-per-minute (KPM). A strong agent should have both a high KPM and high K/D”

  • “In this paper we take on such a challenge; building an agent for Counter-Strike: Global Offensive (CSGO), with no access to an API, and only modest compute resources (several GPUs and one game terminal).”

  • “Our solution uses behavioural cloning—training on a large noisy dataset scraped from human play on online servers...” (a toy sketch of this training setup appears below the list)

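For concreteness, here is a minimal sketch of the per-step reward computation (the function and argument names are mine, not the paper’s):

```python
def reward(kills: int, deaths: int, shots_fired: int) -> float:
    """Reward from the paper: r = 1.0*K - 0.5*D - 0.02*F.

    Each kill earns 1.0, each of the agent's own deaths costs 0.5,
    and each shot fired costs 0.02 (a small penalty against wasting ammunition).
    """
    return 1.0 * kills - 0.5 * deaths - 0.02 * shots_fired


# One kill, no deaths, ten shots fired: 1.0 - 0.0 - 0.2 = 0.8
assert abs(reward(kills=1, deaths=0, shots_fired=10) - 0.8) < 1e-9
```
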
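Behavioural cloning here is just supervised learning: a network is trained to predict the human player’s action from the on-screen frame. Below is a toy PyTorch sketch of the idea; the architecture, action-space size, and hyperparameters are assumptions of mine, not values from the paper:

```python
import torch
import torch.nn as nn

# Toy stand-in for the paper's setup: a convnet maps a screen frame to a
# distribution over discrete actions, trained to imitate recorded human play.
# All sizes here are illustrative assumptions, not the paper's values.
N_ACTIONS = 51  # hypothetical size of a discretised action space

model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 14 * 14, N_ACTIONS),  # logits; 14x14 feature map for 128x128 input
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def bc_step(frames: torch.Tensor, human_actions: torch.Tensor) -> float:
    """One behavioural-cloning update: predict the human's action from the frame."""
    loss = loss_fn(model(frames), human_actions)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

# Smoke test with random stand-in data: a batch of 8 RGB frames, 128x128.
frames = torch.rand(8, 3, 128, 128)
actions = torch.randint(0, N_ACTIONS, (8,))
print(bc_step(frames, actions))
```

The paper’s actual model and dataset are much larger; the point is only that the core training loop is ordinary supervised learning on demonstrations scraped from human play.
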
The article also links to a video of the agent in action.

From the article and the authors’ affiliation, I drew the following conclusions:

  • It is likely that China is already working on fully autonomous weaponry.

  • One can already build autonomous weaponry with very modest computational resources and publicly available data.

  • Efficient autonomous weaponry of mass destruction doesn’t require human-level intelligence. In the game environment, even the primitive agent described in the article can kill at a rate of about 30 humans per hour (0.5 kills per minute), while fighting against skilled, armed humans. Slightly more intelligent, mass-produced slaughterbots may be able to decimate entire cities in hours.

  • Some researchers really should stop for a minute and ask themselves: should I be building an AI optimized for killing people while working at a university involved in AI research for the Chinese military?