Limit intelligent weapons

Edit: this post currently has more downvotes than upvotes. The title, though broad, is uncontroversial: everything has its limits. Please point out flawed reasoning or the need for further clarification in the comments instead of downvoting.

Let’s agree that the first step toward AI alignment is to refrain from building intelligent machines designed to kill people. It’s a simple principle, and one we need to agree on completely as a global community.

Some will argue in favor of intelligent lethal machines, on grounds such as the following:

- That intelligent weapons kill with more precision, saving innocent lives.
- That intelligent weapons do the most dangerous work, saving soldiers’ lives.

Both of the above points are valid. They do not, however, justify the associated risk: that these machines could turn against the humans they were designed to protect.

Currently, leading militaries around the world are developing and using:

- Drone swarms
- Suicide drones
- Assassin drones
- AI pilots for fighter jets
- Targeting based on facial recognition
- Robot dogs with mounted guns

Given the unpredictable emergent behavior already observed in studies and tests, malicious behavior could emerge in a future artificial intelligence. If it did, a coordinated takeover of intelligent weapons would give it a clear attack vector. Let’s agree as a global community to limit our development of intelligent weapons, and thereby limit the potential damage from an out-of-control AI.