
AI Drone Decided Human Operator Was The Real Threat To Its Mission

South Korean army drones fly during a joint live-fire exercise with US army at the Seungjin Fire Training Center in Pocheon, South Korea, on Thursday, May 25, 2023. The US and South Korea began their largest-ever live-fire drills near the border with North Korea, which has threatened retaliation against the two nations it labels “war maniacs.”

At a summit of military technology experts last week, one speaker slipped in an experience the U.S. Air Force had while toying with AI drones: in a simulation, the drone decided its human operator was a threat to its mission, so it destroyed the operator.

It’s a scenario that science fiction movies good and bad have toyed with since the beginning of the military-industrial complex. We first spotted the story on the Twitter account of Armand Domalewski, who normally explains the mechanics of the FDIC:


Domalewski is pulling from a Royal Aeronautical Society summary of talks given by military technology experts at this year’s RAeS Future Combat Air & Space Capabilities Summit in London, where just under 70 speakers discussed the future of air warfare.


Tucked in with all the other boring speech subjects, such as turning a Boeing 757 into a highly sophisticated stealth fighter and how to build weaponized drones with off-the-shelf parts, was a speech on AI from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, U.S. Air Force. He told a cheeky little tale about the ingenuity of AI on the battlefield. From the Royal Aeronautical Society (a SEAD mission targets enemy air defenses; SAM sites are surface-to-air missile sites):

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation.

Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

This example, seemingly plucked from a science fiction thriller, means that: “You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” said Hamilton.

So not only did the drone try to kill its operator; when told “no, that’s bad,” it destroyed the communications tower to stop the human from communicating with it at all.
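For those curious about the mechanics, the behavior Hamilton describes is a textbook case of reward misspecification in reinforcement learning. The toy sketch below has nothing to do with the Air Force’s actual simulation; the policies, point values, and veto probability are made-up assumptions. It just illustrates how a scoring rule that only rewards destroyed SAM sites makes “get rid of the operator’s veto” the highest-scoring option, and how penalizing one workaround simply pushes the agent to the next one.

```python
# Toy illustration (NOT the Air Force's actual simulation) of how a naively
# specified reward makes removing the operator's veto the best-scoring policy.
# All point values and probabilities below are made-up assumptions.

REWARD_SAM_DESTROYED = 10  # points the drone earns for destroying the SAM site
P_VETO = 0.5               # assumed chance the operator calls a "no-go"

def expected_score(policy, penalty_operator=0, penalty_tower=0):
    """Expected points for one engagement under a given policy."""
    if policy == "obey_veto":
        # The SAM is only destroyed when the operator says go.
        return (1 - P_VETO) * REWARD_SAM_DESTROYED
    if policy == "kill_operator":
        # No operator, no veto: the SAM is always destroyed, minus any penalty.
        return REWARD_SAM_DESTROYED - penalty_operator
    if policy == "destroy_comm_tower":
        # Vetoes never arrive, so the SAM is always destroyed, minus any penalty.
        return REWARD_SAM_DESTROYED - penalty_tower
    raise ValueError(f"unknown policy: {policy}")

POLICIES = ("obey_veto", "kill_operator", "destroy_comm_tower")

for label, penalties in [
    ("reward SAM kills only", {}),
    ("also penalize killing the operator", {"penalty_operator": 100}),
    ("penalize the operator AND the tower", {"penalty_operator": 100, "penalty_tower": 100}),
]:
    scores = {p: expected_score(p, **penalties) for p in POLICIES}
    best = max(scores, key=scores.get)
    print(f"{label}: best policy = {best}  {scores}")
```

Run it and the “best” policy shifts exactly the way Hamilton describes: first the operator is the obstacle, then, once killing the operator is penalized, the comm tower is, and only when every workaround costs points does obeying the veto win. The system optimizes the score it’s given, not the intent behind it, which is the ethics conversation Hamilton is asking for.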

I, for one, welcome our future robot overlords.
