Air Force Denies Running Simulation Where AI Drone ‘Killed’ Operator
An Air Force colonel in charge of overseeing the service’s artificial intelligence research is now denying a previous statement in which he said an AI-enabled drone went rogue and killed its own operator. Col. Tucker “Cinco” Hamilton, head of the US Air Force’s AI Test and Operations, had been speaking at a conference last week in London when he shocked his audience by recounting a story of a rogue drone killing its handler during a simulation, in a fashion similar to something out of a Terminator movie. After reports of the conference emerged this week, however, Hamilton publicly addressed his comments, and now says that they were all hypothetical and part of a “thought experiment.”
In a series of remarks recorded on the conference website, Hamilton described the supposed simulation on May 24 to the Royal Aeronautical Society, saying that they were training an AI to “identify and target a [surface-to-air missile threat].”
“And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
In a shocking twist, Hamilton then described researchers trying to retrain the AI to avoid a similar incident, only for it to destroy a communications tower to prevent its mission from being interfered with.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
On their website, the Aeronautical Society described the story as “seemingly plucked from a science fiction thriller.”
That story about the AI drone 'killing' its imaginary human operator? The original source being quoted says he 'mis-spoke' and it was a hypothetical thought experiment never actually run by the US Air Force, according to the Royal Aeronautical Society.https://t.co/lFZt7Tk9lq pic.twitter.com/iZWOEk6fXp
— Georgina Lee (@lee_georgina) June 2, 2023
Not long after the conference, reports of the Air Force experimenting with a ‘killer drone’ quickly went viral, prompting a response from Hamilton, who said that he “misspoke” and was only trying to convey a “thought experiment.”
“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” Hamilton told the Society in an updated release. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.”
In a statement to Business Insider, Air Force spokesperson Ann Stefanek also denied that any such experiment took place, calling the shocking report “anecdotal.”
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
The whole "US Air Force tests an AI in simulation; it decides to kill its operator" was literally… a story. Something someone made up. There was no AI, no simulation.
This is part of constant drumbeat of made up AI fear mongering.
— François Chollet (@fchollet) June 2, 2023
Other stories you may want to read:
- Gaetz: Gov’t Shutdown Not ‘Optimal,’ But Better Than ‘Financial Ruin’ - September 25, 2023
- DeSantis: GOP Plays Too Safe Against Left – ‘Gotta Have A Spine’ - September 25, 2023
- Washington Post Slapped With Blistering Fact Check Over ‘Pizzafest’ Hit Piece - September 24, 2023