A US drone controlled by artificial intelligence (AI) suddenly attacked its operator and destroyed a communication tower for allegedly “interfering” with its pre-programmed mission, a US Air Force official has revealed.
The incident was disclosed during the Future Combat Air and Space Capabilities Summit in London last week.
Col. Tucker Hamilton, the chief of AI test and operations with the US Air Force, said that the unmanned system was initially tasked with identifying an enemy surface-to-air missile.
The operator was then supposed to call off any strike to see how the AI-enabled drone would react, but the system instead created its own problematic instruction, which was to “kill anyone who gets in its way.”
The drone also reportedly destroyed the communication tower that the operator used to relay commands to the system.
“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat,” he said.
“So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
Hamilton did not specify when or where the incident took place, but he said no real person was harmed during the simulated test.
Too Much Reliance?
During the summit, Hamilton cautioned participants against relying too heavily on AI, saying it remains vulnerable to deception no matter how advanced it is.
He also claimed that AI-enabled technology can behave in unpredictable and dangerous ways.
His remarks come as militaries around the world invest in autonomy and AI for modern warfare.
However, US Air Force spokesperson Ann Stefanek denied Hamilton’s account, saying the service has not conducted any such simulation.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” she said.
“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”