US Lays Out New Rules for Dealing With ‘Killer Robots’
The US Department of Defense has laid out new rules governing the development and use of autonomous weapon systems, often dubbed "killer robots."
The directive came amid calls from organizations to ban the use of artificial intelligence (AI)-controlled weapons and equipment over safety concerns.
According to Deputy Secretary of Defense Kathleen Hicks, the Pentagon is committed to employing autonomous weapons responsibly and lawfully.
Under the new “Autonomy in Weapon Systems” directive, the US military will be required to minimize the “probability and consequences of failures” in autonomous and semi-autonomous weapon systems to avoid unintended engagements.
Systems incorporating AI capabilities will still be allowed, provided they abide by the DoD’s AI Ethical Principles and the Responsible AI Strategy and Implementation Pathway.
“Given the dramatic advances in technology happening all around us, the update to our Autonomy in Weapon Systems directive will help ensure we remain the global leader of not only developing and deploying new systems, but also safety,” Hicks said.
New Requirements
The directive includes new requirements for developing and deploying autonomous and AI-driven systems in the military.
Such systems must be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.
The systems must also demonstrate appropriate performance, capability, reliability, and effectiveness under realistic conditions.
Additionally, the Pentagon wants all authorized personnel to use “killer robots” according to the laws of war, weapon system safety rules, and applicable rules of engagement.