It may not always be possible to enforce strict rules requiring meaningful human control over artificial intelligence combat systems, Gen. John Murray, head of US Army Futures Command, said during a webinar yesterday. The Pentagon may have to relax those rules when it comes to defeating drone swarms, he said.
The head of the Army command responsible for modernization said that some drones move too quickly for soldiers to track, and that defeating them will require AI for faster target recognition.
The Defense Department has so far consistently emphasized the importance of human control over the firing of lethal weapons.
“It just becomes very hard when you are talking about swarms of small drones — not impossible, but harder,” Gen. Murray said.
Military Use of AI
Artificial intelligence is widely regarded as a game changer for the military, and countries around the world are investing heavily to modernize their forces with AI.
China articulated its military AI strategy in 2017, while Russia expects to have significantly improved its AI technology by 2024. Meanwhile, countries such as the UK, Israel, Brazil, Australia, South Korea, and Iran are also integrating AI into their militaries.
Last week, the EU adopted a set of more stringent ethical guidelines for the military use of AI. Among other things, the guidelines recommend prohibiting the development, production, and use of autonomous weapon systems "without meaningful human control."