
Purdue Working to Make Military Drone Software Hackproof

Christened SCRAMBLE, the prototype software suite will address vulnerabilities in autonomous systems in three ways.

For illustrative purposes only. Photo: Ben Stansall/AFP

Purdue University has embarked on a five-year project, in collaboration with Princeton University, to make the software of military drones and other unmanned machines hackproof.

The project, part of the Army Research Laboratory’s Army Artificial Intelligence Institute, is funded with up to $3.7 million.

Christened SCRAMBLE (SeCure Real-time Decision-Making for the AutonoMous BattLEfield) by its developers, the prototype software suite is designed to provide more secure autonomous operations.

Purdue said in a statement that, to achieve this goal, the team will focus on better protecting the machine learning algorithms that military drones and other automated machines use to execute maneuvers.

“The implications of insecure operation of these machine learning algorithms are very dire,” said Saurabh Bagchi, a Purdue professor of electrical and computer engineering and the project’s principal investigator.

“If your platoon mistakes an enemy platoon for an ally, for example, then bad things happen. If your drone misidentifies a projectile coming at your base, then again, bad things happen. So you want these machine learning algorithms to be secure from the ground up.”

Three-Part Strategy to Secure Autonomous Systems

An autonomous system can be hacked at several points in its operation, Bagchi said, for example by manipulating the process that technicians use to feed data into algorithms and train them offline. Such security breaches can occur even before a system is deployed.

SCRAMBLE, the developers say, will address these vulnerabilities in three ways.

First, by making the algorithms robust enough to operate on uncertain, incomplete, or maliciously manipulated data sources.

“Malicious agents can insert bogus or corrupted information into the stream of data that an artificial intelligence system is using to learn, thereby compromising security,” explained Prateek Mittal, an associate professor of electrical engineering and computer science at Princeton, who is also part of the project.

“Our goal is to design trustworthy machine learning systems that are resilient to such threats.”
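Purdue has not published SCRAMBLE’s actual training defenses, but a standard technique against the kind of data poisoning Mittal describes is robust aggregation: combining updates from many data sources with a statistic that a small minority of corrupted inputs cannot steer. The following minimal sketch (all names and values are illustrative, not from the project) uses a coordinate-wise trimmed mean:

```python
import numpy as np

def trimmed_mean(updates: np.ndarray, trim_fraction: float = 0.2) -> np.ndarray:
    """Aggregate model updates robustly.

    updates: array of shape (n_sources, n_params), one row per data source.
    trim_fraction: fraction of extreme values dropped at each end, per
    coordinate, so a minority of poisoned sources cannot drag the mean.
    """
    n = updates.shape[0]
    k = int(n * trim_fraction)                 # rows to trim at each end
    sorted_updates = np.sort(updates, axis=0)  # sort each coordinate independently
    trimmed = sorted_updates[k : n - k]        # drop the k smallest and k largest
    return trimmed.mean(axis=0)

# Example: 9 honest sources plus 1 that injects a huge bogus update.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(9, 4))
poisoned = np.full((1, 4), 50.0)               # corrupted contribution
all_updates = np.vstack([honest, poisoned])

print(all_updates.mean(axis=0))    # plain mean is pulled far off by the outlier
print(trimmed_mean(all_updates))   # trimmed mean stays near the honest values
```

A plain average is dragged far off course by the single corrupted row; trimming the extremes before averaging keeps the aggregate close to the honest sources.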

Second, the prototype will include a set of “interpretable” machine learning algorithms that can explain changes in SCRAMBLE’s “operating environment,” which may stem from causes as divergent as benign weather shifts or adversarial cyberattacks, so that operators understand the reasons behind a change.

“These changes can significantly degrade the accuracy of the autonomous system or signal an enemy attack,” said Purdue’s David Inouye, who is designing this particular aspect of the system. “Explaining these changes will help warfighters decide whether to trust the system or investigate potentially compromised components.” 
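The project’s interpretable algorithms have not been detailed publicly. As a rough illustration of the idea Inouye describes, the sketch below (hypothetical feature names, deliberately simple statistics) compares recent sensor readings against a training-time baseline and names each feature that has drifted, giving an operator a concrete starting point for deciding whether the cause is benign or hostile:

```python
import numpy as np

def explain_drift(baseline: np.ndarray, recent: np.ndarray,
                  feature_names: list[str], z_threshold: float = 3.0) -> list[str]:
    """Flag input features whose recent readings drift from the training baseline.

    Returns human-readable messages naming each drifted feature, so an
    operator can judge whether the cause is benign (e.g., weather) or hostile.
    """
    mu, sigma = baseline.mean(axis=0), baseline.std(axis=0) + 1e-9
    z = np.abs(recent.mean(axis=0) - mu) / sigma   # shift in units of baseline std
    return [f"{name}: shifted {score:.1f} sigma from baseline"
            for name, score in zip(feature_names, z) if score > z_threshold]

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, size=(1000, 3))
recent = rng.normal(0.0, 1.0, size=(50, 3))
recent[:, 2] += 5.0                       # simulate a sudden shift in one sensor

for msg in explain_drift(baseline, recent, ["altitude", "heading", "ir_intensity"]):
    print(msg)    # e.g. "ir_intensity: shifted 5.0 sigma from baseline"
```

The point of the exercise is the output, not the detection method: instead of a bare accuracy drop, the operator sees which input changed and by how much.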

The third strategy is to provide a “secure, distributed execution of these various machine learning algorithms on multiple platforms in autonomous operation,” Purdue said.

“The goal is to make all of these algorithms secure despite the fact that they are distributed and separated out over an entire domain,” said Somali Chaterji, co-principal investigator on the project.
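Purdue has not said how SCRAMBLE will secure execution across platforms. One basic building block for the distributed setting Chaterji describes is an integrity check: before a platform executes an algorithm it received, it verifies that the bytes match a digest approved ahead of time over a trusted channel. The sketch below uses hypothetical file names and placeholder digest values:

```python
import hashlib

# Digests of approved model files, distributed ahead of time over a trusted
# channel (placeholder values; in practice these would be cryptographically signed).
APPROVED_DIGESTS = {
    "target_id_model.bin": "3f5a...",
    "nav_policy.bin": "9c1d...",
}

def verify_model(path: str) -> bool:
    """Refuse to load a model whose bytes don't match the approved digest."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return APPROVED_DIGESTS.get(path) == digest

# Each platform checks every algorithm it receives before executing it:
# if not verify_model("target_id_model.bin"):
#     raise RuntimeError("model integrity check failed; refusing to execute")
```

In practice the approved digests would themselves be signed, so a compromised node could not simply rewrite the table.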

Once the prototype is ready, US Army researchers will evaluate whether it can feasibly be deployed on the battlefield and whether it avoids imposing “cognitive overload” on its users.
