Saahil Dama
Recently, it was reported that about a dozen Google employees had resigned over the company’s involvement in the controversial Project Maven. The resignations came in the aftermath of a letter employees wrote to CEO Sundar Pichai protesting Google’s assistance to the U.S. Department of Defense in developing image-recognition technology that would analyze military drone footage to detect objects and track their movements. The project, formally titled the Algorithmic Warfare Cross-Functional Team (or simply Project Maven), aims to provide the DoD with ‘actionable intelligence and decision-quality insights’ which will, ostensibly, be used to build autonomous drones and weapons in the future.
The letter is in the same vein as other letters against autonomous weapons, albeit with added concerns about how association with Project Maven would tarnish Google’s reputation. “Google should not be in the business of war,” the employees write, arguing that the company should not outsource the moral responsibility of its technologies to third parties. To the employees, assisting the DoD in military surveillance, and possibly even autonomous warfare, is completely unacceptable. Their demands are clear: cancel Project Maven and implement a policy stating that Google will not build military technology.
There is nothing new in this debate around autonomous weapons. About a month ago, I published an essay titled “Banning Autonomous Weapons is not the Solution,” which proposed that instead of trying to ban autonomous weapons, the international community should focus on regulating their development and use. This argument was premised on the fact that nations have a significant incentive to possess autonomous weapons owing to their military and strategic benefits. Attempts at a ban would prove futile, especially since non-abiding countries and terrorist groups could use such weapons to cause large-scale destruction. Countries would therefore need an arsenal of autonomous weapons to reduce military casualties and maintain the threat of mutually assured destruction against potential aggressors.
The essay drew an interesting response from the Campaign to Stop Killer Robots, which wrote that pursuing regulation of autonomous weapons at this stage would be tantamount to accepting that it is already too late to retain control over such weapons.
Admittedly, it is not too late to retain control over autonomous weapons. As the Project Maven protests show, people are willing to go to great lengths, even risking their livelihoods, to prevent the proliferation of such weapons. The Guardian recently published an article by three professors who expressed support for the resigning employees and asked Google to pledge against autonomous weapons. From Elon Musk to the late Stephen Hawking, there is widespread agreement that autonomous weapons must be opposed.
And for good reason. It is easy to be terrified at the thought of automated drones indiscriminately slaughtering civilians in the Middle East and robotic battle-tanks razing villages without remorse. But this is precisely the future that Google and Project Maven are seeking to prevent.
Autonomous weapons are the logical next step in modern warfare, for two reasons. First, humans are suboptimal agents in conflict situations, with a history of causing excessive collateral damage and committing human rights violations in the heat of battle. Second, the moment a single country or organization acquires autonomous weapons, it will trigger an arms race, because other countries will need autonomous weapons for self-defense.
If this is accepted as the starting point for a conversation on autonomous weapons, the issue shifts from how to prevent their proliferation to how to build and use them in accordance with international law and human rights. Regulation, not prohibition, becomes key.
Having accepted this, Google and other companies such as Amazon are working to ensure that autonomous weapons do not lead to the dystopian future envisaged by critics. Problems such as the inability to distinguish civilians from combatants or weapons from ordinary objects, excessive collateral damage, and other life-threatening mistakes would largely be a consequence of insufficient or improper data and training. These are precisely the issues Project Maven addresses: labelling data, turning raw data into actionable intelligence, and developing algorithms to accomplish key tasks are among its core objectives. Given the breadth and volume of data that Google possesses, its involvement would help the DoD build safer autonomous weapons systems.
While the world deliberates over what must be done to stop autonomous weapons, Google, with the foresight of a grandmaster, is solving the problems that will arise when the arms industry is flooded with these weapons, as it inevitably will be. Project Maven is a small start, but it is a harbinger of the role that companies like Google will play in shaping the future of autonomous weapons. As a Google spokesperson put it, “The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.”
“Intended to save lives” are the operative words.
Follow Saahil Dama on Twitter @life_trotter
All views and opinions expressed in this article are those of the author, and do not necessarily reflect the opinions or positions of The Defense Post.