US DoD Kicks Off Inaugural AI Risk Exercise
The US Department of Defense has initiated its first digital exercise focusing on the risks of generative artificial intelligence (AI) systems.
The AI Bias Bounty leverages crowdsourced partners to help the agency produce new methods for cyber auditing and red-teaming activities.
Practices honed through the exercise will be used to identify unknown threats in large language models, including chatbots, that are trained with diverse text data to give users the most probable results or decisions depending on the deployment context.
Under the exercise’s first two activities, the US DoD Chief Digital and AI Office (CDAO) has collaborated with New York-based Conductor AI, Australian company Bugcrowd, and BiasBounty.AI.
“The [CDAO Responsible AI] team is thrilled to lead these AI Bias Bounties, as we are strongly committed to ensuring that the Department’s AI-enabled systems – and the contexts in which they run – are safe, secure, reliable, and bias free,” US Defense Responsible AI Acting Chief Dr. Matthew Johnson stated.
AI Bias Bounty
The Pentagon wrote that the exercise “encourages public involvement” without coding requirements to detect, mitigate, and control AI risks.
Participants will earn monetary bounties according to performances evaluated by the ConductorAI-Bugcrowd consortium.
The first phase has begun this month and will conclude in February, with the second phase to be announced.
The US government is expected to utilize results from the exercise for additional analysis, best practices, studies, and policy recommendations.
“Given the Department’s current focus on risks associated with [large language models], the CDAO is actively monitoring this area; the outcome of the AI Bias Bounties could powerfully impact future [Department of Defense] AI policies and adoption,” CDAO Officer Craig Martell said.