The US Special Operations Command (USSOCOM) has contracted New York-based Accrete AI to deploy software that detects disinformation threats on social media in “real time.”
The company’s Argus anomaly detection AI software analyzes social media data to capture “emerging narratives” and generate intelligence reports that military forces can use to quickly neutralize disinformation threats.
“Synthetic media, including AI-generated viral narratives, deep fakes, and other harmful social media-based applications of AI, pose a serious threat to US national security and civil society,” Accrete founder and CEO Prashant Bhuyan said.
“Social media is widely recognized as an unregulated environment where adversaries routinely exploit reasoning vulnerabilities and manipulate behavior through the intentional spread of disinformation.
“USSOCOM is at the tip of the spear in recognizing the critical need to identify and analytically predict social media narratives at an embryonic stage before those narratives evolve and gain traction. Accrete is proud to support USSOCOM’s mission.”
The US Department of Defense first partnered with Accrete in November 2022 under a licensing contract for the Argus platform.
Enterprise Version for Business
The company also revealed that it will launch an enterprise version of Argus Social for disinformation detection later this year.
The AI software will address “urgent customer pain points” by providing protection against AI-generated synthetic media, such as viral disinformation and deep fakes.
Providing this protection requires AI that can automatically “learn” what is most important to an enterprise and predict the likely social media narratives that will emerge before they influence behavior.
The enterprise offering, Nebula Social, helps customers manage AI-generated media risks such as smear campaigns, according to the company. It also autonomously generates timely, relevant content to counter such malicious attacks.
“Government agencies and enterprises alike have an urgent need to manage a plethora of risks and opportunities posed by AI-generated synthetic media,” Bhuyan said.
“Companies are already experiencing significant economic damage caused by the spread of AI-generated viral disinformation and deep fakes manufactured by competitors, disgruntled employees, and other types of adversaries. We believe that the market for AI that can predict and neutralize malign AI-generated synthetic media is about to explode,” he added.