Measuring the outcomes of events that never happened seems impossible, right? Yet, for the US intelligence community, this paradox presents a critical challenge in evaluating the success of preventing or mitigating attacks.
Consider this scenario: homeland security officials receive a tip from a human source that a terrorist network plans to target a local concert.
The venue is notified, security is heightened, and additional law enforcement is deployed. But the attack never occurs. Was the intelligence invalid, or did the would-be attackers notice the increased security and decide to abandon or alter their plans?
Warning Dilemma
This scenario illustrates the catch-22 of successful intelligence known as the warning dilemma.
While preventing harm to civilians is the ultimate objective, the intelligence community often faces skepticism from stakeholders when warnings seem unsubstantiated, costing resources without visible results.
Failing to accurately capture the reasons behind events that do not occur can lead to uninformed congressional oversight, poor decision-making, and misallocated funding. This underscores the need for metrics that capture the effects of thwarted events.
Measuring Prevention
The intelligence community must find ways to communicate its unseen successes to decision-makers and stakeholders by quantifying how often its actions prevent or alter adversarial plans.
But how can something that didn’t happen be measured?
During intelligence collection, analysts and technical specialists could document adversarial changes in tactics, techniques, and procedures following public warnings.
For instance, if a signals intelligence team observes a hostile actor ceasing communications after an alert, the community can infer that its efforts influenced the actor's behavior.
Similarly, geospatial analysts might detect a suspicious vehicle returning to its base following heightened security at a target. This data can help analysts assess whether and how adversaries adjust their intentions, offering a way to measure prevention.
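The measurement idea above can be made concrete. As a minimal sketch, assuming a hypothetical log of warnings paired with whatever behavior change collectors later observed (the record fields and IDs below are illustrative, not drawn from any real intelligence-community schema), an "influence rate" could summarize how often warnings were followed by documented adversarial adjustments:

```python
from dataclasses import dataclass

# Hypothetical record pairing a public warning with post-warning
# collection; field names are illustrative assumptions.
@dataclass
class WarningEvent:
    warning_id: str
    behavior_change_observed: bool  # e.g., comms ceased, vehicle turned back

def influence_rate(events: list[WarningEvent]) -> float:
    """Share of warnings followed by a documented change in
    adversary tactics, techniques, or procedures."""
    if not events:
        return 0.0
    changed = sum(1 for e in events if e.behavior_change_observed)
    return changed / len(events)

events = [
    WarningEvent("W-001", True),   # comms ceased after alert
    WarningEvent("W-002", False),  # no observed change
    WarningEvent("W-003", True),   # vehicle returned to base
]
print(influence_rate(events))  # 2 of 3 warnings ≈ 0.67
```

A single rate like this obviously cannot prove causation, but tracked over time it gives oversight bodies the kind of quantified trend the article argues is missing.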
Codifying Intelligence
Jacob Scheidemann is a current all-source intelligence analyst and intelligence management graduate student.
A former active-duty Army officer with intelligence leadership roles in INDOPACOM and CENTCOM, Jake routinely contributes national security writing to platforms including the Modern War Institute and the Military Times.
The views and opinions expressed here are those of the author and do not necessarily reflect the editorial position of The Defense Post.