
The Warning Dilemma: Why the Intelligence Community Must Capture What Didn’t Happen

Measuring outcomes that never happened demonstrates the effectiveness of preventive actions, justifies funding, and builds trust with decision-makers and the public.

US cyber force in action. Photo: US Cyber Command

Measuring the outcomes of events that never happened seems impossible, right? Yet, for the US intelligence community, this paradox presents a critical challenge in evaluating the success of preventing or mitigating attacks.

Consider this scenario: homeland security officials receive a tip from a human source that a terrorist network plans to target a local concert.

The venue is notified, security is heightened, and additional law enforcement is deployed. But the attack never occurs. Was the intelligence invalid, or did the would-be attackers notice the increased security and decide to abandon or alter their plans?

Warning Dilemma

This scenario illustrates the catch-22 of successful intelligence known as the warning dilemma.

While preventing harm to civilians is the ultimate objective, the intelligence community often faces skepticism from stakeholders when warnings appear unsubstantiated, consuming resources without producing visible results.

Failing to capture accurately why an anticipated event did not occur can lead to poorly informed congressional oversight, policymaking, and funding decisions. This highlights the need for metrics that capture the effects of thwarted events.

Operations at the US Cyber Command. Photo: US Navy

Measuring Prevention

The intelligence community must find ways to communicate its unseen successes to decision-makers and stakeholders by quantifying how often its actions prevent or alter adversarial plans.

But how can something that didn’t happen be measured?

During intelligence collection, analysts and technical specialists could document changes in adversaries' tactics, techniques, and procedures following public warnings.

For instance, if a signals intelligence team observes a hostile actor ceasing communications after an alert, the community can infer that its efforts influenced the actor's behavior.

Similarly, geospatial analysts might detect a suspicious vehicle returning to its base following heightened security at a target. This data can help analysts assess whether and how adversaries adjust their intentions, offering a way to measure prevention.
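For such observations to be usable later, agencies would need a consistent way to record them. The sketch below, in Python, shows one hypothetical record format for logging post-warning behavioral changes and tallying them by how strongly analysts link the change to the warning; the field names and sample entries are illustrative, not drawn from any real intelligence system.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record format for logging adversary behavior changes
# observed after a warning or defensive measure. Field names and
# values are illustrative only.
@dataclass
class PostWarningObservation:
    warning_id: str       # identifier of the warning or alert issued
    observed_on: date     # when the behavioral change was observed
    discipline: str       # e.g., "SIGINT", "GEOINT", "HUMINT"
    change_observed: str  # what the adversary did differently
    assessed_link: str    # analyst judgment: "likely", "possible", "unrelated"

observations = [
    PostWarningObservation("W-2024-017", date(2024, 6, 3), "SIGINT",
                           "hostile actor ceased communications after alert", "likely"),
    PostWarningObservation("W-2024-021", date(2024, 7, 9), "GEOINT",
                           "suspect vehicle returned to base after security surge", "possible"),
]

# Count observations by assessed link strength for later reporting.
by_link: dict[str, int] = {}
for obs in observations:
    by_link[obs.assessed_link] = by_link.get(obs.assessed_link, 0) + 1
print(by_link)  # {'likely': 1, 'possible': 1}
```

Even a simple tally like this gives an agency something concrete to cite when asked what a warning changed on the ground.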

Codifying Intelligence

One way to codify such intelligence success is through mandated confidence and probability scales. Intelligence Community Directive 203, for example, sets analytic tradecraft standards that include standardized terms and probability ranges for expressing how likely an adversarial action is.

An analyst who judged an attack to be almost certain, for instance, could record a probability of 95 to 99 percent before an intervention. When applied to disrupted events, these scales allow the community to communicate how national security measures altered potential outcomes.

Aggregating these metrics enables agencies to report on their impact effectively, making a strong case for funding, building public trust, and sharing best practices with partners.
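As a rough illustration of how such reporting could work, the sketch below pairs a pre-intervention likelihood judgment with a post-intervention one for each disrupted event, using the probability ranges ICD 203 attaches to terms such as "almost certain," and averages the assessed reduction. The events and the "average reduction" metric are invented for this example; only the band definitions follow the directive.

```python
# Probability ranges associated with ICD 203 likelihood terms.
ICD_203_BANDS = {
    "almost no chance":    (0.01, 0.05),
    "very unlikely":       (0.05, 0.20),
    "unlikely":            (0.20, 0.45),
    "roughly even chance": (0.45, 0.55),
    "likely":              (0.55, 0.80),
    "very likely":         (0.80, 0.95),
    "almost certain":      (0.95, 0.99),
}

def midpoint(term: str) -> float:
    """Return the midpoint of the probability band for a likelihood term."""
    lo, hi = ICD_203_BANDS[term]
    return (lo + hi) / 2

# Hypothetical disrupted events, each pairing the analyst's likelihood
# judgment before the warning/intervention with the judgment after it.
disrupted_events = [
    {"event": "concert plot",      "before": "almost certain", "after": "unlikely"},
    {"event": "network intrusion", "before": "very likely",    "after": "very unlikely"},
]

# Average reduction in assessed likelihood across disrupted events --
# one candidate "prevention" metric an agency could report.
reductions = [midpoint(e["before"]) - midpoint(e["after"]) for e in disrupted_events]
avg_reduction = sum(reductions) / len(reductions)
print(f"Average assessed likelihood reduction: {avg_reduction:.0%}")  # ~70%
```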

Critics might argue that these measures are imprecise, and they’re right; these metrics aren’t exact. But they represent a significant step toward validating intelligence activities and securing essential resources.

By quantifying what would otherwise be invisible successes, the intelligence community can offer stakeholders a clearer understanding of the value it provides.

While a universal approach to capturing the warning dilemma's effects may not emerge soon, individual agencies can adopt these methods to track and report their most effective tactics.

This shift can improve oversight and empower decision-makers with a fuller picture of intelligence in action — often behind the scenes yet vital to public safety.


Jacob Scheidemann is an all-source intelligence analyst and intelligence management graduate student.

A former active-duty Army officer with intelligence leadership roles in INDOPACOM and CENTCOM, Jake routinely contributes national security writing to platforms including the Modern War Institute and the Military Times.


The views and opinions expressed here are those of the author and do not necessarily reflect the editorial position of The Defense Post.

