Module 03  ·  Analytic Tradecraft

The Discipline of Getting It Right

Good analysis does not happen by accident. It requires structured thinking, rigorous self-examination for cognitive bias, and writing that serves the reader rather than impresses them.

Foundation

What Is Analytic Tradecraft?

Analytic tradecraft is the set of standards, practices, and methods that intelligence analysts use to ensure their work is accurate, objective, timely, and well-sourced. It is, in short, the discipline of thinking well under uncertainty about adversaries who are actively trying to conceal their intentions and capabilities.1

Tradecraft exists because good analysis does not happen naturally. The human mind is subject to dozens of well-documented cognitive tendencies that distort judgment, especially when processing complex, ambiguous, or emotionally significant information. Left unchecked, these tendencies cause analysts to see patterns that are not there, miss patterns that are, and project their own cultural assumptions onto foreign actors whose thinking is fundamentally different.

The Intelligence Community has formalized its tradecraft standards in Intelligence Community Directive 203 (ICD 203), which establishes analytic standards that apply to all IC analytic products. These standards are not aspirational — they are the baseline against which AIC products should be measured.2

"The analyst who believes they are immune to bias is the analyst most at risk."

ICD 203: The Five Standards

⚖️

Objective

Uninfluenced by policy preferences or desired outcomes

🛡️

Independent

Free from political considerations; analysis follows the evidence

⏱️

Timely

Produced within the consumer's decision cycle

🔗

All-Source

Draws on all available collection disciplines, not just one

📋

Properly Sourced

Confidence levels and source quality explicitly stated in the product

The Enemy Within

Cognitive Biases in Intelligence Analysis

Cognitive bias is not a sign of intellectual weakness — it is a universal feature of human cognition. Before analysts can apply structured methods, they must first understand the ways their own minds work against them.3 The following biases are particularly consequential in intelligence work.

🔍

Confirmation Bias

The tendency to search for, favor, and remember information that confirms existing beliefs while discounting contradictory information. Perhaps the most pervasive analytical error: an analyst who has already formed a view will unconsciously weight new reporting to support it, even when evidence is genuinely ambiguous.

🪞

Mirror Imaging

Assuming that foreign actors think the way we do — that they share our values, decision-making frameworks, and definitions of rationality. Especially dangerous when analyzing adversaries with radically different cultures or ideologies. An analyst who assumes an adversary will not take a certain action because we would not take it is mirror imaging.4

⚓

Anchoring

Relying too heavily on the first piece of information encountered when making subsequent judgments. Once an initial estimate is formed, anchoring makes it psychologically difficult to revise it even as new, contradicting information arrives. This is one reason initial assessments about a target often persist long after they should have been revised.

🐑

Groupthink

The tendency for groups to suppress dissent and converge on consensus, prioritizing harmony over accuracy. Particularly dangerous in intelligence because analytical products are almost always produced by teams and reviewed by supervisors — processes that can amplify rather than correct initial errors if dissent is discouraged.5

💡

Availability Bias

Assessing the likelihood of events based on how easily examples come to mind. An analyst who has recently studied a particular threat scenario will tend to see that scenario everywhere, even when the evidence points elsewhere. Recency and emotional impact distort probability judgments.

🎭

Vividness Bias

Weighting vivid, emotionally engaging information more heavily than dry, statistical, or abstract information — even when the latter is more reliable and more relevant. A single dramatic anecdotal report can outweigh a body of systematic evidence in the mind of the analyst.

Structured Methods

Structured Analytic Techniques

Structured Analytic Techniques (SATs) are formalized methods for organizing and examining information systematically, making the analytical process more explicit, transparent, and resistant to cognitive bias. They do not replace analytical judgment — they discipline it.6

Analysis of Competing Hypotheses (ACH)

ACH is a structured method for evaluating multiple alternative explanations for a body of evidence. It was developed at the CIA as a direct response to the tendency of analysts to proceed from a single hypothesis they seek to confirm, rather than from multiple hypotheses they evaluate against the evidence.7

The power of ACH lies in step 3: it forces the analyst to think about diagnostic evidence — evidence that distinguishes between hypotheses — rather than evidence that merely confirms the preferred view.

  1. Identify all plausible hypotheses, including unlikely ones.
  2. List key pieces of evidence relevant to each hypothesis.
  3. For each piece of evidence, assess whether it is inconsistent with each hypothesis — not whether it supports the preferred one.
  4. Eliminate hypotheses that are inconsistent with the evidence.
  5. Draw a conclusion from the hypothesis that is least inconsistent with the totality of the evidence.
  6. Identify what new evidence, if it emerged, would cause you to revise your conclusion.
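The steps above can be sketched as a simple scoring matrix. The hypotheses, evidence items, and ratings below are hypothetical placeholders, not drawn from any real case; the point is only to show how counting inconsistencies (step 3) rather than confirmations changes which hypothesis survives.

```python
# Illustrative ACH matrix: rows are evidence items, columns are hypotheses.
# Ratings: "C" = consistent, "I" = inconsistent, "N" = neutral.
# All names and ratings here are invented for illustration.

HYPOTHESES = ["H1: exercise", "H2: attack preparation", "H3: deception"]

MATRIX = {
    "Troop movement near border":   ["C", "C", "C"],  # consistent with all: not diagnostic
    "No logistics buildup":         ["C", "I", "C"],
    "Public announcement of drill": ["C", "N", "C"],
    "Jamming of monitoring posts":  ["I", "C", "C"],
}

def score_hypotheses(matrix, hypotheses):
    """Count inconsistent evidence per hypothesis; fewer is better (steps 4-5)."""
    scores = {h: 0 for h in hypotheses}
    for ratings in matrix.values():
        for h, rating in zip(hypotheses, ratings):
            if rating == "I":
                scores[h] += 1
    return scores

def diagnostic_evidence(matrix):
    """Evidence rated identically against every hypothesis distinguishes nothing."""
    return [e for e, ratings in matrix.items() if len(set(ratings)) > 1]

scores = score_hypotheses(MATRIX, HYPOTHESES)
ranked = sorted(scores, key=scores.get)  # least inconsistent hypothesis first
```

Note that the troop movement, the most dramatic report, does no analytical work at all: it is consistent with every hypothesis, so it cannot discriminate between them.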

Key Assumptions Check

Every analytical product rests on assumptions — things the analyst takes as given, either because they cannot be directly verified or because they are so deeply embedded in the analytic framework that they have become invisible. Key Assumptions Check is a structured technique for surfacing those assumptions and evaluating their validity.8

This technique is particularly valuable as a quality-control step before a major product goes out, and as a review mechanism when a situation has changed significantly.

  1. Identify the key assumptions underlying a current assessment.
  2. Evaluate how confident you are in each assumption.
  3. Ask: what would happen to the assessment if this assumption turned out to be wrong?
  4. For high-impact, low-confidence assumptions, determine whether additional collection or analysis could test them.
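The procedure above amounts to triage on a register of assumptions. A minimal sketch, with invented assumptions and ratings, shows how step 4 falls out of steps 2 and 3:

```python
# Hypothetical key-assumptions register. Confidence and impact-if-wrong are
# the analyst's own judgments on a simple three-point scale.
ASSUMPTIONS = [
    # (assumption, confidence, impact_if_wrong)
    ("Adversary leadership is unified",       "low",    "high"),
    ("Collection coverage of region is good", "medium", "high"),
    ("Economic pressure constrains options",  "high",   "low"),
]

def flag_for_testing(assumptions):
    """Step 4: surface high-impact assumptions that are not held with
    high confidence, as candidates for additional collection or analysis."""
    return [assumption for assumption, conf, impact in assumptions
            if impact == "high" and conf != "high"]

flagged = flag_for_testing(ASSUMPTIONS)
```

The value is not in the code but in the discipline it encodes: the assumptions most worth testing are exactly those that would hurt most if wrong and that the analyst trusts least.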

Red Teaming

Red teaming involves assigning a team or individual to argue against the prevailing assessment — to make the strongest possible case for an alternative conclusion. A red team does not simply poke holes in the existing analysis; it develops a fully reasoned alternative view using the same evidence.9

Red teaming is most valuable when there is a risk of groupthink — when a team has coalesced around a consensus view without adequately examining alternatives, or when the stakes of being wrong are very high. The red team product should be treated as a genuine alternative assessment, not a strawman.

Devil's Advocacy

Similar to red teaming, devil's advocacy assigns an analyst to challenge the assumptions and logic of a prevailing assessment, identifying its weakest points and the evidence that could undermine it. Where red teaming produces an alternative assessment, devil's advocacy focuses on stress-testing the existing one.

Devil's advocacy is most useful earlier in the analytical process — before a consensus has fully formed and while the team is still willing to entertain challenge. It is an institutionally less threatening version of red teaming that can be incorporated into routine analytical workflows.

Indicators

This technique involves identifying specific, observable events or behaviors that would indicate movement toward or away from a particular outcome. Indicators are most valuable in warning analysis, where the goal is to detect preparation for an adversary action before it occurs.

Good indicators are:

  • Observable — capable of being detected by available collection
  • Specific — unambiguously either present or absent
  • Diagnostic — more consistent with one hypothesis than others
  • Timely — detectable early enough to be actionable
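The four criteria can serve as a literal checklist when building a warning watchlist. The indicator records below are invented examples; each flag represents the analyst's yes/no judgment against one criterion, not an automated measure.

```python
# Hypothetical candidate indicators scored against the four criteria:
# observable, specific, diagnostic, timely. Flags are analyst judgments.
INDICATORS = [
    {"name": "Reserve mobilization order published",
     "observable": True, "specific": True, "diagnostic": True, "timely": True},
    {"name": "General increase in regional tension",
     "observable": False, "specific": False, "diagnostic": False, "timely": True},
]

CRITERIA = ("observable", "specific", "diagnostic", "timely")

def usable(indicator):
    """An indicator earns a place on a warning list only if it meets all four criteria."""
    return all(indicator[criterion] for criterion in CRITERIA)

watchlist = [ind["name"] for ind in INDICATORS if usable(ind)]
```

A vague indicator like "rising tension" fails three of the four tests, which is precisely why it belongs in the discussion but not on the watchlist.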

The Final Mile

Writing for the Consumer

An analytically sound assessment that no consumer can understand or use has failed. Writing quality is not an aesthetic preference in intelligence — it is a functional requirement. Poor writing has directly contributed to intelligence failures by obscuring key judgments, burying the bottom line, and leaving consumers unable to distinguish high-confidence conclusions from speculation.10

🎯

Bottom Line Up Front (BLUF)

Lead with the conclusion. Senior consumers do not have time to read three pages of context before reaching the judgment. State the bottom line in the first sentence, then support it. This is the opposite of academic writing.

📊

Explicit Confidence Levels

ICD 203 requires that products characterize confidence in key judgments: high, moderate, or low. These are not hedges — they are substantive claims about evidential quality that allow consumers to weight judgments appropriately.

🔐

Source Attribution Without Exposure

Products should indicate, in general terms, what types of sources underlie key judgments — without exposing sources and methods. A consumer should know whether a judgment rests on a single source or multiple independent streams.

✍️

Active Voice and Plain Language

Passive constructions, jargon, and bureaucratic hedging are the enemies of clear intelligence writing. Say who did what. Avoid phrases like "it is assessed that" or "it cannot be ruled out that" as substitutes for an actual judgment.

Write at the level of clarity you would expect from a well-written news article — with the addition of appropriate sourcing and confidence characterization. If a smart generalist could not understand your key judgment after one reading, rewrite it.

Continue Training

Next: Intelligence Failures

Tradecraft defines how analysis should be done. The failure case studies show what happens when it is not — and the compounding systemic breakdowns that turn analytic errors into strategic catastrophes.

Footnotes
  1. Johnson, National Security Intelligence, 82.
  2. Johnson, National Security Intelligence, 84.
  3. Johnson, National Security Intelligence, 86–87.
  4. Johnson, National Security Intelligence, 88.
  5. Johnson, National Security Intelligence, 91.
  6. Johnson, National Security Intelligence, 94.
  7. Johnson, National Security Intelligence, 96.
  8. Johnson, National Security Intelligence, 97.
  9. Johnson, National Security Intelligence, 98.
  10. Johnson, National Security Intelligence, 100–101.