Good analysis does not happen by accident. It requires structured thinking, rigorous self-examination for cognitive bias, and writing that serves the reader rather than impressing them.
Analytic tradecraft is the set of standards, practices, and methods that intelligence analysts use to ensure their work is accurate, objective, timely, and well-sourced. It is, in short, the discipline of thinking well under uncertainty about adversaries who are actively trying to conceal their intentions and capabilities.1
Tradecraft exists because good analysis does not happen naturally. The human mind is subject to dozens of well-documented cognitive tendencies that distort judgment, especially when processing complex, ambiguous, or emotionally significant information. Left unchecked, these tendencies cause analysts to see patterns that are not there, miss patterns that are, and project their own cultural assumptions onto foreign actors whose thinking is fundamentally different.
The Intelligence Community has formalized its tradecraft standards in Intelligence Community Directive 203 (ICD 203), which establishes analytic standards that apply to all IC analytic products. These standards are not aspirational — they are the baseline against which IC products should be measured.2
"The analyst who believes they are immune to bias is the analyst most at risk."
Objective: Uninfluenced by policy preferences or desired outcomes
Independent of political consideration: Free from political pressure; analysis follows the evidence
Timely: Produced within the consumer's decision cycle
Based on all available sources: Draws on all available collection disciplines, not just one
Exhibits proper tradecraft standards: Confidence levels and source quality explicitly stated in the product
Cognitive bias is not a sign of intellectual weakness — it is a universal feature of human cognition. Before analysts can apply structured methods, they must first understand the ways their own minds work against them.3 The following biases are particularly consequential in intelligence work.
Confirmation bias: The tendency to search for, favor, and remember information that confirms existing beliefs while discounting contradictory information. Perhaps the most pervasive analytical error: an analyst who has already formed a view will unconsciously weight new reporting to support it, even when the evidence is genuinely ambiguous.
Mirror imaging: Assuming that foreign actors think the way we do — that they share our values, decision-making frameworks, and definitions of rationality. Especially dangerous when analyzing adversaries with radically different cultures or ideologies. An analyst who assumes an adversary will not take a certain action because we would not take it is engaged in mirror imaging.4
Anchoring: Relying too heavily on the first piece of information encountered when making subsequent judgments. Once an initial estimate is formed, anchoring makes it psychologically difficult to revise even as new, contradicting information arrives. This is one reason initial assessments about a target often persist long after they should have been revised.
Groupthink: The tendency for groups to suppress dissent and converge on consensus, prioritizing harmony over accuracy. Particularly dangerous in intelligence because analytical products are almost always produced by teams and reviewed by supervisors — processes that can amplify rather than correct initial errors if dissent is discouraged.5
Availability heuristic: Assessing the likelihood of events based on how easily examples come to mind. An analyst who has recently studied a particular threat scenario will tend to see that scenario everywhere, even when the evidence points elsewhere. Recency and emotional impact distort probability judgments.
Vividness bias: Weighting vivid, emotionally engaging information more heavily than dry, statistical, or abstract information — even when the latter is more reliable and more relevant. A single dramatic anecdotal report can outweigh a body of systematic evidence in the mind of the analyst.
Structured Analytic Techniques (SATs) are formalized methods for organizing and examining information systematically, making the analytical process more explicit, transparent, and resistant to cognitive bias. They do not replace analytical judgment — they discipline it.6
ACH is a structured method for evaluating multiple alternative explanations for a body of evidence. It was developed at the CIA as a direct response to the tendency of analysts to proceed from a single hypothesis they seek to confirm, rather than from multiple hypotheses they evaluate against the evidence.7
The power of ACH lies in its diagnosticity analysis: it forces the analyst to look for diagnostic evidence — evidence that distinguishes between hypotheses — rather than evidence that merely confirms the preferred view.
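The matrix at the heart of ACH can be sketched in a few lines of code. The hypotheses, evidence items, and consistency scores below are invented purely for illustration — this is a minimal sketch of the scoring logic, not a real analytical tool.

```python
# Illustrative ACH consistency matrix. Scores: +1 consistent with a
# hypothesis, 0 neutral, -1 inconsistent. All data here is invented.
HYPOTHESES = ["H1: routine exercise", "H2: attack preparation", "H3: coercive signaling"]

EVIDENCE = [  # (evidence item, scores aligned with HYPOTHESES)
    ("Troop movement near border",         [+1, +1, +1]),
    ("No logistics buildup observed",      [+1, -1, +1]),
    ("Public announcement of an exercise", [+1,  0, +1]),
    ("Unusual communications silence",     [-1, +1,  0]),
]

def inconsistency_counts(evidence):
    """Heuer's rule: rank hypotheses by how much evidence contradicts
    them, not by how much supports them."""
    counts = [0] * len(HYPOTHESES)
    for _, scores in evidence:
        for i, score in enumerate(scores):
            if score < 0:
                counts[i] += 1
    return counts

def diagnostic_items(evidence):
    """Evidence that scores identically across all hypotheses cannot
    distinguish between them and is therefore not diagnostic."""
    return [item for item, scores in evidence if len(set(scores)) > 1]

print(inconsistency_counts(EVIDENCE))  # [1, 1, 0] -> H3 has the least contradicting evidence
print(diagnostic_items(EVIDENCE))      # "Troop movement near border" drops out
```

Note that in this invented example the most dramatic report, the troop movement, is consistent with every hypothesis: ACH makes explicit that such evidence, however vivid, carries no weight in choosing between them.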
Every analytical product rests on assumptions — things the analyst takes as given, either because they cannot be directly verified or because they are so deeply embedded in the analytic framework that they have become invisible. Key Assumptions Check is a structured technique for surfacing those assumptions and evaluating their validity.8
This technique is particularly valuable as a quality-control step before a major product goes out, and as a review mechanism when a situation has changed significantly.
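A Key Assumptions Check can be captured in a simple data model. The three rating categories follow a common tradecraft convention (solid, correct with caveats, unsupported); the field names and example assumptions here are hypothetical, invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str        # the assumption, stated explicitly
    rating: str           # "solid", "caveated", or "unsupported"
    impact_if_wrong: str  # what happens to the assessment if this fails

def needs_reexamination(assumptions):
    """Return the assumptions not rated solid -- the ones a review
    should spend its time on."""
    return [a for a in assumptions if a.rating != "solid"]

checks = [
    Assumption("The regime values economic stability over expansion",
               "solid", "Deterrence logic of the assessment collapses"),
    Assumption("The adversary's decision-making is centralized",
               "caveated", "Signals from one ministry no longer represent intent"),
    Assumption("Current collection would detect a mobilization",
               "unsupported", "Warning time shrinks dramatically"),
]
flagged = needs_reexamination(checks)
print(len(flagged))  # 2 -> two of the three assumptions warrant review
```

The value of writing assumptions down this way is the forcing function: each one must be stated explicitly, rated, and paired with its consequence if wrong, which is exactly what the technique asks analysts to do on paper.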
Red teaming involves assigning a team or individual to argue against the prevailing assessment — to make the strongest possible case for an alternative conclusion. A red team does not simply poke holes in the existing analysis; it develops a fully reasoned alternative view using the same evidence.9
Red teaming is most valuable when there is a risk of groupthink — when a team has coalesced around a consensus view without adequately examining alternatives, or when the stakes of being wrong are very high. The red team product should be treated as a genuine alternative assessment, not a strawman.
Similar to red teaming, devil's advocacy assigns an analyst to challenge the assumptions and logic of a prevailing assessment, identifying its weakest points and the evidence that could undermine it. Where red teaming produces an alternative assessment, devil's advocacy focuses on stress-testing the existing one.
Devil's advocacy is most useful earlier in the analytical process — before a consensus has fully formed and while the team is still willing to entertain challenge. It is an institutionally less threatening version of red teaming that can be incorporated into routine analytical workflows.
This technique involves identifying specific, observable events or behaviors that would indicate movement toward or away from a particular outcome. Indicators are most valuable in warning analysis, where the goal is to detect preparation for an adversary action before it occurs.
Good indicators are observable (collection can actually detect them), diagnostic (their appearance points toward one outcome rather than several), unambiguous, and timely (they appear early enough for consumers to act).
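An indicator list can be maintained as a simple tally of what has been observed and which direction each observation points. The indicators below are hypothetical, invented for illustration; a real warning problem would have many more, each tied to specific collection.

```python
# Hypothetical indicator list for a warning problem. Each entry records
# the direction it points when observed: toward or away from the outcome.
INDICATORS = {
    "Reserve units mobilized":                       "toward",
    "Field hospitals established near the border":   "toward",
    "Senior leadership travels abroad as scheduled": "away",
    "Ammunition moved to forward depots":            "toward",
}

def warning_tally(observed):
    """Count observed indicators by direction. The tally cues analysts
    to revisit the assessment; it does not decide anything by itself."""
    tally = {"toward": 0, "away": 0}
    for name in observed:
        direction = INDICATORS.get(name)
        if direction:
            tally[direction] += 1
    return tally

print(warning_tally({"Reserve units mobilized",
                     "Ammunition moved to forward depots"}))  # {'toward': 2, 'away': 0}
```

Keeping the list explicit in this way also guards against hindsight distortion: the indicators are committed to in advance, so an analyst cannot quietly redefine what would have counted as warning after the fact.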
An analytically sound assessment that no consumer can understand or use has failed. Writing quality is not an aesthetic preference in intelligence — it is a functional requirement. Poor writing has directly contributed to intelligence failures by obscuring key judgments, burying the bottom line, and leaving consumers unable to distinguish high-confidence conclusions from speculation.10
Lead with the conclusion. Senior consumers do not have time to read three pages of context before reaching the judgment. State the bottom line in the first sentence, then support it. This is the opposite of academic writing.
ICD 203 requires that products characterize confidence in key judgments: high, moderate, or low. These are not hedges — they are substantive claims about evidential quality that allow consumers to weight judgments appropriately.
Products should indicate, in general terms, what types of sources underlie key judgments — without exposing sources and methods. A consumer should know whether a judgment rests on a single source or multiple independent streams.
Passive constructions, jargon, and bureaucratic hedging are the enemies of clear intelligence writing. Say who did what. Avoid phrases like "it is assessed that" or "it cannot be ruled out that" as substitutes for an actual judgment.
Write at the level of clarity you would expect from a well-written news article — with the addition of appropriate sourcing and confidence characterization. If a smart generalist could not understand your key judgment after one reading, rewrite it.
Tradecraft defines how analysis should be done. The failure case studies show what happens when it is not — and the compounding systemic breakdowns that turn analytic errors into strategic catastrophes.
Module 04: Intelligence Failures →