

Intelligence and National Security


Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/fint20

Intelligence Failures: What Are They Really and What Do We Do about Them?
Mark A. Jensen
Published online: 27 Apr 2012.

To cite this article: Mark A. Jensen (2012) Intelligence Failures: What Are They Really
and What Do We Do about Them?, Intelligence and National Security, 27:2, 261-282, DOI:
10.1080/02684527.2012.661646

To link to this article: http://dx.doi.org/10.1080/02684527.2012.661646

Intelligence and National Security
Vol. 27, No. 2, 261–282, April 2012

Intelligence Failures: What Are They Really and What Do We Do about Them?

MARK A. JENSEN*

ABSTRACT Intelligence failures occur for more reasons than just sloppy tradecraft and are
often attributable to decision-makers as well as to the intelligence community. Before
exploring the subjective nature of intelligence failures, this article first discusses three
foundational concepts underlying them: process vs. product, fact vs. judgment, and
prediction. It then outlines major components of intelligence failures: accuracy, surprise,
and the role of decision-makers, particularly unrealistic expectations and the use or non-
use of intelligence. The article concludes with a discussion of what the intelligence
community and decision-makers can do to deal with these three components.

How often is the trite statement repeated that there are only two possible
outcomes in national security matters: policy successes and intelligence
failures? Even if there happen to be policy failures, faulty intelligence is
often cited as a major cause for those unsuccessful policies.1 Intelligence
failures seem to be ubiquitous. Volumes have been written about
intelligence failures, a subject that is ‘perhaps the most academically
advanced field in the study of intelligence’.2 Even multi-day conferences
have been devoted to the subject.3 We seem to be fascinated by the subject. Even President John F. Kennedy, speaking to a CIA audience, has acknowledged: 'It is not always easy. Your successes are unheralded – your failures are trumpeted.'4

*Email: [email protected]
This work was authored as part of the Contributor's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
1 Some examples include Operation Zapata in 1961 (Bay of Pigs invasion), Operation Eagle Claw in 1980 (Iranian hostage rescue attempt), and Operation Continue Hope in 1993 (attempt to subdue Aidid-led guerrilla attacks in Somalia).
2 Woodrow J. Kuhns, 'Intelligence Failures: Forecasting and the Lessons of Epistemology' in Richard K. Betts and Thomas G. Mahnken (eds.) Paradoxes of Strategic Intelligence: Essays in Honor of Michael I. Handel (London: Frank Cass 2003) p.80; see also John A. Gentry, 'Intelligence Failure Reframed', Political Science Quarterly 123/2 (2008) pp.247–70.
3 For example, 'Military Intelligence Failures', University of California, Davis, 9–11 June 2005 and 'Intelligence Failures and Cultural Misperceptions: Asia, 1945 till the present', Netherlands Intelligence Studies Association, 27–28 September 2008. An example of a panel on the subject in a larger conference is 'American Intelligence Failures and Successes: The Lessons for the Future?', part of the conference 'Breaking Down the Walls: Increasing the Discourse in the American Policy Making Community', Arizona State University, 31 March–2 April 2010.
It is no wonder that the intelligence community (IC) often seems on the
defensive about what it does. True, intelligence activities, as typical of all
human endeavors, rarely achieve perfect results. But with so much riding on
the success of this critical element of national security, one would hope that
failure would be the exception.
Besides shoddy tradecraft, a major source of intelligence failures stems
from a disconnect between what the IC can legitimately provide and what
some decision-makers5 or vocal journalists expect. Contrary to what may be
desired, omniscience about the past and present and clairvoyance about the
future are not legitimate expectations of the IC. Although this seems like
common sense, a major method for dealing with and minimizing intelligence
failures is the establishment of a clear and common understanding of
reasonable expectations of intelligence among the IC and decision-makers –
and as secondary consumers, the media and the public. Failure to meet this
standard would legitimately constitute an intelligence failure. Therefore, one of
the IC’s primary missions has to be the education of the decision-makers it
serves about intelligence capabilities and limitations.
This article focuses exclusively on failures of analysis and the preparation
and delivery of intelligence products to decision-makers, not the myriad of
other activities in which the IC engages such as collection, counter-
intelligence, covert action, and enterprise management. It first outlines three
foundational concepts that underlie the discussion of intelligence failures:
process vs. product, fact vs. judgment, and prediction. Understanding the
ramifications of the latter concept is particularly important because so many
citizens and consumers of intelligence believe prediction is the primary
function of the IC. A discussion of the nature of intelligence failures follows
to include three major components of failure: accuracy, surprise, and IC
interaction with decision-makers. The article concludes with recommenda-
tions on what the IC and decision-makers can do about intelligence failures.

Future?’, part of the conference ‘Breaking Down the Walls: Increasing the Discourse in the
American Policy Making Community’, Arizona State University, 31 March–2 April 2010.
4
Center for the Study of Intelligence, ‘Our First Line of Defense’: Presidential Reflections on
US Intelligence (Washington, DC: Center for the Study of Intelligence 1996); one rare
publicly known intelligence success lead to the demise of Osama Bin Laden on 2 May 2011.
See Ken Dilanian, ‘In Finding Osama bin Laden, CIA Soars from Distress to Success’, Los
Angeles Times, 8 May 2011, 5www.latimes.com/news/nationworld/nation/la-na-bin-laden-
cia-20110508,0,7184857.story4 (accessed 9 May 2011).
5
Other terms often used to designate users of intelligence who are outside of the IC include
policymaker, consumer, customer, client, warfighter, or law enforcement officer. Users inside
the IC are intermediate consumers, who use intelligence information provided to them to
craft other products for the ultimate consumers – those outside the IC. For simplicity
purposes, the term ‘decision-maker’ will be used throughout this article to refer to individuals
outside the IC who use intelligence products.
Intelligence Failures 263

What are Intelligence Failures?


What are intelligence failures? Why do there seem to be so many? What
causes them? Most importantly, what do we do about them? Certainly the
answers to these questions should be of utmost importance to the IC. There
are numerous examples throughout history of events that are frequently
decried as intelligence failures, e.g. the detonation of the first Soviet atomic
bomb in 1949, the launch of Sputnik in 1957, the Tet Offensive in 1968, the
seizure of the US embassy in Tehran in 1979, the demise of the Soviet Union
in 1991, the Al Qaeda attacks on the US homeland in 2001, and the ouster
of President Hosni Mubarak of Egypt in 2011. Undoubtedly many US
decision-makers and even some intelligence professionals were surprised by
some of the foregoing. Because of the shock some of these surprises have
caused both to national security professionals and to the public in general,
the nation’s senior leaders often establish high-level commissions to
investigate the sources of these failures. Naturally many others, including
Congress, have declared their views on causes, impact, and remedies in
various books, reports, and periodicals.6
Yet, with all this study and attention on intelligence failures, why can we
not eliminate them? Are they simply a product of human frailty or are there
still major systemic issues hindering this goal? Before we can reduce these
failures, we at least must understand their nature and causes. From the
perspective of a decision-maker seeking clarity, an intelligence failure results
simply when the intelligence input into the decision-making process is
lacking or unsatisfactory. Of course, what is deemed satisfactory depends on
the individual decision-maker and the specific situation at hand. Hence,
intelligence professionals must understand the needs and preferences of
those to whom they provide intelligence products.
However, some decision-makers, or other authors writing on their behalf
for public consumption, seem to find fault incessantly with the IC and deem
any intelligence short of omniscience and clairvoyance to be unsatisfactory.7
This especially seems to be the case if there is an associated policy that does
not appear to be succeeding. For other than divine intelligence officers,
this standard is incredibly high and in fact impossible to meet. Before
discussing a legitimate standard, however, we must review three important
foundational concepts related to intelligence failures: 1) process vs. product, 2) fact vs. judgment, and 3) prediction.

6 For example: Bill Gertz, Breakdown: How America's Intelligence Failures Led to September 11 (Washington, DC: Regnery Publishing, Inc. 2002); Senate Select Committee on Intelligence, Report on the U.S. Intelligence Community's Prewar Intelligence Assessments on Iraq, 7 July 2004; Angelo M. Codevilla, Why U.S. Intelligence is Inadequate, and How to Fix It, Occasional Papers Series, No. 2 (Washington, DC: The Center for Security Policy, July 2004).
7 For example, the IC was severely criticized because it did not discover the 'Christmas bomber' until after he had boarded a transatlantic flight on 25 December 2009 and attempted to ignite explosives hidden in his underwear. The IC has also been denounced for numerous National Intelligence Estimates (NIE), although often for perceived politicization (e.g. the 1995 NIE on the ballistic missile threat or the 2007 NIE on Iranian weapons of mass destruction (WMD)).

Foundational Concepts
Process vs. Product
Kristan Wheaton discusses extensively the concept of intelligence process
and product and explains that process is more important because it can
influence multiple products.8 Although this logic makes sense, it will not
make decision-makers happy if the product does not meet their needs. It is
disingenuous and self-serving for the IC to claim it did its work right and
hence no failure can result, regardless of the utility of the product.
Intelligence failures ascribed to analytical process can be categorized into
areas such as: lack of creativity, unconfirmed assumptions, groupthink,
faulty evidence weighting, data misinterpretation or erroneous linkages,
insufficient source validation, signal-to-noise problems, negligence in
considering denial and deception, mirror imaging, overlooked gradual
trends, over/underestimation, etc.9 Human error is often the source of
process failures, although systemic or organizational flaws can be culpable,
despite the IC’s decades of experience trying to set the proper environment
for effective analysis. Although the IC cannot eliminate all intelligence
failures as thoroughly described in Richard K. Betts’ seminal work on the
subject,10 at least the IC can strive to minimize them as much as possible.
The Intelligence Reform and Terrorism Prevention Act of 2004 stipulated
that the Director of National Intelligence (DNI) ‘assign an individual or
entity to be responsible for ensuring that finished intelligence products
produced by any element or elements of the intelligence community are
timely, objective, independent of political considerations, based upon all
sources of available intelligence, and employ the standards of proper analytic
tradecraft.’11 Tradecraft standards naturally call for processes that lead to
attributes expected of any written product, i.e. clarity and completeness,
with well reasoned and supported arguments. Analytical intelligence
products, however, must also be relevant to the needs of decision-makers and
delivered on a timely basis. More importantly, they must characterize the
nature of uncertainty, describe the reliability of key sources, state confidence levels in judgments,12 and clearly distinguish them from assertions of fact. The absence or inadequacy of any of these attributes would make an intelligence product less useful and subject to an accusation of intelligence failure. Accuracy is another important attribute, but because it is often at the heart of success or failure, it requires a separate discussion below. The DNI has set up an office to review analytic products and take steps to minimize analytic process failures. In addition, most IC organizations have set up a lessons learned function to identify and retain best practices and correct errors from the past to benefit the present and future.

8 Kristan J. Wheaton, 'Evaluating Intelligence: Answering Questions Asked and Not', International Journal of Intelligence and Counterintelligence 22/4 (2009–2010), p.618.
9 Numerous publications have been written on process failures and how to fix them, such as: Mary Sandow-Quirk, 'A Failure of Intelligence', Prometheus 20/2 (2002), pp.131–42; Uri Bar-Joseph and Jack S. Levy, 'Conscious Action and Intelligence Failure', Political Science Quarterly 124/3 (2009) pp.461–88; 'Intelligence Failures: Some Historical Lessons, Parts 1 and 2', The Estimate 15/12 (2002); Michael A. Turner, Why Secret Intelligence Fails, Rev. ed. (Washington, DC: Potomac Books 2006).
10 Richard K. Betts, 'Analysis, War, and Decisions: Why Intelligence Failures Are Inevitable', World Politics 31/1 (1978) pp.35–54; see also a relevant article with a similar theme: Russ Travers, 'The Coming Intelligence Failure', Studies in Intelligence 40/2 (1996) pp.27–34.
11 Intelligence Reform and Terrorism Prevention Act of 2004, Public Law 108–458, December 17, 2004, Section 1019 (a).


Typically poor products, whether written or oral, result from bad or
poorly executed processes. However, decision-makers can still deem
products useless regardless of how flawlessly they were prepared. These
include products that are irrelevant, late, or do not directly answer their
questions or provide insight desired. One senior analyst has concluded that
‘failing to identify a specific audience and an intelligence question up front is
often at the root of the weakest analytic efforts’.13 Decision-makers also
deem intelligence products to be failures if they provide little value added
beyond what is available elsewhere, particularly from open sources.
A product should not be considered a failure solely because its content
does not support a decision-maker’s policy preferences.14 Politicization, the
practice of intelligence professionals bending intelligence to meet decision-
maker needs or decision-makers focusing on selected intelligence products or
passages thereof for their own political purposes, is unacceptable, but
unfortunately still occurs. However, this topic is the subject of other articles, many of which have already been written.15
Figure 1 summarizes a decision-maker’s view of intelligence processes and
products. Ideally, a decision-maker should not have to know much or care
about the processes employed; credible and useful products are of greatest

concern. However, as noted in Figure 1, only when the process and product are both good is the decision-maker fully satisfied. In today's age of ubiquitous information, 'getting the answer right is not enough; analysts must also "show their work" in order to demonstrate that they were not merely lucky'.16 The figure depicting subjective categorizations of process and product is of course an oversimplification of the complex intelligence environment.

Figure 1. Decisionmaker View of Intelligence Processes and Products.

12 Similar words are often used interchangeably to express the same thought about analytical conclusions, e.g. estimate, inference, assessment, opinion, or judgment. Critics would also use words such as guess, conjecture, speculation, or assumption. 'Estimate' is often used in connection with products about the future, e.g. National Intelligence Estimate. For purposes of this article, the word 'judgment' will generally be used to describe analytical conclusions about the past and present; 'estimate' will be used for judgments about the future.
13 Martin Petersen, 'What I Learned in 40 Years of Doing Intelligence Analysis for US Foreign Policymakers', Studies in Intelligence 55/1 (2011) p.17.
14 For a discussion of how the White House did not appreciate DNI James Clapper's view of Gadhafi's anticipated endurance, see Michael V. Hayden, 'Is it OK for Spy Agency Chiefs to Tell the Truth?' CNN, 22 April 2011, <http://www.cnn.com/2011/OPINION/04/11/hayden.intelligence.truth/index.html?hpt=C2> (accessed 11 April 2011).
15 Three excellent articles on politicization are: Robert M. Gates, 'Guarding Against Politicization', Studies in Intelligence 36/5 (1992) pp.5–13; Michael Handel, 'The Politics of Intelligence', Intelligence and National Security 2/4 (1987) pp.5–46; Richard K. Betts, 'Politicization of Intelligence: Costs and Benefits' in Richard K. Betts and Thomas G. Mahnken (eds.) Paradoxes of Strategic Intelligence: Essays in Honor of Michael I. Handel (London: Frank Cass 2003) pp.59–79.
Fact vs. Judgment
Another fundamental concept pertinent to understanding intelligence failures
is that judgment is an integral part of intelligence products. Intelligence
analysts attempt to describe and explain past and present situations to the
best of their ability and to the extent of their collection holdings.17 They also
legitimately fill in some gaps with reasonable inference about the truth. These
judgments are not necessarily truth, but may be. Hence the phrase ‘speaking
truth to power’, demanded so often of intelligence seniors by congressional
overseers, is in fact a misnomer because intelligence consists not only of
known truth, but also judgments about the truth.
Analysts also make judgments about future situations, typically called
estimates. Although situations about the past and present are theoretically
knowable, there are no facts about the future. Although past collection and
analysis may be able to give insight about the future, discontinuities may
cause the future to look very different from the past and present. Estimates about the future are largely the domain of the National Intelligence Council (NIC), but other IC organizations also concern themselves with the future and attempt to describe it. Estimating adversaries' intentions for future action is clearly an intelligence function.18

16 Wheaton, 'Evaluating Intelligence', p.617.
17 For products dealing with the past and present, certainty is theoretically possible. However, truth may not be knowable in a practical sense. Insufficient funds, personnel, time, or access may preclude the IC from obtaining the answer in sufficient time to impact a decision. In addition, unwilling adversaries may employ denial and deception techniques to prevent collection.
Language used to make judgments about the past or present or estimates
about the future can be ambiguous, so analysts make great efforts to define
the terms they use to minimize potential misunderstanding. Unfortunately
there are numerous words along the ‘likelihood continuum’ to describe the
future that have different meanings to different people. Some of these terms
are general in nature, e.g. horizon/environmental scan, prospects, outlooks, possible outcomes, or anticipated situations. Others describe possible futures
with varying degrees of certainty about any one state: speculations, alternate
futures, projections, or forecasts. Still another is based on probabilities19 and
implies a greater degree of quantifiability: prediction. It is this latter term
that seems to cause the most problems for the IC.
Prediction
Prediction is a specific type of judgment about the future, which many
citizens seem to believe is a primary purpose for the IC. Prediction in general
is a difficult task, whether establishing odds for sporting events, projecting
future revenue streams in business ventures, or anticipating the future in the
national security domain. As Yogi Berra sagely observed: ‘It’s tough to make
predictions, especially about the future.’20 Every year many odds-makers
with years of experience and extensive data inaccurately estimate which team
will win the Super Bowl, which is just a simple binary prediction. Because
prediction is so difficult, one could even question whether the IC should be in
the business of predicting. Asking the IC to assume the role of soothsayers
increases the likelihood of intelligence failure. Despite this risk, the answer to
the question simply depends on whom one asks.
‘Did the CIA Blow the Call?’21 is the title of an article that implies CIA’s
job was to predict 9/11 and may have erred. Nearly a decade later, President
Barack Obama stated that he was '"disappointed with the intelligence community" and its failure to predict the unrest that led to the ouster of President Zine el-Abidine Ben Ali in Tunisia. Emphasizing that policy decisions by the president and Congress depend on timely intelligence analysis, Senator Dianne Feinstein [chair of the Senate Select Committee on Intelligence] bluntly stated, "I have doubts whether the intelligence community lived up to its obligation in this area"'.22 Numerous other articles describe the IC's role as a predictor.23

18 Robert Mandel, 'On Estimating Post-Cold War Enemy Intentions', Intelligence and National Security 24/2 (2009) pp.194–215.
19 For a useful discussion on probabilities, see Joab Rosenberg, 'The Interpretation of Probability in Intelligence Estimation and Strategic Assessment', Intelligence and National Security 23/2 (2008) pp.139–52.
20 Yogi Berra Quotes, <http://www.digitaldreamdoor.com/pages/quotes/yogiberra.html> (accessed 7 April 2011).
21 Dusko Doder, 'Did the CIA Blow the Call?' review of: Breakdown: How America's Intelligence Failures Led to September 11, by Bill Gertz, The Nation, 4 November 2002.
Even the CIA director discusses the IC’s prediction function. In the annual
threat assessment hearing before the House Permanent Select Committee on
Intelligence (HPSCI), Director Leon Panetta admitted regarding the turmoil
in Egypt before Mubarak stepped down as president: ‘The intelligence
community has to do a better job collecting information that will predict
uprisings like those going on in Egypt’.24 He was castigated at that same
hearing for also claiming that there was a ‘strong likelihood’25 that Mubarak
would step down before day’s end. Mubarak did not, contrary to Panetta’s
assertion. Ironically he was vindicated when Mubarak did step down the
following day, even though his ‘prediction’ was off by a day.
At the same hearing, the DNI, James Clapper, stated in seeming
contradiction to Panetta: ‘We can reduce uncertainty, but cannot eliminate
it. We are not clairvoyant’.26 Panetta subsequently softened his earlier
statement before the HPSCI by comparing the prediction of political unrest
to earthquakes:

People can tell you where the tremors are, they can tell you where the
fault lines are, they can tell you what the past is, they can even tell you
that the threat of something happening is close. But they can’t tell you
exactly when an earthquake is going to take place. Those are the kinds
of things that are obviously very tough for intelligence to predict. But I
think our job is to collect as much as we can to know those triggers.27

22 Marcus Baram, 'CIA's Mideast Surprise Recalls History of Intelligence Failures', The Huffington Post, 14 February 2011, <http://www.globalsecurity.org/org/news/2011/110214-egypt-intelligence.htm> (accessed 20 May 2011).
23 See for example: Richard K. Herrmann and Jong Kun Choi, 'From Prediction to Learning', International Security 31/4 (2007) pp.132–61; Puong Fei Yeh, 'Using Prediction Markets to Enhance US Intelligence Capabilities', Studies in Intelligence 50/4 (2006) pp.37–49; Captain Paulo Shakarian, 'The Future of Analytical Tools: Prediction in a Counterinsurgency Fight', Military Intelligence Professional Bulletin 34/4 (2008) pp.80–7.
24 Lisa Daniel, 'Panetta: Intelligence Community Needs to Predict Uprisings', Defense.gov, 11 February 2011, <http://www.defense.gov/news/newsarticle.aspx?id=62790> (accessed 26 April 2011).
25 Greg Miller, 'Faulty Comment on Egypt by Panetta Leads to Confusion', The Washington Post, 11 February 2011, <http://www.washingtonpost.com/wp-dyn/content/article/2011/02/10/AR2011021007642.html> (accessed 7 April 2011).
26 Kimberly Dozier, 'Spy Chiefs Defend Mideast Work but Miss Egypt Call,' Associated Press, 11 February 2011, <http://www.msnbc.msn.com/id/41529505/ns/politics-more_politics/> (accessed 26 April 2011); see also James Clapper, 'Remarks as delivered by James R. Clapper Director of National Intelligence, Open Hearing on the Worldwide Threat Assessment House Permanent Select Committee on Intelligence, February 10, 2011', <http://www.dni.gov/testimonies/20110210_testimony_hpsci_clapper.pdf> (accessed 27 April 2011).
27 Daniel, 'Panetta'. Earthquakes, like weather, are caused almost exclusively by the forces of nature and the principles of physics. They should be theoretically easier to predict than human action, which is subject to the whims of people who may change their minds at any time. Even though physical phenomena are more predictable, weather forecasters are often wrong, even about events only several days into the future.

Although the IC can help decision-makers understand impending events, it cannot predict them. 'As Director Clapper stated at his confirmation hearing
[in July 2010], the intelligence community is not in the prediction
business.’28 Furthermore he declared: ‘I think too often, people assume that
the intelligence community is equally adept at divining both secrets (which
are theoretically knowable) and mysteries (which are generally unknowa-
ble) . . . but we are not. The best that intelligence can do is to reduce
uncertainty for decisionmakers . . . but rarely can intelligence eliminate such
uncertainty’.29 Even a prime consumer of intelligence products, Representa-
tive Mike Rogers, HPSCI chairman, recognizes: ‘nobody can read a crystal
ball’.30 A MITRE study commissioned by the Defense Department has
concluded that ‘it is simply not possible to validate (evaluate) predictive
models of rare events [e.g. a terrorist attack using a weapon of mass
destruction (WMD)] that have not occurred, and unvalidated models cannot
be relied upon. An additional difficulty is that rare event assessment is
largely a question of human behavior, in the domain of the social sciences,
and predictive social sciences models pose even greater challenges than
predictive models in the physical sciences.’31 Philip Zelikow, executive
director of the 9/11 Commission, has also stated: ‘It’s a sucker’s game to
predict the future; the IC should just coach the jockey, not set the odds.’32
Lastly Jim Steinberg, Deputy Secretary of State, similarly quipped, but with a
serious message: if intelligence professionals could predict accurately, they
would probably be in Las Vegas getting rich.33
Predicting wrongly can certainly make one look stupid. Just ask any soccer
goalie who erroneously guesses into which corner the ball will be kicked on a
penalty kick and looks foolish when he lunges to one side as the ball is kicked to the other side. Without substantial and credible evidence, the IC
treads on shaky ground when it starts predicting individual events.
Forecasting possible futures, even with estimates of likelihood, and

otherwise peering into the future are appropriate IC functions; prediction is not.34

28 Marc Ambinder, 'An Intelligence Failure in Egypt?' The Atlantic, 5 February 2011, <http://www.theatlantic.com/politics/archive/2011/02/an-intelligence-failure-in-egypt/70820/> (accessed 26 May 2011).
29 James Clapper, 'Statement for the Record by James R. Clapper, Jr., Nominee for the Position of Director of National Intelligence Before the Senate Select Committee on Intelligence United States Senate, 20 July 2010', <http://www.dni.gov/testimonies/20100720_testimony.pdf> (accessed 27 April 2011).
30 Dan De Luce, 'CIA Chief's Egypt Remark Causes Confusion', The Sydney Morning Herald, <http://news.smh.com.au/breaking-news-world/cia-chiefs-egypt-remark-causes-confusion-20110211-1apfo.html> (accessed 26 April 2011).
31 D. McMorrow, Rare Events (McLean, VA: JASON, The MITRE Corporation 2009) p.7.
32 Philip Zelikow, '20th Century and the Onset of the Cold War', speech sponsored by Brookings Institution, Keswick, VA, 16 March 2011.
33 Jim Steinberg, 'Deputy Secretary of State Steinberg on Intelligence Support to Policymakers', speech, Central Intelligence Agency auditorium, 18 February 2011.

The Nature of Failures


In general, intelligence failures result when analytic judgments turn out to be
‘inaccurate’ in a material way or a significant surprise has occurred.35
Accuracy, as noted above, is a desirable attribute for intelligence products.
If a product turns out to be inaccurate compared with subsequently
discovered truth, then the ‘failure’ is attributable to a problem with


judgment. Naturally, a shortage of credible collected evidence or even a glut
of contradictory evidence contributes to the erroneous judgment, but the
analyst nonetheless felt sufficient data were available to make a judgment,
tenuous though it may have been.
Surprise can occur when gaps on any one topic are too large and no
judgment can be rendered. More often, it occurs when substantial
differences between judgments and truth are discovered. The two most
significant surprises in US history, Pearl Harbor and 9/11, fall into this latter
category. Prior to these events, the intelligence system was aware of possible
threats of the nature eventually carried out, but did not have adequate detail
in advance of the attacks.
Regardless of how well the IC does its job, intelligence failures to some
degree are caused by decision-makers’ unrealistic expectations about
intelligence and their use or non-use of intelligence. Decision-makers are
the ones in the end who ‘consume’ the intelligence and either cause action or
inaction based on the information provided. However, just because they
deem a situation an intelligence failure does not necessarily make it so.
Each of these three areas, accuracy, surprise, and the decision-maker role
in intelligence failures, is discussed in greater detail below.

Accuracy
It seems intuitive that accuracy is an indispensable quality for intelligence
products; basing national security decisions on fiction is nonsensical.
Statements of fact in intelligence products about the past and present can
be disputed, but should be resolvable with existing knowledge. The
IC should have the highest confidence in the veracity of information that it
asserts is fact. Of course, accuracy depends on the reliability of source
information.

34 See also Paul R. Pillar, 'Predictive Intelligence: Policy Support or Spectator Sport?' SAIS Review 28/1 (2008) pp.25–35.
35 This is consistent with the definition of intelligence failures provided by Ehud Eiran, 'Preventing the "Next Intelligence Failure"? The Three Tensions of Investigating Intelligence Failures', paper presented at the 46th Annual Convention of the International Studies Association, Honolulu, Hawaii, 1–5 March 2005, p.4. Parenthetically, the three tensions are time, purpose, and process.

The accuracy of judgments in intelligence products, on the other hand,


must be viewed differently because they cannot be confirmed or refuted
until after an intelligence product has been delivered to decision-makers.
It is possible that judgments about past, present, or future situations
may prove to be accurate, but this determination may arrive too late
for this new information to be relevant to the decision at hand. It is
likewise possible that the accuracy of these judgments can never be
ascertained.

Specificity
A subset of accuracy, often erroneously used interchangeably with it, is the
concept of specificity. It describes the precision or granularity of detail
about a statement of fact or a judgment, e.g. located in a city,
neighborhood, or building; or to the nearest month, day, or hour. Most
decision-makers would naturally prefer intelligence as specific as possible,
especially when being warned; of course this is not always possible. If a
judgment is essentially right, but the granularity is too coarse, one might
question whether this is an intelligence failure. The answer has to depend
on the needs of the decision-maker and the purpose for which the
intelligence is being used.
For instance, the IC did warn about potential domestic airline attacks
prior to 9/11, but obviously did not specify exactly when or where, a most
difficult task. The Senior Executive Intelligence Brief (SEIB) contains similar
information to the President’s Daily Briefing less some of the most sensitive
information. The 9/11 Commission reviewed a number of these SEIBs
written in the spring and summer of 2001 with titles such as: ‘Bin Ladin
Attacks May Be Imminent’ (23 June); ‘Bin Ladin Planning High-Profile
Attacks’ (30 June); ‘Planning for Bin Ladin Attacks Continues, Despite
Delays’ (2 July).36 These documents did foreshadow an impending attack,
but since they did not specify when or where, policymakers could not
effectively prepare. Hence, because many policymakers and the public were
surprised when the attacks did occur, they concluded there was an
intelligence failure. Details of the warnings were not sufficiently granular.37
Track Record
If ‘Monday-morning quarterbacks’ wish to compare IC judgments to
subsequent ground truth, they will undoubtedly find that in many cases one
or more components of these judgments will turn out to be inaccurate (e.g. how, where, when).38 How one evaluates the accuracy of a judgment with so many components is a difficult task. Assigning 'partial credit' to a judgment for being accurate about some components hardly seems satisfying or useful. One truth is certain: 'in a contest of predictive accuracy, hindsight will win every time'.39

36 Philip Shenon, The Commission: The Uncensored History of the 9/11 Investigation (New York: Twelve 2008) pp.151–2.
37 In this specific case, the execution of intelligence processes could have been better and some authors have concluded the attacks could have been prevented had the IC exercised more due diligence in following up leads and sharing information more widely; see for example Bob Graham and Jeff Nussbaum, Intelligence Matters: The CIA, the FBI, Saudi Arabia, and the Failure of America's War on Terror, Reprint ed. (Lawrence, KS: University Press of Kansas 2008).
Sometimes the IC’s ‘track record’ of how accurately it judges or estimates
is compared to baseball batting averages,40 where success in three of ten
attempts is considered commendable. Others state that the fielding average is
a much better comparison where a 98 per cent success rate is more the
norm.41 Such analogies, however, are not suitable because these metrics for
both professions are not comparable. In the case of baseball, players fail
when they make mental or physical mistakes in applying the laws of physics
and opposing players do not make a mistake.42 In the case of intelligence,
analysts may perfectly execute all analytic processes and still make
judgments that end up not being true when compared to subsequent ground
truth. The lack of accuracy in this case is due to inaccurate, incomplete, or
conflicting information available, which is beyond the control of analysts.
Collectors certainly attempt to provide as much reliable information as
possible for analysts to consider. However, because of constrained resources,
the nature and ‘knowability’ of needed data, or for other reasons, analysts
will never have all the relevant information they desire. So they make
judgments when facts are scarce.
Omniscience about the past and present and clairvoyance about the future
(i.e. perfect accuracy) cannot be the standard by which judgments are
evaluated. There are essentially an infinite number of details about the IC’s
record against which it could be scrutinized. Furthermore, 'accuracy
rate’ cannot be the right metric for analysis. Establishing such a metric
would be counterproductive for several reasons. First, achieving total
accuracy is beyond the control of analysts. Second, two or more analysts can
make different judgments based on the same set of facts. Third, such a metric
would convey the wrong message to decision-makers about how they should
evaluate the utility of the intelligence they receive. Fourth, pressure to
eliminate inaccurate judgments would make analysts gun-shy about making them for fear of being wrong. This in turn would lead to more generic and worthless judgments, certainly in the eyes of decision-makers. Fifth, judgments acted upon which modify the threat could result in charges that the original judgments were in error (i.e. warning paradox). Lastly, 'referring to intelligence as being "right or wrong" makes no sense. Such an appraisal is overly simplistic and omits critical evaluative information'.43

38 For examples of 'accurate' and 'inaccurate' NIEs, see Loch Johnson, 'Weighing the Value of Estimates' in The Threat on the Horizon: An Inside Account of America's Search for Security after the Cold War (Oxford: Oxford University Press 2011) pp.173–4.
39 Robert Callum, 'The Case for Cultural Diversity in the Intelligence Community', International Journal of Intelligence and Counterintelligence 14/1 (2001) p.25.
40 Robert Jervis, Why Intelligence Fails: Lessons from the Iranian Revolution and the Iraq War (Ithaca, NY: Cornell University Press 2010) p.178.
41 Mike Hayden, 'Transcript of Director Hayden's Interview with Fox News', 21 January 2009, <https://www.cia.gov/news-information/press-releases-statements/cia-director-interview-with-fox-news.html> (accessed 3 June 2011).
42 An opponent's mistake here means more than just what is scored as an official error. These mistakes can include pitching too close to the center of the plate, positioning oneself defensively without considering a batter's hitting tendencies, overrunning a base, or turning the wrong way to pursue a fly ball, etc.
If decision-makers insist on using accuracy as a benchmark for success of
intelligence, they will always be able to find examples of failures. There are
however other ways of dealing with the concept of accuracy, as will be
discussed below.

Surprise
Warning has been an intelligence function for millennia. It is similar to, but
not the same as predicting. It consists of providing an alert to decision-
makers so that they are not surprised and can make preparations, time
permitting.44 The IC can strive to minimize surprises, but cannot completely
eliminate or prevent them; again, clairvoyance is not an IC capability.
They often occur when intelligence products contain insufficient detail or
are determined to be materially inaccurate when eventual truth is learned,
especially on a topic about which a decision-maker has urgent information
needs.
When warning includes an assessment of likelihood, it ventures into
forecasting mode, perhaps even close to prediction, depending on the
specificity of the warning. A warning ideally answers at least the ‘what’ and
‘how’ strategic-level questions. The more difficult, tactical questions to be
answered are the ‘where’ and the ‘when’. A warning without knowing both
the ‘when’ and ‘where’ is naturally dissatisfying to decision-makers and
makes policy planning difficult. Other factors influencing whether a warning
is considered adequate and not a failure include: how far in advance was a
warning issued before the anticipated event of interest, how detailed was the
warning, who specifically was warned, and whether the media/public was
also informed. Furthermore, the difference between tactical and strategic
warning influences whether a failure truly occurred. Decision-makers knew
of impending threats prior to Pearl Harbor and 9/11; what failures there
were must be considered tactical, not strategic. Of course, warning about
every conceivable threat is not realistic or useful. ‘Officials cannot expect
intelligence warnings to be precise or unequivocal, except in the area of last-
minute tactical warning’.45
Regarding unrest in the Middle East in early 2011, a White House
spokesman asserted: ‘For decades, the intelligence community and the State

43
Wheaton, ‘Evaluating Intelligence’, p.618.
44
James P. Finley, ‘Nobody Likes to be Surprised: Intelligence Failures’, Military Intelligence
20/1 (1994) pp.15–21, 40.
45
Richard K. Betts, ‘Intelligence for Policymaking’, The Washington Quarterly 3/3 (1980)
p.128.
274 Intelligence and National Security

Department have been reporting on simmering unrest in the region. Did


anyone in the world know in advance that a fruit vendor in Tunisia was
going to light himself on fire and spark a revolution? No’.46 Despite the
difficulty for the IC or for anyone to anticipate and warn about this trigger
and the resultant instability that spread into Egypt, Senator Feinstein
complained that ‘intelligence agencies gave policymakers ‘‘no real warning’’
about the unrest that has threatened to topple the regime of Egyptian
President Hosni Mubarak’.47
Warning about a single event will rarely include the level of detail that
decision-makers crave, especially the timing of catalysts for surprise events.


The best the IC can do is to estimate. Of course the estimate is more likely to
be accurate when the historical track record is longer, the target is more
stable over time, and more data about the target are available. Anticipating
and warning about fluid issues, however, is most difficult, if not impossible.
Furthermore, warning about a single individual who may be carrying
explosives is much more difficult than anticipating an attack by dozens of
Soviet motorized rifle divisions. Intelligence failures due to surprise seem to
be in the eye of the beholder; when anything startling happens, some choose
to immediately play the intelligence failure card, a convenient scapegoat to
account for the vicissitudes of life. Unfortunately, ‘the public too often
assumes that the intelligence community is some sort of Department of
Avoid Surprises and consequently blames it for every unexpected event’.48

Intelligence Failures and Decision-makers


Besides intelligence failures related to poorly executed intelligence processes
or unsatisfactory intelligence products, intelligence failures may occur
because of decision-makers. In fact, one could argue whether situations often
touted as failures by the IC are really intelligence failures, policy failures, a
combination of both, or neither.49 According to Richard Betts, ‘it is usually
impossible to disentangle intelligence failures from policy failures’.50 Former
Director of Central Intelligence and Defense Secretary James Schlesinger,
both a senior intelligence officer as well as a senior decision-maker, has also
stated that ‘many intelligence ‘‘failures’’ are not the fault of the security or
intelligence services at all. ‘‘Intelligence tends to be the favorite scapegoat of
46
Ambinder, ‘An Intelligence Failure in Egypt?’ The article’s author goes on to muse: ‘If the
CIA thought that Ben Ali [former Tunisian president] would be deposed in, say, a week
instead of 48 hours, does that count as a botched call?’
47
Josh Gerstein, ‘Feinstein: U.S. Intelligence Offered ‘‘No Real Warning’’ on Egypt’, Politico,
8 February 2011, 5http://www.politico.com/blogs/joshgerstein/0211/Feinstein_US_intelli-
gence_offered_no_real_warning_on_Egypt.html4 (accessed 27 April 2011).
48
Paul R. Pillar, ‘Don’t Blame the Spies’, Foreign Policy, 16 March 2011, 5http://
www.foreignpolicy.com/articles/2011/03/16/dont_blame_the_spies4 (accessed 26 May
2011).
49
Gentry, ‘Intelligence Failure Reframed’, pp.255–61. Gentry devotes an entire section of the
article to ‘Policymaking and Leadership Failures’ and specifically addresses expectations.
50
Betts, ‘Analysis, War, and Decisions’, p.39.
Intelligence Failures 275

politicians’’, he noted, who ‘‘frequently ascribe to the intelligence commu-


nity failures which are really failures of policy’’’.51 Prillaman and Dempsey
similarly argue: ‘The perception of what constitutes an intelligence success
or failure sometimes is a normative or political judgment rather than an
indisputable fact; observers working with the same set of evidence will differ
over the answer and what some view as an ‘‘intelligence failure’’ may instead
reflect a ‘‘policy failure’’’.52 This is undoubtedly a primary reason why there
seem to be so many intelligence failures – there are many policy failures.
Downloaded by [Memorial University of Newfoundland] at 14:48 01 August 2014

Decision-maker Expectations
Intelligence failure may be the fig leaf that some decision-makers use to cover
their policy failures, but there is a less self-serving reason why decision-
makers first consider intelligence as the source of policy problems: unrealistic
expectations. Intelligence failures ultimately result when decision-makers’
expectations for intelligence information exceed what the IC can provide.
Decision-makers who do not understand intelligence often ascribe ‘pie-in-
the-sky’ capabilities to the IC and resultant omniscience, such as the ‘absurd
expectations heaped on the intelligence community during the recent [2011]
Arab uprisings’.53 Often these expectations stem from unrealistic portrayals
of intelligence in movies or on television. The following two quotes highlight
the tug-of-war decision-makers often face when dealing with intelligence:
the great desire for valuable intelligence that must be tempered by an
understanding of what intelligence really can do.

Far from being disrespected as they once were, intelligence gathering and analysis are now considered such indispensable government
functions – and so much is expected of them – that their inability to
disperse the fog of war or of international politics causes outrage. The
irony is that even though intelligence has come of age, it will inevitably
fall short of the public’s expectations, no matter what resources and
attention it receives, because of the irreducible unpredictability of its
targets. And no matter how accurate intelligence is, it will be useless if
ignored.54

Policymakers who knew how to use intelligence generally had a realistic view of what it could and could not do. They understood, for
example, that intelligence is almost always more helpful in detecting
trends than in predicting specific events. They knew how to ask
questions that forced intelligence specialists to separate what they actually knew from what they thought. They were not intimidated by intelligence that ran counter to the prevailing policy but saw it as a useful job to thinking about their courses of action.55

51 The Institute of World Politics, 'Institute Event: Experts Identify Anti-Terror Problems & Solutions', 5 December 2003, <http://www.iwp.edu/news_publications/detail/institute-event-experts-identify-anti-terror-problems-solutions> (accessed 8 April 2011).
52 William C. Prillaman and Michael P. Dempsey, 'Mything the Point: What's Wrong with the Conventional Wisdom about the C.I.A', Intelligence and National Security 19/1 (2004) p.5.
53 Pillar, 'Don't Blame the Spies'.
54 David Kahn, 'The Rise of Intelligence', Foreign Affairs 85/5 (2006) p.134.

Decision-maker Use of Intelligence


Once they are provided intelligence, decision-makers have basically two
choices on what to do with it: 1) use it in their decision-making process, or 2)
ignore it, with or without prejudice.56 Benignly ignoring intelligence for
whatever reason or rejecting it outright because it is perceived as erroneous, irrelevant, or does not support the decision-maker's policy preferences all
have the same effect – intelligence did not help the decision. This is counter
to the DNI’s guidance to the IC in Intelligence Community Directive (ICD)
208, Write for Maximum Utility, which has an obvious intent.57
If decision-makers wish to benefit from intelligence, they need to include it
alongside other inputs in their decision-making process and consider it in the
light for which it was provided. Of course their objective is the formulation
and execution of policy, not necessarily the minimization of intelligence
failures, which sometimes result after their neglect of the intelligence
available. For example,

the case of the Soviet invasion of Afghanistan does not seem to be one
of traditional ‘intelligence failure.’ US leaders were not surprised by the
invasion because they lacked clear evidence of Soviet military
preparations and movements in and around Afghanistan prior to the
invasion. As the historical record unequivocally demonstrates, such
intelligence was regularly reported to top US policymakers. Rather, a
combination of mindsets, wishful thinking, political divisions in the
policy community, and Administration preoccupation with other issues
helped preclude a discussion of alternative US policy options vis-à-vis
Soviet involvement in Afghanistan.58

Even the attacks on 9/11, an event most Americans probably consider as one
of the most egregious intelligence failures in US history, has been described by at least one author as a policy and not an intelligence failure, largely because of how decision-makers did and did not use intelligence.59

Figure 2. Policy Success/Failure and the Role of Intelligence.

55 John McLaughlin, 'Serving the Policymaker' in Roger Z. George and James B. Bruce (eds.) Analyzing Intelligence: Origins, Obstacles, and Innovations (Washington, DC: Georgetown University 2008) p.72.
56 See an excellent discussion of how intelligence does or does not influence decisions in Ohad Leslau, 'The Effect of Intelligence on the Decisionmaking Process', International Journal of Intelligence and Counterintelligence 23/3 (2010) pp.426–48. See also Amanda J. Gookins, 'The Role of Intelligence in Policy Making', SAIS Review 28/1 (2008) pp.65–73.
57 Intelligence Community Directive 208, Write for Maximum Utility, Director of National Intelligence, 17 December 2008.
58 Doug MacEachin and Janne E. Nolan, co-chairs, The Soviet Invasion of Afghanistan in 1979: Failure of Intelligence or of the Policy Process? Working Group Report No. 111 (Washington, DC: Georgetown University 2005) pp.12–13.
Decision-makers can turn an adverse situation into an intelligence failure
by not: 1) taking the time to thoughtfully consider or even review
intelligence provided to determine how it may help the decision at hand,60
2) bothering to inform intelligence personnel on how they should better
tailor their products and make them more useful, 3) understanding the limits
of intelligence and hesitating to decide, hoping more concrete intelligence
will appear, or 4) taking precautions when warned.
Policy and Intelligence – Successes and Failures
The utility of intelligence depends not only on product content, but also on
how decision-makers use it. The resulting decisions and actions by the
decision-maker ultimately end up as policy successes or failures or
somewhere in between. Naturally the characterization of policy results is
subjective and depends on the timeframe, whether short or long term.61
Further, the evaluation of intelligence may or may not correspond with the
evaluation of policy. Figure 2 summarizes the relationships among policy
successes and failures and the role of intelligence. 'Good' and 'bad' intelligence, though highly subjective and generalized, refers to the adequacy and utility of the content.

59 Stephen Marrin, 'The 9/11 Terrorist Attacks: A Failure of Policy not Strategic Intelligence', Intelligence and National Security 26/2–3 (2011) pp.182–202.
60 See Erik J. Dahl, 'Missing the Wake-up Call: Why Intelligence Failures Rarely Inspire Improved Performance', Intelligence and National Security 25/6 (2010) pp.778–99 on the importance of decision-maker receptivity to intelligence; see also W.R. Baker, 'The Easter Offensive of 1972: A Failure to Use Intelligence', Military Intelligence Professional Bulletin 24/1 (1998) pp.40–2, 60.
61 The United States viewed its overthrow of Iranian Prime Minister Mossadegh in 1953 at the time as a success. In light of the Iranian revolution in 1979 and seizure of the US embassy, one could question the long-term success of the policy.

What Do We Do About Intelligence Failures?


Even though intelligence failures are likely inevitable, there are some measures
that the IC and decision-makers can take to minimize them. Naturally when the
IC executes its prescribed processes as flawlessly as possible, it takes a major step towards reducing the number of potential failures. Of course, with
normal personnel turnover and changing circumstances, the IC has to continuously improve and learn since ‘lessons learned do not stay learned’.62
The recommendations below may seem like common sense. Each is
probably ongoing to some degree; however, consistent effort is required.
They fall into three areas: 1) understanding the role of accuracy, 2)
preparing for surprise, and 3) improving IC/decision-maker interaction.

Understand the Role of Accuracy


Certainly intelligence analysts want as many of their judgments as possible
to prove accurate, which would give decision-makers more confidence in the
products they prepare. Given the complexity and danger in today’s strategic
world, ‘the consequences of getting analysis wrong are much greater now’.63
Despite the desirability of accurate judgments, though, this cannot be the
overarching goal of analysis. Analysts can only be as accurate as the data
they are provided plus their sense or gut feel regarding a situation. Processes
are already in place to enhance the ‘likelihood’ of being accurate: alternative
analyses, red teaming, use of structured analytic techniques, etc.64 Besides
improving and executing processes correctly to minimize failures, the
question remains of what else can be done about accuracy.
When considering accuracy, it is important to keep in focus the overall
goal of analysis: clarifying a decision-maker’s understanding of the situation
and providing insight for decisions. Analysts can do so even when some of
their judgments prove wrong.65 Accuracy is just one of many attributes of
analysis. Although inaccuracies will inevitably appear, overall failure does
not have to be the result. ‘Policy makers may have to accept the fact that all intelligence estimators can really hope to do is to give them guidelines or scenarios to support policy decisions, and not the predictions they so badly want and expect from intelligence.’66 Decision-makers will be better served if they are encouraged to focus on threads of analysis over time and not on a single event.

62. John Hollister Hedley, ‘Learning from Intelligence Failures’, International Journal of Intelligence and Counterintelligence 18/3 (2005) p.446.
63. Petersen, ‘What I Learned in 40 Years of Doing Intelligence Analysis for US Foreign Policymakers’, p.19.
64. Two excellent works outlining ways to improve intelligence analysis are: Richards J. Heuer, Jr., Psychology of Intelligence Analysis (Washington, DC: Center for the Study of Intelligence 1999) and Rob Johnston, Analytic Culture in the US Intelligence Community (Washington, DC: Center for the Study of Intelligence 2005).
65. Glenn Hastedt, ‘Intelligence and U.S. Foreign Policy: How to Measure Success?’ International Journal of Intelligence and Counterintelligence 5/1 (1991) pp.49–62; Mark M. Lowenthal, ‘Towards a Reasonable Standard for Analysis: How Right, How Often on Which Issues?’ Intelligence and National Security 23/3 (2008) pp.303–15.

The following illustrates a proper view of accuracy:

If an analyst’s reputation were to hinge on a single prediction for the year, he would have been reckless to say the event [OPEC raising its oil prices in late 1973] would happen. If he were to be judged, however, by how well he flagged dangerous possibilities rather than by whether he was always ‘right,’ a strong warning about a price rise in the near term would have been warranted even several years earlier.67

Although putting the attribute of accuracy into perspective does not necessarily reduce the number of inaccurate judgments, it does redirect
decision-makers’ attention away from analytical track records and towards
the proper way to view intelligence analysis, accepting it for what it can do,
and not criticizing it for what it cannot do.

Prepare for Surprise


Since avoiding surprise has been a long-standing intelligence function,
numerous articles have been written on how to minimize and remedy it.68
Most of these articles deal with improving analytic processes. Some of the
more significant measures the IC can take to reduce surprise are summarized
as follows: adhere to the prescribed analytic standards to the highest degree
possible under the circumstances; learn from past failures, whether
attributable to process or product; continue to educate the IC on these
standards and past lessons; make efforts to characterize wildcards, drivers,
and triggers for potential unexpected events; monitor issues that decision-
makers are not following and warn as necessary; avoid deadening decision-
makers’ attentiveness by subjecting them to incessant warnings of a general
nature (‘Chicken Little’ or ‘Cry Wolf’ syndromes); educate decision-makers
to consider situations holistically over time and not just point to single
surprising events; set the intellectual threshold properly by seeking to avoid
significant surprise, not necessarily all surprise.

66. Arthur S. Hulnick, review of Estimative Intelligence: The Purposes and Problems of National Intelligence Estimating by Harold Ford, Conflict Quarterly 14/1 (1994) p.74.
67. Betts, ‘Intelligence for Policymaking’, p.121.
68. See for example, Chester A. Crocker, ‘Reflections on Strategic Surprise’ in Patrick Cronin (ed.) The Impenetrable Fog of War: Reflections on Modern Warfare and Strategic Surprise (Westport, CT: Greenwood Publishing Inc. 2008); Steve Chan, ‘The Intelligence of Stupidity: Understanding Failures in Strategic Warning’, American Political Science Review 73/1 (1979) pp.171–80; Stephen Marrin, ‘Preventing Intelligence Failures by Learning from the Past’, International Journal of Intelligence and Counterintelligence 17/4 (2004–2005) pp.655–72.

As much as the IC tries to limit surprise, it will not always succeed.
However, there is something else that the IC and decision-makers can do in
light of inevitable surprises: prepare to be surprised.69 Both should anticipate
the unexpected. Of course one cannot be fully prepared for every eventuality, but should focus on the possible events that would be most devastating. Both the IC and the supported decision-makers must be sufficiently agile to respond, so that when they are surprised they will not be totally caught off guard nor paralyzed by the shock. Philip Zelikow has outlined how the IC should deal with possible future threats: 1) determine potential catastrophic threats, 2) determine the indicators for these threats, 3) determine whether these indicators can be collected, and 4) determine what to do to counter the threats (in concert with decision-makers).70 Being prepared will not necessarily limit surprises, but it will make them relatively less catastrophic.

Improve IC/Decision-maker Interaction


Improving interaction between the IC and the various decision-makers it
serves is one of the most important tasks for dealing with intelligence
failures. Each group must not only interact, but understand each other and
their roles, despite political motivations behind policy. The IC should adopt
a mentality of service and work with decision-makers so that it can come up
with tailored and relevant insight and products.71 The following are some
suggestions on how both the IC and decision-makers can improve their
relationship to minimize intelligence failures.

Intelligence Community Responsibilities
The best way for intelligence analysts to build trust and establish credibility
with decision-makers is by delivering reliable and useful products. This is
more likely to happen when analysts seek out and clearly understand
decision-makers’ information needs and tailor products to meet them. They
must also ensure that decision-makers clearly understand the distinction
among facts, judgments, and gaps and the important role of judgments.
More importantly, analysts should discuss with decision-makers, to the extent possible, the basis of their judgments and the related confidence levels. Above all, they should resist the temptation to curry favor by politicizing intelligence.

69. Fareed Zakaria agrees when he states: ‘The goal should instead [of trying to predict events] be preparedness. Government agencies should be readying policymakers and bureaucrats for sharp changes in international, regional and national patterns. They should be imaginative about the possibilities of sudden shifts and new circumstances and force policymakers to confront the scenarios in advance. That is what has distinguished the most successful private-sector firms in managing crises.’ Fareed Zakaria, ‘Risk Management’, Washington Post, 28 April 2011, p.17.
70. Zelikow, ‘20th Century and the Onset of the Cold War’.
71. For further discussion on service to decision-makers see Josh Kerbel and Anthony Olcott, ‘Synthesizing with Clients, Not Analyzing for Customers’, Studies in Intelligence 54/4 (2010) pp.11–27.

In order to build appropriate expectations, the IC must take seriously its
responsibility to educate decision-makers about IC capabilities and
limitations, especially regarding accuracy and surprise.72 Naturally getting
on their busy calendars will be a daunting challenge, but it is critical to
maximizing the value of intelligence. The IC, however, must not over-
promise its ability to deliver truth. To a lesser and unclassified degree, the IC
also needs to educate the media and public about the same.73 Further, if a
decision-maker or the media publicly castigates the IC for not being able to predict, then the IC must selectively but publicly correct the misperception that the IC has a prediction mission.
One policymaker clearly expressed his legitimate expectations of the
intelligence community as follows:

• ‘Intelligence should not be politicized – but should be policy relevant
• Intelligence should not just inform – but also challenge policymakers
• Intelligence should not just be descriptive – but should also be actionable
• Help policymakers understand and use the Intelligence Community’.74

Decision-maker Responsibilities
Decision-makers will find that the IC will serve them better when they
engage in a vibrant dialogue. Information flowing from decision-makers to
the IC should include: clearly articulated information needs, priorities, and
issues under consideration as well as constructive comments on intelligence
products previously provided. Decision-makers must also understand IC
capabilities and recognize that omniscience and clairvoyance are not among
them. They also need to accept the fact that some judgments will be
‘inaccurate’ and that surprises will occur; they must suppress the urge to
mask policy failures as intelligence failures.
By accomplishing the foregoing, decision-makers can have realistic
expectations of intelligence such as the following: the IC will do its best to
provide what is needed but cannot exceed its capabilities regardless of how
intensely the information is desired; intelligence will clearly delineate
between facts and judgments; the IC will weigh opportunity costs, consider the desires of multiple decision-makers, and accept risk in some areas because of constrained resources; the IC will shy away from providing a single prediction of what is going to happen – if demanded, the prediction may not happen as described; only in rare cases will warning include all the details desired, such as specifically when or where an event may occur.

72. Dennis C. Wilder, ‘An Educated Consumer is Our Best Customer’, Office of the Director of National Intelligence, 2010 Galileo Awards Winner; see also Martin Petersen, ‘What We Should Demand from Intelligence’, National Security Studies Quarterly 5/2 (1999) pp.107–13.
73. Mark M. Lowenthal, ‘The Real Intelligence Failure? Spineless Spies’, Washington Post, 25 May 2008, <http://www.washingtonpost.com/wp-dyn/content/article/2008/05/22/AR2008052202961.html> (accessed 24 May 2011).
74. Gregory L. Schulte, ‘From the Balkans to Iran – Coupling Policy and Intelligence to Address the World’s Complex Problems’, Lessons learned presentation, McLean, VA, 2 December 2009. Ambassador Schulte is the former executive secretary of the National Security Council.

Intelligence failures may occur more often than we would like. Realistic expectations of the IC are important to understanding them. With good tradecraft and effective interaction between the IC and decision-makers, they can be minimized. Good decision-makers do not ask the IC to do more than it is able; good intelligence officers do not overpromise on what they can deliver.

Acknowledgments
All statements of fact, opinion, or analysis expressed in this article are those
of the author. Nothing in this article should be construed as asserting or
implying US government endorsement of its factual statements and
interpretations. This article has gone through the pre-publication review
process of the Office of the Director of National Intelligence (ODNI).
