The limitations of intelligence analysis
As a small boy growing up in Delhi, I remember sleeping out on the terrace during summers. On those nights, my father would often point out the constellations and teach me how to identify them: Orion, the Big Dipper, Cassiopeia and their Indian counterparts such as the Saptarishi. As a kid, it seemed to me that if I stared at the sky long enough, I could imagine pretty much any shape I wanted to. The business of intelligence is a bit like that. If you stare long enough at events that have passed, you can always come up with the question: why didn't we see that happening? But there are some very good reasons why we don't see seemingly obvious events coming.

Pearl Harbor, the Bay of Pigs, the fall of the Berlin Wall, the Yom Kippur war, 9/11, the Mumbai terror attacks: the list goes on. After the fact, these seem to have been entirely predictable events (world-changing events at that), and yet experienced analysts, political observers, intelligence agencies and even powerful governments seem to have been completely blindsided by them. Businesses are no better, by the way. Established business leaders routinely fail to spot or leverage disruptive technologies, paradigm shifts or changing customer preferences. The fact that the Googles, the iPhones and the Dells of the world have displaced market leaders in an environment where the facts were apparently there for all to see and use proves that developing intelligence is not as easy as just having all the information.

One of my favourite examples is an incident of intelligence failure taught in military curricula. The story goes something like this. In 1967, Israel won the six-day war against half-a-dozen countries. The victory came at the price of humiliated and belligerent enemies and a very volatile neighbourhood. While there were sporadic periods of peace, tension simmered underneath, with Egypt vowing revenge and retribution. From 1972, the then president Anwar Sadat began preparing Egypt's armed forces in earnest. Acquisition of weapons, stepped-up training and reconnaissance, buttressing of defences and repositioning of artillery were all signs that something major was afoot.


Towards the end of September and through the first week of October 1973, the signs of war became dramatically obvious. Russian advisers in Egypt were moving out their families and Golda Meir, the Israeli prime minister, received a warning about the impending attack directly from King Hussein of Jordan. Mossad chief Zvi Zamir continued to hold the view that war was not an Arab option. He was wrong. Syria and Egypt attacked Israel on 6 October 1973 in what was the bloodiest war that Israel had ever fought.

Why did intelligence fail in this instance, as it has in so many others in world history? It is because intelligence doesn't fail. Intelligence is not a project, which can fail or succeed. It is a process with elements of planning and execution, and so for "intelligence" to succeed, it needs to be acted upon successfully. So why don't decision makers act on intelligence? Because of two factors.

One, specific intelligence is very hard to come by in real time. Yes, the dots do exist, and while they are easy to connect post facto, it is exceptionally difficult to do so as events unfold. Even highly predictable cyclical events such as economic volatility leave experts flummoxed.

To see why this happens, let us go back to 1970. In Egypt, Sadat had just become president after the death of Gamal Abdel Nasser. For over three years, there was intense pressure on him to avenge the defeat of the six-day war, and from the time he took over, Sadat was vociferous in articulating Egypt's intention to retaliate. Egypt mobilized its army over a dozen times in 1973, and nothing happened. Trusted sources gave warnings of attacks, and no attacks took place. Such false positives take a heavy toll on security apparatuses. Each time a city goes on alert, it causes incredible damage and trauma: airlines and passengers lose millions every year because of airport alerts, and millions of man-hours are lost to security checks on roads and at airports, hotels, hospitals and malls. The very act of acting on intelligence can make a society behave as if under siege. It also puts intense pressure on intelligence advisers not to be wrong.

And this leads to the second factor, a phenomenon called "committed credibility". Intelligence analysts are like doctors. They read signs and use their tools, training and experiential knowledge to diagnose situations. As in any other field, some are gifted and some are average. Their diagnosis is used by decision makers to choose courses of action. But as with complex diseases, no diagnosis can be truly conclusive. There are always elements missing, and each new piece of information has the potential to transform the diagnosis, sometimes drastically.

When I was in my early 20s, I found myself once again staring at the skies, this time in Dehradun under the watchful gaze of my instructors at the Indian Military Academy. They were teaching me the constellations all over again, but now with a very specific purpose: to locate the North Star and get directions. And again, intelligence is a bit like that. It gives us direction and guidance, but there is no getting away from the fact that, like any diagnosis, it will sometimes fail. Which is why, as a society and a nation, our ability to respond must be swift and decisive, because the hope of prevention cannot be a strategy.

Raghu Raman is an expert on homeland security. Respond to this fortnightly column at