Of late, whenever I witness debates on security strategies and doctrines, I am a little amused by the conviction some people seem to have about what ought to be done. Interestingly, their detractors are equally clear and strong in their arguments. But this seeming dichotomy of views is natural when dealing with issues that have major trade-offs, regardless of which decision is taken.
Understanding and designing security is a challenge precisely because of the inherently paradoxical inputs and contradictory solution requirements. Let me illustrate this with an example. When a weapon system is being designed, it must conform to several different parameters. Let us take the design of an assault rifle.
The user’s—in this case a soldier’s—wish list would go something like this: The weapon has to be lightweight, since he has to carry and live with his personal weapon. It has to be accurate over long ranges, so that he can engage the enemy before being fired at. It must be easy to handle, so that it can be operated with one hand (this especially holds true when fighting in built-up areas, where one hand is needed to open doors and remove obstructions). It must require low maintenance, as field environments are always grimy. It should have a rapid rate of fire, to overwhelm the enemy with superior firepower. And it should be compact, so that it can be used in confined spaces such as rooms.
As if these were not enough, there are operational and strategic requirements as well. The ammunition the weapon fires should be of standard calibre, so that a single logistic line can be maintained for different weapons such as rifles and machine guns. And the weapons must have a high degree of standardization and interchangeability, to enable cannibalization in an emergency and also to allow a lower inventory of spare parts.
As you might have guessed, almost all of these requirements contradict each other. You can have either a compact weapon or an accurate one—not both, because a longer barrel is needed for the latter. For robustness, the construction has to be rugged, which in turn increases the weight of the weapon. If you want easy handling, then the weapon can’t be complicated, in which case it also can’t have a high rate of fire. Designing a weapon system around an existing calibre puts severe limitations on its ballistic capabilities, and any radical change in the design sets off a whole chain of events in the logistical supply chain as well as retraining requirements.
There are two lessons in this. First, designing good security is about calculated trade-offs, where one aspect is deliberately compromised to achieve another. Second, and more importantly, since an ideal security design caters to the general good rather than to any specific speciality, every user (or faction) can accuse it of being suboptimal from his own point of view.
This situation is further complicated by inaccurate, misplaced and misjudged inputs (often confused with intelligence).
During the early years of World War I, troops went into battle with almost no protection for their heads. Thousands of casualties among French troops engaged in trench warfare prompted the development of the “Adrian Helmet” made of mild steel. However, immediately after the induction of the helmets, the number of wounded soldiers increased dramatically. This inconsistency confounded the military high command, which believed the spike was caused by improper use of the helmets. But that did not make sense: even an improperly worn helmet would offer better protection than no helmet at all. Several other equally preposterous theories were propounded, such as that helmeted soldiers made better targets, or that the helmets made soldiers more reckless.
This paradox, however, had a simple yet counter-intuitive explanation. The number of wounded increased because the number of fatalities was decreasing. In other words, more soldiers were being recorded as wounded after using the helmets because they would have been dead without that protection.
Both these examples are straightforward, and are illustrative for that very reason, but the security environment is usually far more complicated and clandestine. By their very nature, security failures are highly visible and achieve notoriety, while successes have to remain covert. Also, unlike a weapon’s design, the success or failure of a security doctrine needs to be judged over the long term. What seems successful in a certain year may be a disaster just a decade later. For example, US support, channelled through Pakistan, for the Afghan mujahideen against the Soviet onslaught into Afghanistan was considered an intelligence coup in the 1980s, but it laid the foundation for the rise of the Taliban, 9/11 and ongoing threats to the US.
Carl von Clausewitz, the great military strategist, referred to the twilight state of ambiguity in which operational decisions are taken as the “fog of war”. Developing an intelligent security strategy is more complicated still. Not only must it operate in the fog of war, it must also withstand the test of history and satisfy as many stakeholders as possible. This is something critics who view security situations from a single perspective ought to be cognizant of.
Raghu Raman is an expert and a commentator on internal security.
Comments are welcome at firstname.lastname@example.org