Clever risk models fail to reflect reality

First Published: Sun, Nov 18 2007, 11:49 PM IST
Computerized models of risk played a bigger role in the recent crunch than ever before. Common flaws in risk models link the failure of rating agencies to predict the level of defaults on subprime mortgages, the freezing up of the market for complex debt securities, and problems at several dozen hedge funds and a handful of banks. We handed over the job of assessing financial risk to computers programmed by mathematicians and physicists, and they failed.
Quantitative risk management tools are widely used by both institutional investors and bankers. The most popular of these is the value-at-risk (VaR) model, which seeks to estimate the maximum loss a portfolio of equities, loans or other securities is likely to experience over a given period. Since it was first developed in the early 1990s by JP Morgan, VaR has swept through the world of finance.
An account of how these models came to be adopted by banks is provided in an excellent new book, Plight of the Fortune Tellers: Why We Need to Manage Risk Differently by Riccardo Rebonato, the global head of market risk and quantitative analysis at the Royal Bank of Scotland. The business of banking has dramatically changed over the past three decades. After financial markets were deregulated, banks found that many of their traditional corporate customers were by-passing them to borrow directly in the capital markets. In response, many banks changed their line of business. They started originating loans, turning them into securities and selling them on.
Once banks no longer held all their loans to maturity but instead owned large amounts of tradable securities, they needed to find a way to estimate the influence of market risk on their balance sheets. The quants at JP Morgan came up with their VaR model, which uses statistical techniques to measure risk. This model looks at the historic correlations and volatilities of different assets in a portfolio, crunches the numbers and comes up with a figure; say that 99% of the time a certain portfolio with $100 million (Rs393.5 crore) of assets will have a maximum expected loss of $5 million over the course of a month.
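To make the mechanics concrete, here is a minimal sketch of that variance-covariance calculation in Python. The weights, volatilities and correlations below are illustrative assumptions, not figures from JP Morgan’s model; with these inputs the calculation produces a loss figure of a few million dollars, in the spirit of the $5 million example above.

```python
import numpy as np
from scipy.stats import norm

# Illustrative inputs (assumptions, not figures from the article): a
# $100 million portfolio split across three asset classes.
portfolio_value = 100_000_000
weights = np.array([0.5, 0.3, 0.2])          # share of the portfolio in each asset
monthly_vol = np.array([0.02, 0.04, 0.06])   # historic monthly volatilities
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.4],
                 [0.1, 0.4, 1.0]])           # historic correlations

# Build the covariance matrix and the portfolio's one-month volatility.
cov = np.outer(monthly_vol, monthly_vol) * corr
portfolio_vol = np.sqrt(weights @ cov @ weights)

# 99% one-month value-at-risk, assuming normally distributed returns.
var_99 = portfolio_value * norm.ppf(0.99) * portfolio_vol
print(f"99% one-month VaR: ${var_99:,.0f}")
```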
This model was attractive to bankers for several reasons. For all its quantitative sophistication, VaR generates a single number for the maximum expected loss, expressed in dollar terms. Even old-fashioned bank managers, who had spent more of their time improving their golf handicaps than studying advanced mathematics, could understand that. Different securities respond in varying ways to the same event. For instance, when interest rates rise, fixed-income bonds tend to fall, but some securities, such as interest-only strips formed from a pool of mortgages, move in the opposite direction. The beauty of VaR is that it calculates the co-dependence between the assets and estimates the overall volatility of a portfolio of various securities.
Seductive as they are, quantitative risk models can’t meet the high expectations piled upon them. For instance, the models used by the rating agencies to calculate default probabilities on subprime mortgages assumed that US home prices would continue rising. The models failed to take into account the possibility that, after several years of rampant house-price inflation and widespread mortgage fraud, the future was unlikely to resemble the past.
In addition, there’s the question of how much relevant information is available. Rebonato cites the example of a bank employing a mere five years of data to measure the loss distribution on a portfolio to the 99.95th percentile. Such precision is absurd: it amounts to claiming that a certain level of loss will be exceeded no more than once in 2,000 years, on the strength of just five years of history.
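A rough simulation shows why that claim is so fragile: the 99.95th percentile is a one-in-2,000 event, while five years of daily data supply only about 1,250 observations. The distribution and sample sizes below are illustrative assumptions, used only to show how unstable such a tail estimate is.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative experiment (assumed figures, not from the article): estimate a
# 99.95th-percentile loss from five years of daily data (~1,250 observations)
# drawn from a fat-tailed distribution, and see how much the estimate varies.
n_obs = 5 * 250
estimates = [np.percentile(rng.standard_t(df=4, size=n_obs), 99.95)
             for _ in range(1_000)]

print(f"Across 1,000 simulated five-year histories, the 99.95th-percentile "
      f"estimate ranges from {min(estimates):.1f} to {max(estimates):.1f}")
```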
VaR models assume that profits and losses on a portfolio have a given distribution from which probabilities can be inferred. But that’s another dubious assumption. Normal or Gaussian distributions don’t exist in the world of finance, which is replete with feedback effects and so-called “fat tails”. As a result, some argue that it’s impossible to make strong inferences from financial data alone. This is scarcely a new discovery.
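A simple simulation makes the “fat tails” point concrete. The parameters below are illustrative assumptions, with a Student-t distribution standing in for the fat-tailed behaviour of real markets; the two sets of returns share the same standard deviation, yet their extremes differ sharply.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative comparison (assumed parameters, not data from the article):
# one million simulated daily returns with the same 1% standard deviation,
# first Gaussian, then fat-tailed (Student-t with 4 degrees of freedom,
# rescaled to the same standard deviation).
sigma, n, df = 0.01, 1_000_000, 4
gaussian = rng.normal(0.0, sigma, n)
fat_tailed = rng.standard_t(df, n) * sigma / np.sqrt(df / (df - 2))

for name, sample in [("Gaussian", gaussian), ("fat-tailed", fat_tailed)]:
    loss_999 = -np.percentile(sample, 0.1)   # 99.9% one-day loss
    worst = -sample.min()                    # worst simulated day
    print(f"{name:>10}: 99.9% daily loss {loss_999:.2%}, worst day {worst:.2%}")
```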
Not only are VaR models poor at predicting losses during a crisis, they actually contribute to market instability. In a prize-winning essay, Sending the herd off the cliff edge: the disturbing interaction between herding and market-sensitive risk management practices, Avinash Persaud of London-based financial adviser Intelligence Capital warned that the widespread adoption of similar risk models encourages herding by market participants.
When markets are calm and volatilities are low, as they were in the years leading up to the past summer’s debacle, VaR models indicate that risk levels are declining. That encourages financial players to take on more debt and place riskier trades. But when an upset occurs, the models show that risk has increased, forcing banks and others to offload their positions at the same time. That’s what happened last February and again in the summer. Only in the looking-glass world of finance could an attempt to precisely forecast losses actually increase market turbulence and magnify the size of those losses.
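A back-of-the-envelope sketch of that feedback, with purely illustrative numbers (the VaR limit and volatilities are assumptions, not figures from any bank): under a fixed VaR limit, the position a desk may hold is inversely proportional to measured volatility, so a volatility spike mechanically forces everyone using the same model to sell at once.

```python
from scipy.stats import norm

# Assumed inputs for illustration only.
var_limit = 5_000_000          # maximum tolerated 99% one-month loss
z = norm.ppf(0.99)             # 99% quantile of the normal distribution

def max_position(monthly_vol: float) -> float:
    """Largest holding consistent with the VaR limit at a given volatility."""
    return var_limit / (z * monthly_vol)

calm, stressed = 0.02, 0.06    # measured monthly volatility before/after an upset
before, after = max_position(calm), max_position(stressed)
print(f"Permitted position falls from ${before:,.0f} to ${after:,.0f}, "
      f"forcing sales of ${before - after:,.0f}")
```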