Big banks’ risk does not compute
In his new book The End of Alchemy, former Bank of England governor Mervyn King makes a bold argument: No matter how fine-tuned our regulations, no matter how sophisticated our risk management, they cannot properly address the hazards that the financial system in its current form presents.
As it happens, mathematical analysis points to the same conclusion.
Prior to the 2008 financial crisis, global regulators required banks to assess the riskiness of their investments with a measure known as “value at risk”—an estimate of how much, given recent price history, they might lose on a very bad day (say, the worst one in a hundred). As of 2019, they will have to use a new measure, known as expected shortfall, that is supposed to do a better job of capturing the kind of severe losses that can happen in a crisis.
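Both measures can be computed directly from a history of daily returns. The snippet below is a minimal sketch of the historical-simulation approach, not any bank’s or regulator’s actual formula; the heavy-tailed simulated returns are an assumption chosen purely for illustration.

```python
import numpy as np

def var_and_es(returns, level=0.99):
    """Historical value-at-risk and expected shortfall.

    VaR is the loss exceeded only on the worst (1 - level) fraction of
    days; expected shortfall is the average loss on those worst days.
    """
    losses = np.sort(-np.asarray(returns))[::-1]  # largest losses first
    k = max(1, int(np.ceil(len(losses) * (1 - level))))
    var = losses[k - 1]       # loss on the k-th worst day
    es = losses[:k].mean()    # average of the k worst days
    return var, es

# Illustrative data: 1,000 days of simulated heavy-tailed daily returns
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=1000) * 0.01
var, es = var_and_es(returns, level=0.99)
print(f"99% VaR: {var:.4f}, 99% ES: {es:.4f}")
```

By construction expected shortfall is at least as large as value-at-risk at the same level, since it averages only the days beyond the VaR threshold.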
Calculating either measure entails gathering historical data on the investments in question—be they stocks, bonds or derivatives—and assuming that the future will look something like the past. This is a flaw in itself, because regulations, technology and political and economic conditions keep changing, making the future invariably new and different.
But there’s a deeper mathematical problem: Too little data.
Mathematicians Jon Danielsson and Chen Zhou have examined how much data would be required to get reliable estimates of either value-at-risk or expected shortfall, even in a world where the future is like the past. Suppose you wanted a reasonably accurate reading of expected shortfall—say, an estimate likely to fall within 5% of actual losses. For the complex portfolios of large financial institutions, this would require decades of price history on hundreds or thousands of different assets—something that simply doesn’t exist for many of those assets (many firms don’t even stay in business that long, for example). With less data, the result would be illusory, offering no meaningful sense of the actual risk.
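The data problem can be illustrated with a small simulation—a sketch of the general phenomenon, not Danielsson and Zhou’s actual analysis. Even for a single asset whose loss distribution is known exactly and never changes (a standard normal, whose true 99% expected shortfall is about 2.665), the estimate scatters widely at realistic sample sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

def es_estimate(losses, level=0.99):
    """Average of the worst (1 - level) fraction of observed losses."""
    losses = np.sort(losses)[::-1]
    k = max(1, int(np.ceil(len(losses) * (1 - level))))
    return losses[:k].mean()

# Analytic 99% expected shortfall of a standard normal loss:
# phi(z_p) / (1 - p) with z_0.99 ~ 2.3263, giving ES ~ 2.665
true_es = 2.665

errors = []
for n in (250, 2500, 25000):  # roughly 1, 10 and 100 years of daily data
    estimates = [es_estimate(rng.standard_normal(n)) for _ in range(200)]
    rel_err = np.std(estimates) / true_es
    errors.append(rel_err)
    print(f"n={n:>6} observations: relative scatter of ES estimate "
          f"~ {rel_err:.1%}")
```

The scatter shrinks only slowly as the sample grows, because a 99% tail measure is computed from just 1% of the observations—and a real portfolio must estimate it jointly across many assets at once.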
To be clear, this isn’t an issue with the definition of expected shortfall. All such summary measures suffer from the same shortcoming. Given that the entire point is to help banks and regulators control risk by giving them a clear view of it, this is a fundamental failure. Newer measures may be harder for banks to manipulate, but this is a pointless improvement if the numbers bear no relation to risk in the first place.
This line of mathematical research also has implications for a central concept of asset management: portfolio optimization, the idea that, by choosing the right combination of assets, an investor can get the same return with less risk.
Because it depends on price histories to figure out what the right mix would be, the method suffers from the same data inadequacy problem. In a series of works over the past decade (see https://goo.gl/BguluZ, most papers from 2007 onwards), physicist Imre Kondor and colleagues have shown that optimizing a portfolio of dozens or hundreds of assets is often simply impossible.
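The instability is easy to reproduce in miniature—again a toy illustration under stated assumptions, not Kondor’s methodology. Take 50 statistically identical, independent assets, for which the true minimum-variance portfolio is known to be equal weights, and see how far the weights estimated from a sample covariance matrix land from that optimum at different history lengths:

```python
import numpy as np

rng = np.random.default_rng(2)

def min_variance_weights(returns):
    """Minimum-variance weights from a sample covariance matrix:
    w proportional to inv(Sigma) @ 1, normalized to sum to one."""
    cov = np.cov(returns, rowvar=False)
    inv = np.linalg.pinv(cov)  # pseudo-inverse guards against singularity
    w = inv @ np.ones(cov.shape[0])
    return w / w.sum()

n_assets = 50
drifts = []
for n_days in (60, 600, 6000):
    # Independent assets with identical risk: true optimum = equal weights
    sample = rng.normal(0.0, 0.01, size=(n_days, n_assets))
    w = min_variance_weights(sample)
    drift = np.abs(w - 1 / n_assets).max()
    drifts.append(drift)
    print(f"{n_days:>5} days of history: max deviation from "
          f"true equal weights = {drift:.3f}")
```

When the number of observations is close to the number of assets, the estimated covariance matrix is nearly singular and the “optimal” weights are dominated by noise—the effect Kondor and colleagues analyze for realistic portfolio sizes.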
In short, the measures of risk used by the world’s largest financial institutions may be so far from optimal as to be useless. Which suggests that Mervyn King is probably right on another point: Achieving a resilient financial system—one that won’t pose a threat to the economy—will require much more radical change than policymakers have contemplated so far. Bloomberg
Comments are welcome at email@example.com