The highlight of 2012 for me was when, during a difficult moment, I received a message of encouragement from a firefighter.
His point was that he found my ideas on tail risk extremely easy to understand. His question was: How come risk gurus, academics, and financial modellers don’t get it?
Well, the answer is right there, staring at me, in the message itself. The fellow is a firefighter; he cannot afford to misunderstand risk. He is the one who would be directly harmed by his error. In other words, he has skin in the game. And, in addition, he is honourable, risking his life for no bonus.
This idea of skin in the game is central to the proper functioning of a complex world. In an opaque system, alas, there is an incentive for operators to hide risk, taking upside without downside. And there is no possible risk-management method that can replace skin in the game—particularly when informational opacity is compounded by informational asymmetry, along with what economists call the principal-agent problem.
Those who have the upside are not necessarily those who incur the downside. For example, bankers and corporate managers get bonuses for “performance”, but not reverse bonuses for negative performance, and they have an incentive to bury risks in the tails of the distribution—in other words, to delay blowups.
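The arithmetic of "burying risks in the tails" can be made concrete with a small sketch. The numbers below are invented for illustration: a strategy that earns a small steady payoff almost every year, with a rare large loss, has negative expected value, yet whoever is paid on annual results will very likely collect several bonuses before the blowup arrives.

```python
# Illustrative sketch (hypothetical numbers): a tail-risk-hiding payoff.
P_BLOWUP = 0.01       # annual probability of the hidden tail event
STEADY_GAIN = 1.0     # payoff in a normal year
BLOWUP_LOSS = -100.0  # payoff when the tail event hits

# True expected annual payoff: slightly negative.
expected_payoff = (1 - P_BLOWUP) * STEADY_GAIN + P_BLOWUP * BLOWUP_LOSS

def survival_probability(years: int) -> float:
    # Probability of "looking good" -- no blowup -- for the given horizon.
    return (1 - P_BLOWUP) ** years

print(f"expected annual payoff: {expected_payoff:.2f}")          # -0.01
print(f"P(no blowup in 5 yrs):  {survival_probability(5):.3f}")  # 0.951
```

The point of the sketch is that short track records cannot distinguish this payoff from a genuinely profitable one; only the rare loss, borne by someone, reveals it.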
The ancients were fully aware of this incentive to hide risks, and implemented very simple but potent heuristics. About 3,800 years ago, the Code of Hammurabi specified that if a house collapses and causes the death of its owner, the house’s builder shall be put to death. This simple tenet is at the origin of “an eye for an eye” and the Golden Rule in ethics (“Do unto others as you would have them do unto you”). But, beyond ethics, this was simply the best risk-management rule ever.
The ancients understood that the builder always knows more about the risks than the client, and can hide sources of fragility and improve his profitability by cutting corners. The foundation is the best place to hide risk. The builder can also fool the inspector; the person hiding risk has a large informational advantage over the one who has to find it.
Why do I believe that a certain class of people has an incentive to “look good” rather than “do good”? The reason is simply the absence of personal risk. And the problems and remedies are as follows:
First, consider policymakers and politicians. In a decentralized system—say, municipalities—these people are checked by a feeling of shame upon harming others with their mistakes. In a large centralized system, by contrast, the source of errors is not so visible, and a spreadsheet does not make one feel shame. This penalty of shame is, in addition to other arguments, a case for decentralization.
Second, we misunderstand corporate managers’ incentive structure.
Contrary to public perception, corporate managers are not entrepreneurs. They are not what one could call agents of capitalism. Since 2000, in the US, the stock market has lost—depending on how one measures it—up to $2 trillion for investors (compared to returns had they left their funds in cash or Treasury bills).
So, one would be inclined to think that since managers’ pay is based on performance incentives, they would be incurring losses.
Not at all: there is an asymmetry. Money-losing managers do not have negative compensation. There is a built-in optionality in the compensation of corporate managers that can be removed only by forcing them to eat some of the losses. Because of the embedded option, while shareholders have lost, managers have earned more than a half-trillion dollars for themselves.
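The embedded option can be sketched in a few lines. The bonus rate and yearly figures below are invented for illustration: the manager's payoff is a share of gains floored at zero, so a blowup that wipes out the shareholders costs the manager nothing already collected.

```python
# Hypothetical sketch of the compensation asymmetry: the manager holds an
# option on performance -- a share of gains, nothing paid back on losses.
BONUS_RATE = 0.10

def manager_bonus(annual_pnl: float) -> float:
    # Optionality: the bonus floors at zero; losses cost the manager nothing.
    return BONUS_RATE * max(annual_pnl, 0.0)

# Stylized sequence: two good years, then a blowup exceeding the prior gains.
pnl_by_year = [10.0, 10.0, -50.0]

shareholder_total = sum(pnl_by_year)                        # -30.0
manager_total = sum(manager_bonus(p) for p in pnl_by_year)  # 2.0

print(f"shareholders: {shareholder_total}, manager: {manager_total}")
```

Removing the floor—"forcing them to eat some of the losses", in the essay's terms—is what strips out the option.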
Third, there is a problem with academic economists, quantitative modellers, and policy wonks. The reason why economic models do not fit reality is that economists have no disincentive, and are never penalized for their errors. So long as they please the editors of academic journals, their work is considered fine.
As a result, we use models such as portfolio theory and similar methods without the remotest empirical reason. The solution is to prevent economists from teaching practitioners. Again, this highlights the case for decentralization: a system in which policy is decided at a local level by smaller units—and thus is not in need of economists.
Fourth, predictions in socio-economic domains do not work, but predictors are rarely harmed by their forecasts. Yet we know that people take more risks after they see a numerical prediction. The solution is to ask—and only take into account—what the predictor has done, or will do in the future.
I tell people what I have in my portfolio, not what I predict; that way, I will be the first to be harmed. It is not ethical to drag people into these exposures without incurring the risk of losses. In my book Antifragile, I tell people what I do, not what they should do, to the great irritation of the literary critics. I do so not for autobiographical reasons, but only because the other approach would not be ethical.
Finally, there are warmongers. To deal with them, the one-time consumer advocate and former US presidential candidate Ralph Nader has proposed that those who vote in favour of war should place themselves or a descendant into military service.
One can only hope that something will be done in 2013 to implement some skin-in-the-game heuristics. A safe and just society demands nothing less. ©2012/Project Syndicate
Nassim Nicholas Taleb, a former trader, is a professor of risk engineering at New York University’s Polytechnic Institute. He is the author of Antifragile: Things That Gain from Disorder, from which this essay was adapted.