13.2 Model Risk
We define model risk as the risk of a model being poorly specified, incorrectly implemented or used in a manner for which it is inappropriate.
Consider the task of pricing swaptions. A financial engineer might employ finance theory to develop a model for that purpose. A programmer might implement the model as a computer program. A trader might use that implementation to price a swaption trade. In this example, there is a risk that the financial engineer might poorly specify the model, that the programmer might incorrectly implement it, or that the trader might use it in a manner for which it is not intended—perhaps pricing some non-standard swaption for which the model is not suited. Here we have three types of model risk:
- Type A: model specification risk,
- Type B: model implementation risk, and
- Type C: model application risk.
Every quantitative model has three components:
- inputs,
- analytics, and
- outputs.
To assess model risk, we should assess the potential for each type of model risk to arise in each model component.
13.2.1 Type A: Model Specification Risk
Model specification risk is the risk that a model will be poorly specified. The question is not so much whether a model is “right” as whether it is useful. As Box and Draper (1987, p. 424) observed:
Essentially, all models are wrong, but some are useful.
Any meaningful model makes predictions. A well specified model will—if correctly implemented and appropriately used—make generally useful predictions. This is true whether a model is used for predicting earthquakes, forecasting weather or assessing market risk.
Model specification risk arises primarily when a model is designed—when inputs are operationally defined, mathematical computations are specified, and outputs are indicated, preferably all in a formal design document. While careless errors such as misstating a formula are always a problem, a more general issue is the fact that a particular combination of inputs and computations may yield outputs that are misleading or otherwise not useful. While an experienced modeler may review a design document and make some assessment of how well the model might perform, the ultimate test is to implement the model and assess its performance empirically.
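To make “assess its performance empirically” concrete, here is a minimal backtesting sketch in Python. It assumes hypothetical P&L data and a fixed reported value-at-risk; it simply counts exceedances and compares them with the number a one-day 95% measure implies:

```python
import numpy as np

# A rough backtest: count days on which realized losses exceeded the
# reported one-day 95% value-at-risk, and compare with the roughly 5%
# of days such exceedances should occur. All data here are hypothetical.
rng = np.random.default_rng(1)
days = 250
daily_loss = -rng.normal(0.0, 1_000_000.0, size=days)  # realized P&Ls, sign-flipped to losses
reported_var = np.full(days, 1_645_000.0)              # reported 95% VaR for each day

exceedances = int((daily_loss > reported_var).sum())
print(f"{exceedances} exceedances in {days} days; expected about {0.05 * days:.0f}")
```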
Model specification risk also exists after a model is designed. The model itself may be modified from time to time, perhaps to address new traded instruments, to fix a perceived problem with its performance, or to remain current with industry practices. It can also arise without any obvious changes to a value-at-risk measure. For example, historical values for a specific key factor may be drawn from a particular time series maintained by a data vendor. If the data vendor changes how it calculates values of that time series, or loosens quality controls, this might impact the value-at-risk measure’s performance.
13.2.2 Type B: Model Implementation Risk
Model implementation risk is the risk that a model, as it is implemented, strays from what is specified in the design document. Inputs may differ: perhaps historical data for a key factor is obtained from a source other than that specified in the design document. Mathematical formulas may differ: this can arise from simple typing errors, or a programmer may implement a formula in a manner that inadvertently alters it. Outputs can be misrepresented: perhaps two numbers are juxtaposed in a risk report, or outputs are presented in a manner that is confusing or misleading.
I recall once attending a bank’s risk committee meeting. A printout of the risk report was distributed, and I noticed that the exposure indicated for a particular category of risks was zero. I asked about this, as I knew the bank took such risks. No one knew why the number was zero. I went back and looked at past risk reports and found that the number was always zero. The system was calculating that exposure, but the result never made its way into the risk report. Somehow, zero had been hard-coded into that particular field of the risk report.
Implementation error can cause a good model to be implemented as a bad model, but that is just part of the problem. If the implemented model strays from its design—whether it performs well or not—it is a different model from what the designer and users believe it to be. They think they have one model, when they actually have another. Results may be unpredictable.
Implementation error generally results from human error. With large software projects, some coding or logic errors are all but inevitable. Implementation risk also arises if dishonest programmers decide to cut corners. Unfortunately, such dishonesty occurs, but it is not common on large value-at-risk implementations subject to a rigorous testing and validation process.
13.2.3 Type C: Model Application Risk
Model application risk is the risk that a model will be misinterpreted or used in a manner for which it is unsuited. Misinterpretation can produce a false sense of security or prompt actions that are inadvisable. There are many ways a model can be misused.
A risk manager once brought me in to consult for a day. He wanted me to explain to his CFO how the firm suffered a market loss of USD 22MM when their 95% value-at-risk had been reported as USD 5MM. The loss had wiped out an entire year’s profits, and the CFO was trying to shift blame to a “flawed” value-at-risk system. The risk manager had installed the system a few years earlier, so he was taking some heat.
I sat with the CFO—Machiavelli reincarnated, if I am not mistaken—and explained that the USD 22MM loss was for the entire month of March. The value-at-risk system reported one-day value-at-risk. He looked at me blankly. I asked him if it was reasonable to expect monthly P&L’s to be of a greater magnitude than daily P&L’s, since markets typically move more in a month than in a day. He accepted this, so I had him take out his calculator. I explained that, if we assumed 21 trading days in a month, we could approximate the one-month value-at-risk as the one-day value-at-risk multiplied by the square root of 21. He typed the numbers into his calculator, and the result was USD 23MM. Whatever the man’s blame-shifting motives may have been, he was impressed with the math. He cracked a smile.
I was fortunate the numbers worked out so nicely. Politics aside, the CFO learned a lesson that day. He really had misunderstood what the value-at-risk measure was telling him.
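For readers who want to reproduce the arithmetic, here is a minimal sketch in Python using the figures from the anecdote. The square-root-of-time rule is only an approximation; it assumes daily returns are independent and identically distributed:

```python
import math

# The CFO's calculation: scale the reported one-day 95% value-at-risk
# to a one-month horizon with the square root of time.
one_day_var = 5_000_000          # reported one-day 95% VaR (USD)
trading_days = 21                # assumed trading days in a month

one_month_var = one_day_var * math.sqrt(trading_days)
print(f"Approximate one-month VaR: USD {one_month_var:,.0f}")
# Approximate one-month VaR: USD 22,912,878 -- close to the USD 23MM
# the CFO obtained, and comparable to the USD 22MM monthly loss.
```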
Modelers tend to have different backgrounds from traders or senior managers. We think in different ways. While designers of a model understand the meaning of its outputs on a technical level, users may have a less precise, more intuitive understanding. Modelers generally have a better sense of how reliable a model’s outputs are than do users, who may have unrealistic expectations. Also, if they don’t like what a model tells them, users are more likely to let emotions cloud their assessment of the model’s worth.
I once had a trader confide to me that a real-time value-at-risk measure I had implemented was wrong. He had checked his reported value-at-risk, placed a trade, and then noticed that the value-at-risk went down. I asked him if the new trade had been a hedge. When he said “yes,” I explained that the value-at-risk measure didn’t quantify the risk of each trade and then sum the results. It analyzed correlations to capture hedging and diversification effects. In this way, the value-at-risk measure understood that his last trade had been a hedge, which is why the value-at-risk had gone down. The trader stared at me uneasily, as if I were some used car salesman.
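A minimal sketch, assuming a simple linear (parametric) value-at-risk measure with hypothetical exposures, volatilities, and correlation, illustrates the point the trader missed:

```python
import numpy as np

# The measure aggregates risk through a covariance matrix, so an
# offsetting position reduces portfolio value-at-risk rather than
# adding to it. All positions and parameters here are hypothetical.
Z_95 = 1.645  # one-tailed 95% quantile of the standard normal

def parametric_var(exposures, vols, corr, z=Z_95):
    cov = np.outer(vols, vols) * corr            # covariance matrix of returns
    portfolio_sd = np.sqrt(exposures @ cov @ exposures)
    return z * portfolio_sd

# Original position: USD 10MM with 2% daily volatility.
print(parametric_var(np.array([10e6]), np.array([0.02]), np.array([[1.0]])))

# Add a short USD 9MM position 95% correlated with the first -- a hedge.
exposures = np.array([10e6, -9e6])
vols = np.array([0.02, 0.02])
corr = np.array([[1.00, 0.95],
                 [0.95, 1.00]])
print(parametric_var(exposures, vols, corr))
# The second figure is far smaller than the first: the measure "sees"
# that the new trade offsets the old one. Summing stand-alone VaRs
# trade by trade would instead show risk rising with every trade.
```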
A common misperception with value-at-risk is a belief that it represents a maximum possible loss. It does not: losses should exceed a one-day 95% value-at-risk on roughly one trading day in twenty. Still, in the scheme of things, a quantile-of-loss is easier to grasp than other PMMRs, such as expected tail loss or standard deviation of return.
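The following sketch, using simulated P&Ls, computes these three PMMRs side by side; all figures are hypothetical:

```python
import numpy as np

# Three PMMRs computed from the same simulated one-day P&L sample:
# a quantile of loss (value-at-risk), expected tail loss, and the
# standard deviation of P&L.
rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1_000_000.0, size=100_000)  # simulated daily P&Ls (USD)
loss = -pnl

var_95 = np.quantile(loss, 0.95)        # 95% quantile of loss
etl_95 = loss[loss >= var_95].mean()    # average loss beyond that quantile
sd = pnl.std()

print(f"95% value-at-risk:      USD {var_95:,.0f}")
print(f"95% expected tail loss: USD {etl_95:,.0f}")
print(f"standard deviation:     USD {sd:,.0f}")
# By construction, losses exceed the 95% VaR on about 5% of days --
# value-at-risk is a quantile, not a maximum possible loss.
```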
A common form of model application risk is use of an otherwise good value-at-risk measure with a portfolio for which it is unsuited. This might happen if an organization’s trading activities evolve over time, but their value-at-risk measure is not updated to reflect new instruments or trading strategies.
Alternatively, the environment in which a model is used may evolve, rendering its predictions less useful—the model was well specified for the old environment, but now it is being misapplied in a new environment. For example, a value-at-risk measure should employ key factors related to liquid, actively traded instruments. But markets evolve; liquidity can dry up. A choice of key factors that is suitable today may not be suitable next year. Consider how the forward power market dried up in the wake of the 2001 Enron bankruptcy, or how the CDO market collapsed following the 2008 subprime mortgage debacle.
Exercises
Categorize the following situations as relating to Type A, Type B or Type C model risk. Some relate to more than one category. Discuss each situation.
- A trading organization implements a value-at-risk measure for an options portfolio. It constructs a remapping based on the portfolio’s deltas and then applies a linear transformation.
- A firm calculates value-at-risk each day and includes it in a risk report, which is available to traders and management through the firm’s intranet. Most have never figured out how to access the risk report.
- A value-at-risk measure employs many key factors. Some of them are prices of certain novel instruments. A few years after the value-at-risk measure is implemented, turmoil in the markets causes trading in those instruments to cease. Without data, the value-at-risk measure is inoperable.
- An equity portfolio manager asks his staff to calculate his portfolio’s one-year 99.9% value-at-risk. This relates to a loss that should occur once every thousand years, but most of the stocks in his portfolio haven’t existed for thirty years. With inadequate data, his staff cobbles together a number. The portfolio manager then asks his staff to calculate what the portfolio’s value-at-risk would be if he implemented various put option hedges. The staff has no idea how to extend their crude analysis to handle put options, but they don’t tell him this. Instead, they simply make up some value-at-risk numbers. The portfolio manager reviews these and implements a put options strategy whose cost he feels is warranted by the indicated reduction in value-at-risk.
- A firm implements its own one-day 97.5% USD value-at-risk measure, which employs a Monte Carlo transformation. A programmer coding the measure inadvertently hits an incorrect key, which causes the measure to always calculate the 94.5% quantile of loss instead of the intended 97.5% quantile of loss.
- A firm bases traders’ compensation on their risk-adjusted P&L’s. The risk adjustment is calculated from the traders’ value-at-risk, which is measured daily based on the composition of their portfolios at 4:30PM. The head of risk management eventually notices that traders hedge all their significant exposures with futures just before 4:30PM each day—and then lift the hedges, either later in the evening or early the next morning.