13.2 Model Risk


We define model risk as the risk of a model being poorly specified, incorrectly implemented, or used in a manner for which it is inappropriate.

Consider the task of pricing swaptions. A financial engineer might employ finance theory to develop a model for that purpose. A programmer might implement the model as a computer program. A trader might use that implementation to price a swaptions trade. In this example there is risk that the financial engineer might poorly specify the model, or that the programmer might incorrectly implement it, or that the trader might use it in a manner for which it is not intended—perhaps pricing some non-standard swaption for which the model is not suited. Here we have three types of model risk:

  • Type A: model specification risk,
  • Type B: model implementation risk, and
  • Type C: model application risk.

Every quantitative model has three components:

  • inputs,
  • analytics, and
  • outputs.

To assess model risk, we should assess the potential for each type of model risk to arise in each model component.

13.2.1 Type A: Model Specification Risk

Model specification risk is the risk that a model will be poorly specified. The question is not so much whether a model is “right” as whether it is “useful.” As Box and Draper (1987, p. 424) observed:

Essentially, all models are wrong, but some are useful.

Any meaningful model makes predictions. A well specified model will—if correctly implemented and appropriately used—make generally useful predictions. This is true whether a model is used for predicting earthquakes, forecasting weather or assessing market risk.

Model specification risk arises primarily when a model is designed—when inputs are operationally defined, mathematical computations are specified, and outputs are indicated, preferably all in a formal design document. While careless errors such as misstating a formula are always a problem, a more general issue is the fact that a particular combination of inputs and computations may yield outputs that are misleading or otherwise not useful. While an experienced modeler may review a design document and make some assessment of how well the model might perform, the ultimate test is to implement the model and assess its performance empirically.

Model specification error also exists after a model is designed. The model itself may be modified from time to time, perhaps to address new traded instruments, fix a perceived problem with its performance, or to remain current with industry practices. It can also arise without any obvious changes to a value-at-risk measure. For example, historical values for a specific key factor may be drawn from a particular time series maintained by a data vendor. If the data vendor changes how it calculates values of that time series, or loosens quality controls, this might impact the value-at-risk measure’s performance.

13.2.2 Type B: Model Implementation Risk

Model implementation risk is the risk that a model, as it is implemented, strays from what is specified in the design document. Inputs may differ: perhaps historical data for a key factor is obtained from a source other than that specified in the design document. Mathematical formulas may differ: this can arise from simple typing errors, or a programmer may implement a formula in a manner that inadvertently alters it. Outputs can be misrepresented: perhaps two numbers are juxtaposed in a risk report, or outputs are presented in a manner that is confusing or misleads.
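A one-character slip of this kind is easy to illustrate with the exponentially weighted variance update used in many value-at-risk measures (the λ = 0.94 decay factor follows RiskMetrics; all other numbers below are made up). The point of the sketch is that the mangled formula still runs without complaint:

```python
# The design document specifies an exponentially weighted variance update:
#     var_new = lam * var_old + (1 - lam) * r**2
# Dropping one pair of parentheses yields code that runs without error
# but implements a different formula.

lam, var_old, r = 0.94, 0.0004, 0.03  # illustrative values; lam = 0.94 as in RiskMetrics

correct = lam * var_old + (1 - lam) * r ** 2
buggy = lam * var_old + 1 - lam * r ** 2  # parentheses dropped

print(correct)  # ~0.00043
print(buggy)    # nearly 1.0 -- wildly wrong, yet no error is raised
```

No compiler or runtime check will catch this; only testing against independently computed values will.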

I recall once attending a bank’s risk committee meeting. A printout of the risk report was distributed, and I noticed that the exposure indicated for a particular category of risks was zero. I asked about this, as I knew the bank took such risks. No one knew why the number was zero. I went back and looked at past risk reports and found that the number was always zero. The system was calculating that exposure, but the result never made its way into the risk report. Somehow, zero had been hard-coded into that particular field of the risk report.

Implementation error can cause a good model to be implemented as a bad model, but that is just part of the problem. If the implemented model strays from its design—whether it performs well or not—it is a different model from what the designer and users believe it to be. They think they have one model, when they actually have another. Results may be unpredictable.

Implementation error generally results from human error. With large software projects, some coding or logic errors are all but inevitable. Implementation risk also arises if dishonest programmers decide to cut corners. Unfortunately, such dishonesty occurs, but it is not common on large value-at-risk implementations subject to a rigorous testing and validation process.

13.2.3 Type C: Model Application Risk

Model application risk is the risk that a model will be misinterpreted or used in a manner for which it is unsuited. Misinterpretation can produce a false sense of security or prompt actions that are inadvisable. There are many ways a model can be misused.

A risk manager once brought me in to consult for a day. He wanted me to explain to his CFO how the firm suffered a market loss of USD 22MM when their 95% value-at-risk had been reported as USD 5MM. The loss had wiped out an entire year’s profits, and the CFO was trying to shift blame to a “flawed” value-at-risk system. The risk manager had installed the system a few years earlier, so he was taking some heat.

I sat with the CFO—Machiavelli reincarnated, if I am not mistaken—and explained that the USD 22MM loss was for the entire month of March. The value-at-risk system reported one-day value-at-risk. He looked at me blankly. I asked him if it was reasonable to expect monthly P&L’s to be of a greater magnitude than daily P&L’s, since markets typically move more in a month than in a day. He accepted this, so I had him take out his calculator. I explained that, if we assumed 21 trading days in a month, we could approximate the one-month value-at-risk as the one-day value-at-risk multiplied by the square root of 21. He typed the numbers into his calculator, and the result was USD 23MM. Whatever the man’s blame-shifting motives may have been, he was impressed with the math. He cracked a smile.

I was fortunate the numbers worked out so nicely. Politics aside, the CFO learned a lesson that day. He really had misunderstood what the value-at-risk measure was telling him.
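The CFO’s calculator exercise is easy to reproduce. A minimal sketch, using the USD 5MM figure from the anecdote:

```python
import math

one_day_var = 5.0  # reported one-day 95% value-at-risk, USD MM

# Square-root-of-time approximation: with 21 trading days in a month
# (and i.i.d. daily returns implicitly assumed), one-month value-at-risk
# is approximately one-day value-at-risk times sqrt(21).
one_month_var = one_day_var * math.sqrt(21)

print(round(one_month_var))  # 23, matching the USD 23MM on the calculator
```

The square-root-of-time rule is itself only an approximation, but it suffices to show that a one-day and a one-month figure are not comparable.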

Modelers tend to have different backgrounds from traders or senior managers. We think in different ways. While designers of a model understand the meaning of its outputs on a technical level, users may have a less precise, more intuitive understanding. Modelers generally have a better sense of how reliable a model’s outputs are than do users, who may have unrealistic expectations. Also, if they don’t like what a model tells them, users are more likely to let emotions cloud their assessment of the model’s worth.

I once had a trader confide to me that a real-time value-at-risk measure I had implemented was wrong. He had checked his reported value-at-risk, placed a trade, and then noticed that the value-at-risk went down. I asked him if the new trade had been a hedge. When he said “yes,” I explained that the value-at-risk measure didn’t quantify the risk of each trade and then sum the results. It analyzed correlations to capture hedging and diversification effects. In this way, the value-at-risk measure understood that his last trade had been a hedge, which is why the value-at-risk had gone down. The trader stared at me uneasily, as if I were some used car salesman.
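The effect that unsettled the trader can be reproduced with a parametric (variance-covariance) value-at-risk measure. The exposures, volatilities and correlation below are hypothetical:

```python
import math

z = 1.645  # one-tailed 95% quantile of the standard normal

def portfolio_var(w1, w2, s1, s2, rho):
    # Parametric value-at-risk for two positions with exposures w1, w2,
    # daily volatilities s1, s2 and correlation rho.
    variance = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * s1 * s2 * rho
    return z * math.sqrt(variance)

# Original position: USD 10MM exposure with 2% daily volatility.
before = portfolio_var(10.0, 0.0, 0.02, 0.02, -0.95)

# Add an USD 8MM position that is highly negatively correlated: a hedge.
after = portfolio_var(10.0, 8.0, 0.02, 0.02, -0.95)

print(before, after)  # the second number is smaller: the trade reduced risk
```

Because the measure works from the covariance of the combined positions rather than summing per-trade risks, adding the hedge lowers the reported value-at-risk.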

A common misperception with value-at-risk is a belief that it represents a maximum possible loss. It is nothing of the sort: losses exceed a 95% value-at-risk on roughly one trading day in twenty. Still, in the scheme of things, a quantile-of-loss is easier to grasp than other PMMRs, such as expected tail loss or standard deviation of return.

A common form of model application risk is use of an otherwise good value-at-risk measure with a portfolio for which it is unsuited. This might happen if an organization’s trading activities evolve over time, but their value-at-risk measure is not updated to reflect new instruments or trading strategies.

Alternatively, the environment in which a model is used may evolve, rendering its predictions less useful—the model was well specified for the old environment, but now it is being misapplied in a new environment. For example, a value-at-risk measure should employ key factors related to liquid, actively traded instruments. But markets evolve; liquidity can dry up. A choice of key factors that is suitable today may not be suitable next year. Consider how the forward power market dried up in the wake of the 2001 Enron bankruptcy, or how the CDO market collapsed following the 2008 subprime mortgage debacle.


Categorize the following situations as relating to Type A, Type B or Type C model risk. Some relate to more than one category. Discuss each situation.

  1. A trading organization implements a value-at-risk measure for an options portfolio. It constructs a remapping based on the portfolio’s deltas and then applies a linear transformation.
  2. A firm calculates value-at-risk each day and includes it in a risk report, which is available to traders and management through the firm’s intranet. Most have never figured out how to access the risk report.
  3. A value-at-risk measure employs many key factors. Some of them are prices of certain novel instruments. A few years after the value-at-risk measure is implemented, turmoil in the markets causes trading in those instruments to cease. Without data, the value-at-risk measure is inoperable.
  4. An equity portfolio manager asks his staff to calculate his portfolio’s one-year 99.9% value-at-risk. This relates to a loss that should be exceeded only once in a thousand years, but most of the stocks in his portfolio haven’t existed for thirty years. With inadequate data, his staff cobbles together a number. The portfolio manager then asks his staff to calculate what the portfolio’s value-at-risk would be if he implemented various put options hedges. The staff has no idea how to extend their crude analysis to handle put options, but they don’t tell him this. Instead, they simply make up some value-at-risk numbers. The portfolio manager reviews these and implements a put options strategy whose cost he feels is warranted by the indicated reduction in value-at-risk.
  5. A firm implements its own one-day 97.5% USD value-at-risk measure, which employs a Monte Carlo transformation. A programmer coding the measure inadvertently hits an incorrect key, which causes the measure to always calculate the 94.5% quantile of loss instead of the intended 97.5% quantile of loss.
  6. A firm bases traders’ compensation on their risk-adjusted P&L’s. The risk adjustment is calculated from the traders’ value-at-risk, which is measured daily based on the composition of their portfolios at 4:30PM. The head of risk management eventually notices that traders hedge all their significant exposures with futures just before 4:30PM each day—and then lift the hedges, either later in the evening or early the next morning.




Chapter 13

Model Risk, Testing and Validation

13.1 Motivation

Between 2000 and 2001, National Australia Bank took write-downs totaling USD 1.2 billion on its US mortgage subsidiary HomeSide Lending. The losses were attributed to a series of errors in how the firm modeled its portfolio of mortgage servicing rights. This is not an isolated incident. In finance, models are widely used. Usually, they are formulas or mathematical algorithms. Examples include portfolio optimization models, option pricing formulas and value-at-risk measures. Flaws in such models cause firms to misprice assets, mishedge risks or enter into disadvantageous trades. If traders or other personnel become aware that a model is flawed, they may exploit the situation fraudulently. Flawed risk management models can cause firms to act imprudently—either too aggressively or too cautiously. For banks, they can result in regulators imposing additional capital requirements.

Berkowitz and O’Brien (2002) used daily P&L and value-at-risk data from six major US banks between 1997 and 2000 to assess (backtest) the performance of the banks’ value-at-risk measures. They concluded that

Banks’ 99th percentile value-at-risk forecasts tend to be conservative, and, for some banks, are highly inaccurate.
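In its simplest form, a backtest of this kind reduces to counting exceedances. The sketch below applies the idea to simulated standard-normal P&L’s, for which the true 99% quantile of loss is known:

```python
import random

random.seed(42)

# Simulate 1,000 days of P&L from a standard normal distribution, whose
# true 99% quantile of loss is about 2.326. A backtest counts the days
# on which losses exceed reported value-at-risk; a well specified 99%
# measure should be exceeded on roughly 1% of days.
var_99 = 2.326
days = 1000
pnl = [random.gauss(0, 1) for _ in range(days)]

exceedances = sum(1 for x in pnl if -x > var_99)
print(exceedances, exceedances / days)  # expect roughly 10 exceedances
```

A conservative measure, as Berkowitz and O’Brien describe, would show markedly fewer exceedances than the quantile implies; an inaccurate one, markedly more.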


12.7 Further Reading – Implementing Value-at-Risk


Value-at-risk implementations differ significantly from one market to the next. Examples in this book illustrate techniques applicable to various markets. For some other markets, see

  • agriculturals – Wilson, Nganje and Hawes (2007)
  • credit default swaps – O’Neil (2010)
  • energies – Dahlgren, Liu and Lawarrée (2003) and Vasey and Bruce (2010)
  • mortgage-backed securities – Han, Park and Kang (2007)
  • shipping – Abouarghoub (2013)

More generally, Levine (2007) provides a high-level discussion of implementing risk management systems, primarily for the capital markets. Vasey and Bruce (2010) is a wonderful book on sourcing vendor trading and risk management software. While it targets the energy trading industry exclusively, professionals in other industries will benefit from many of its practical insights.

Leffingwell and Widrig (1999) is a general book for IT professionals on discovering and documenting systems requirements. Use cases are discussed by Cockburn (2000) and Schneider and Winters (2001). Larman (2003) offers an overview of agile software development methods. The specific methods of Scrum, XP, and Test-Driven Development are discussed in Schwaber and Beedle (2001), Beck and Andres (2004), and Beck (2002), respectively.

12.6 Implementation


Actual implementation of the value-at-risk system will be a different process depending on whether you install vendor software or internally develop your own system. However, since most implementations combine some vendor software and some internally developed software, your own implementation is likely to blend the two processes.

12.6.1 Internally Developed Software

The first step in internally developing value-at-risk software is for quantitative finance professionals to work with IT professionals to develop a design document. This will be the primary point of communication between business and IT professionals on the design team. The business professionals will provide the detailed know-how on the algorithms and processes required, while the IT professionals will provide expertise on how to best partition the technical solution.

The purpose of the document is to detail all inputs, mathematical formulas and outputs for the value-at-risk system. The document should be expressed in non-technical language to the extent possible, so as to be accessible to both finance and IT professionals. Authors should draw on both the requirements document, which should already exist, and this book, which provides detailed information on the analytics one might incorporate into a value-at-risk system. If authors adopt the terminology and notation of this book, then the book can serve as a reference to support the design document.

If use cases are employed for the requirements process, the design documentation will be driven from the use cases. The design process will consist of expanding each use case with design details.

After this design document is finished, the authors will need to be available on an ongoing basis to answer the inevitable questions that arise during the implementation. You can expect that the requirements and design of the system will undergo revisions throughout the implementation process. Strive to make the requirements document complete, but plan for the “80-20 rule”—80% of the requirements will be defined in the requirements phase of the project, while 20% of the requirements will be discovered during the design and implementation process. For this reason, it is critical that the business experts participate in frequent reviews throughout all phases of development.

12.6.2 Agile Software Development Methods

There are a number of agile software development methods (e.g., Scrum, Extreme Programming (XP), Test-Driven Development) that have become popular for developing complex systems such as value-at-risk solutions. These methods can be effective for value-at-risk development because they employ frequent, focused software releases to deliver the solution incrementally.

The primary benefit to an agile approach is the immediate feedback and business value of each release. By delivering smaller software releases sooner, the firm realizes business value more quickly and also is able to adjust for missing/incorrect software requirements earlier rather than later in the development process.

There are only certain contexts where agile methods can be used effectively, so development teams pick and choose various aspects of these methodologies for development. An experienced team is needed for any of these methods, so you will need to consult your IT team to determine if agile methods make sense for your implementation.

12.6.3 Vendor Software

Vendors will generally take responsibility for implementing their own software, but your implementation team should be an active participant in the process. Vendor software is often configurable in different ways, and team members should be active in deciding how their system should be configured. They should receive training in the vendor software before it is implemented, so they are fully versed in its features and configuration options.


12.5 Build vs. Buy


The question of whether to build or buy a system is often more about what to build and what to buy. If the value-at-risk implementation is occurring as part of a larger implementation of front-, middle- and/or back-office software, there is the question of whether to purchase all the technology from a single vendor or to select different components from different vendors. In either case, it is likely to make sense to implement some components or some interfaces between systems internally. A firm that builds its own system internally will still want to incorporate vendor toolsets, add-ons or database software into the system.

Depending on the industry and intended application for the value-at-risk system, there may be numerous vendor systems available, or there may be few. The best way to address the “what to build and what to buy” question is to research available vendor software and assess what might reasonably be incorporated into the planned system.

12.5.1 Vendor Software

The market for vendor software is fragmented both by application (limits monitoring, portfolio optimization, corporate reporting, etc.) and by mix of asset types (equities, natural gas, soft commodities, etc.). A value-at-risk implementation suitable for a commodities wholesaler to monitor value-at-risk limits would not be useful for, say, a portfolio manager interested in optimizing an equity portfolio. The problem is that user interfaces, key factors, remappings and ancillary analytics differ too much from one market and/or application to another. This market fragmentation, combined with the fact that budgets are sometimes modest, means that many market segments are not well served by vendors.

Banks and other large financial institutions installing systems for regulatory or internal reporting purposes have plenty of excellent vendor software to choose from. Motivated largely by regulatory requirements under the Basel Accords, financial institutions have a compelling need. Their requirements are fairly uniform, and they have large budgets for purchasing the software. Solutions are typically bundled with complete front-to-back office applications.

Energy trading firms installing systems for internal monitoring are also reasonably well served by vendor software, but quality is not as good. These firms don’t have an explicit regulatory requirement to monitor value-at-risk. Generally, it is auditors, rating agencies and counterparties that expect them to do so. With the need less explicit, budgets are smaller. As with banks, value-at-risk software is generally purchased bundled with other front-, middle- and/or back-office applications. In many cases an afterthought, the value-at-risk software that comes with these bundles can be rudimentary.

Another segment that is reasonably well served by vendor software is institutional investors and investment managers implementing software for asset allocation and portfolio optimization. The software tends to be a stand-alone package, perhaps bundled with a subscription for data and updates. It generally isn’t promoted as value-at-risk software, but it does have a value-at-risk measure embedded in its analytics.

12.5.2 Choosing Vendor Software

Your search for vendor software should start with a list of possible vendors. Consult industry contacts, vendor directories, or the Internet to identify candidates. You can initially screen vendors by turning to their websites. Once you have narrowed your list, send out a request for proposal (RFP).

The RFP should solicit information about the vendor’s ability to meet your requirements, as well as the cost and timeline for doing so. You need specific information about the software’s existing features. Be wary of functionality a vendor promises to implement in the future—it’s called “vaporware” for a reason. Ask how a vendor’s software is licensed. How is it documented? How can it be modified or customized by the user?

You also need detailed information on the vendor. How many years have they been in business? How are they owned? What is their existing installed base? What is their financial condition? Ask for references.

Send out the RFP to your list of vendors. While you await responses, go through the exercise of replying to your own RFP to assess your firm’s ability to implement software meeting the specified requirements internally.

Once you receive responses to the RFP, you can further narrow your options. At this point, you should contact references—not only those provided by the vendor but any you have discovered in your research. Vendors sometimes identify clients on their websites or in other promotional materials. If you call around to contacts you have at other firms, you should be able to find existing or past users of any established vendor’s software.

The final step before settling on a vendor is to conduct vendor interviews along with product demonstrations. A vendor may send personnel to your offices, but it is preferable that your personnel visit theirs. Your team should include quantitative finance professionals who can discuss the value-at-risk analytics as well as IT professionals who can address systems issues. Visiting a vendor’s offices will allow you to assess the corporate culture, employee morale and other intangibles.

It is an unfortunate fact that good marketing is more effective for selling vendor software than is good analytics. Horror stories of software with fancy user interfaces backed by rudimentary analytics abound. Ask to see the software documentation, if it has not already been provided. Ask plenty of technical questions. Don’t accept bait-and-switch promises of proprietary analytics offering something better than value-at-risk. It isn’t. Don’t accept undocumented black-box software. There really aren’t any trade secrets in value-at-risk technology, so vendors should have nothing to hide.

Do not rule out small or start-up firms. They may have cutting-edge solutions you need, whereas more established firms may be saddled with yesterday’s technology. But keep in mind the possibility that a vendor might fail or be acquired. Even if a firm doesn’t fail, poor cash flow may prevent it from upgrading its software or providing adequate client support. Use the interview process to address these issues. Ask about provisions for these contingencies. It is, for example, not unheard of for small vendors to place their source code in escrow with a law firm to be made available to clients in the event they fail.

If any references were dissatisfied with the vendor, ask the reference if you may raise their experience with the vendor. Don’t rule out a vendor because of one or two bad incidents, but assess whether the vendor learned from mistakes and made changes to avoid similar situations in the future.

Do more than observe the product demonstration. Participate. Ask to see specific screens or functionality described in the vendor’s product literature. The product demonstration is one of the most important steps in selecting vendor software. If it goes poorly in the controlled environment of the vendor’s offices, that is a clear warning sign.

Once the interviews and product demonstrations are complete, follow up with any final questions, and then select a vendor—or opt to develop the system internally.


12.4 Functional Requirements


Once the purpose of a planned value-at-risk measure is known, functional requirements can be drafted. These are the user’s requirements. They relate to inputs, analytics, outputs, interfaces with other systems, and audit trails. These motivate technology requirements, which are the information technology professional’s requirements. They address architecture, open standards, security, redundancy, and regulatory requirements applicable to financial systems. Below, we focus on functional requirements.

Requirements don’t have to be written all at once. High-level requirements may be sufficient to launch a vendor software search to “find what’s out there.” That search will help the implementation team refine the requirements, which should be fairly specific before any vendor software is purchased. If the search for vendor software is unsuccessful, detailed requirements can be drafted for an internally implemented system.

12.4.1 Requirements Format

There are many approaches for capturing and documenting requirements. One approach employs use cases. A use case describes how a human or external system (known as actors in the use case terminology) interacts with the system in question. By definition, a use case defines a system feature that delivers business value.

Use cases are an excellent medium for describing the functional behavior of value-at-risk systems because they are easy for both business users and IT developers to understand. Use cases can also provide a natural segmentation for phasing the software development. Once use cases are defined, the business team can prioritize them and work with the IT team to define software releases that consist of collections of use cases.

Another benefit of use cases is that they provide a strong basis for subsequently testing a value-at-risk system. Defining and documenting a test plan for the completed value-at-risk solution can be a laborious process. Because use cases describe usage scenarios, they can be a starting point for test plans and enable the test team to quickly come up to speed on the expected behavior of the system.

12.4.2 Prototypes

As part of the requirements definition process, prototypes or proof-of-concept implementations can be useful. It is difficult to capture and communicate complex system requirements in the written word, so prototypes are valuable for ensuring that everyone understands the ultimate requirements for the value-at-risk system.

Different approaches can be used for prototypes. The form of the prototype is less important than communicating the system requirements effectively. Some examples of effective prototyping strategies include:

  • Developing Excel models that show concrete value-at-risk calculation examples
  • Creating “wire frame” mockups using Visio or PowerPoint that show sequences of user actions and system results
  • Describing output screens or reports using HTML or Microsoft Word documents

These prototypes, in conjunction with formal requirements, can provide a clear vision of the value-at-risk solution to be implemented.
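A concrete-example prototype need not be elaborate. The sketch below plays the same role as the Excel models mentioned above; the positions, volatilities and perfect-correlation assumption are all made up, prototype-grade simplifications:

```python
import math

# Hypothetical positions (USD MM) and daily volatilities; the perfect-
# correlation assumption (exposures simply add) is a deliberate
# prototype-grade simplification.
positions = {"10Y Treasury": 25.0, "S&P futures": 10.0}
daily_vol = {"10Y Treasury": 0.004, "S&P futures": 0.012}

z = 1.645  # one-tailed 95% quantile of the standard normal
var_95 = z * sum(positions[k] * daily_vol[k] for k in positions)

print(round(var_95, 3))  # USD MM
```

A dozen lines like these, walked through with users, can surface misunderstandings about inputs, analytics and outputs long before any system is built.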

However it is prepared, the requirements document should provide a comprehensive assessment of how the value-at-risk measure is to be used:

  • on a day-to-day basis by primary end-users,
  • by other users, perhaps less frequently, or
  • by systems interfaced with the value-at-risk system.

Below, we discuss some issues to consider.

12.4.3 Instrument Coverage

What instruments should the value-at-risk measure cover? A firm that trades in fixed income, equities and foreign exchange may only want a system for its fixed income portfolio. If it wants to calculate value-at-risk for all portfolios, does it want to implement a single system to cover them all or separate systems for each? Politically, separate departments may want to each “own” their system, but implementing three systems will take more time and cost more money than implementing a unified system. Also, a unified system could calculate firm-wide value-at-risk, taking into account hedging or diversification effects across asset classes.
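The diversification point can be made concrete with a parametric calculation. The desk-level standard deviations and the correlation matrix below are hypothetical:

```python
import math

z = 1.645  # one-tailed 95% quantile of the standard normal

# Hypothetical daily P&L standard deviations (USD MM) for three desks
# and a hypothetical correlation matrix across them.
stdevs = [1.0, 0.8, 0.5]
corr = [[1.0, 0.3, 0.1],
        [0.3, 1.0, 0.2],
        [0.1, 0.2, 1.0]]

# Three separate systems: each reports its own value-at-risk, and the
# only available aggregate is the sum.
standalone_sum = z * sum(stdevs)

# A unified system: aggregate the variance using the correlation matrix.
firm_variance = sum(stdevs[i] * stdevs[j] * corr[i][j]
                    for i in range(3) for j in range(3))
firm_wide = z * math.sqrt(firm_variance)

print(standalone_sum, firm_wide)  # firm-wide value-at-risk is smaller
```

Unless the desks are perfectly correlated, the firm-wide figure is smaller than the sum of standalone figures, which is precisely the hedging and diversification effect that separate systems cannot capture.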

If a firm holds illiquid assets such as private equity, real estate or power generating facilities, a determination must be made as to the extent, if any, these will be included in the portfolio for value-at-risk analyses. Recall the distinction between market risk and business risk from Chapter 1. Market risk is risk due to uncertainty in instruments’ market values. If an asset or liability is so illiquid that it cannot regularly be marked to market, by definition, it entails no market risk. It may entail business risk, but value-at-risk does not apply. That being said, many firms whose portfolios largely comprise liquid instruments may ascribe mark-to-model values to illiquid instruments for the purpose of incorporating them into the overall value-at-risk measure. Done on a very small scale this may be reasonable. However, if mark-to-model assets comprise more than 5% of a portfolio’s value, value-at-risk measurements can become distorted. An example of this is energy merchants who, around the turn of the century, implemented value-at-risk for spot and forward energy contracts. Some also ascribed mark-to-model values to pipelines, power lines, natural gas wells and power plants, so they could be incorporated into value-at-risk analyses. The economic value of those assets swamped the market value of the liquid instruments. The daily fluctuations in market value ascribed to them by the mark-to-model valuations were all but meaningless. The resulting value-at-risk measurements were worse than useless. They were misleading and contributed to poor decision making.

12.4.4 Frequency of Calculation

Large financial institutions tend to calculate value-at-risk at the end of each trading day. In a typical arrangement, calculations are performed overnight, so the value-at-risk results are available the next morning. Trading floors have many computers at their disposal, but that processing power supports trader analytics and pricing models during the day. At night, the computers tend to be idle, making that a good time to run value-at-risk analyses.

Value-at-risk can be monitored on an intraday basis. Continuous monitoring has value-at-risk calculated at closely spaced intervals, say every ten seconds or every minute. Discrete monitoring is less frequent, say every hour. Query-based monitoring occurs, not at scheduled times, but on a user-initiated basis. For example, a trader might want to assess the impact on his portfolio’s value-at-risk of a proposed trade.

If a firm engages in day trading—speculatively trading liquid instruments during the day but closing out positions each evening—end-of-day value-at-risk won’t reflect the market risks the firm is taking. In that case, intra-day value-at-risk analysis may be appropriate.

Intra-day monitoring can pose its own problems, especially if the complexity of a portfolio precludes continuous monitoring. Consider the situation of a derivatives dealer. Throughout the day, its traders put on large derivatives transactions for clients and then proceed to delta hedge with futures or otherwise offset the exposures. Between the time when a client trade is placed and a hedge is applied, the portfolio’s value-at-risk will briefly be elevated, likely exceeding applicable end-of-day value-at-risk limits. Discrete intraday monitoring of such a portfolio would be frustrating. By chance, some of the points at which value-at-risk is calculated will fall in the interval between a trader placing a client trade and hedging it, giving the false impression that the trader is taking inappropriate risk.

Obviously, functional requirements need to be reasonable. Real time or continuous monitoring may be too computationally burdensome to be feasible. Some banks’ portfolios are so complicated that it takes hours to calculate their value-at-risk once in an overnight run, and that is with networked computers sharing the load.

Real-time continuous monitoring is easy if a portfolio holds exclusively linear instruments, say futures or forwards. A linear value-at-risk measure will run in real time with no compromise on accuracy.
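For a linear portfolio, value-at-risk reduces to a closed-form calculation from the position vector and the covariance matrix of key-factor returns, which is why real-time monitoring is feasible. Below is a minimal sketch; the positions, covariance matrix and 95% quantile are illustrative assumptions, not figures from the text:

```python
import math

# Illustrative one-day covariance matrix of key-factor returns (assumed values).
covariance = [[0.0004, 0.0001],
              [0.0001, 0.0009]]

def linear_var(positions, cov, z=1.645):
    """One-day quantile-of-loss value-at-risk for a linear portfolio.

    P&L is a linear polynomial of jointly normal factor returns, so its
    standard deviation follows directly from the covariance matrix, and
    value-at-risk is that standard deviation scaled by the quantile z
    (1.645 for the 95% quantile of a standard normal).
    """
    variance = sum(positions[i] * cov[i][j] * positions[j]
                   for i in range(len(positions))
                   for j in range(len(positions)))
    return z * math.sqrt(variance)

var_95 = linear_var([1_000_000, -500_000], covariance)
```

Because each update is only a handful of multiplications, the calculation can be rerun on every position change with no compromise on accuracy.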

If a firm trades nonlinear instruments, say calls or swaptions, it cannot use a linear value-at-risk measure. Formally, a linear remapping can be applied to a non-linear portfolio, followed by a linear transformation, but the output won’t be “approximate” so much as “wrong.” Consider the case of a delta-hedged, negative-gamma portfolio. Its market risk could be extreme if the gamma is negative enough. But a linear remapping would treat it as a delta-hedged portfolio with zero gamma. Ignoring vega effects,1 it would calculate the portfolio’s value-at-risk as zero, irrespective of the magnitude of the gamma.
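The failure mode can be made concrete with a toy Monte Carlo comparison. The sketch below values a delta-hedged position two ways: with a delta-only (linear) P&L, which reports zero loss in every scenario, and with a delta-gamma P&L, which reveals the risk. The delta, gamma and volatility figures are illustrative assumptions:

```python
import random

random.seed(42)
delta, gamma = 0.0, -50_000   # delta hedged, strongly negative gamma (assumed)
sigma = 0.01                  # one-day factor change volatility (assumed)

linear_losses, quadratic_losses = [], []
for _ in range(100_000):
    dx = random.gauss(0.0, sigma)
    linear_losses.append(-(delta * dx))                          # linear remapping
    quadratic_losses.append(-(delta * dx + 0.5 * gamma * dx**2)) # delta-gamma P&L

def var_95(losses):
    """95% quantile-of-loss from simulated losses."""
    return sorted(losses)[int(0.95 * len(losses))]

# The linear remapping reports zero value-at-risk irrespective of gamma;
# the delta-gamma P&L exposes the real loss potential.
linear_result, quadratic_result = var_95(linear_losses), var_95(quadratic_losses)
```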

Value-at-risk can be calculated rapidly for non-linear portfolios using a quadratic value-at-risk measure. To mitigate the significant error the quadratic remapping may introduce, it may be necessary to adopt a low quantile-of-loss value-at-risk metric with a short horizon—say 90% one-day value-at-risk. The most time-consuming part of a quadratic value-at-risk computation is constructing a quadratic remapping. In practice, this requires multiple revaluations of the portfolio. Portfolios that hold only vanilla instruments, such as calls or caps, can be valued rapidly, perhaps in real time. Others, such as those holding numerous exotic derivatives or securitizations, can take much longer to value.
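A quadratic remapping replaces the portfolio’s true value function with a second-order polynomial, which is why constructing it requires several portfolio revaluations. A minimal one-factor sketch, fitting the polynomial by central finite differences around the current factor value (the value function here is a hypothetical stand-in for a real, slow pricing library):

```python
def portfolio_value(x):
    """Hypothetical full revaluation of the portfolio at factor value x."""
    return 100.0 - 40.0 * (x - 1.0) ** 2   # assumed non-linear value function

def quadratic_remap(value_fn, x0, h=1e-3):
    """Fit v0 + delta*(x-x0) + 0.5*gamma*(x-x0)**2 from three revaluations."""
    v0, v_up, v_dn = value_fn(x0), value_fn(x0 + h), value_fn(x0 - h)
    delta = (v_up - v_dn) / (2 * h)                 # central first difference
    gamma = (v_up - 2 * v0 + v_dn) / h ** 2         # central second difference
    return lambda x: v0 + delta * (x - x0) + 0.5 * gamma * (x - x0) ** 2

# Three expensive revaluations buy a cheap polynomial that can stand in
# for value_fn throughout the value-at-risk calculation.
approx = quadratic_remap(portfolio_value, x0=1.1)
```

With many key factors, the number of revaluations grows with the number of deltas and gammas to be fitted, which is what drives the run times mentioned above.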

Another alternative for rapidly calculating value-at-risk for non-linear portfolios is to use a Monte Carlo value-at-risk measure combined with a holdings remapping and variance reduction. Unlike a quadratic value-at-risk measure, this solution entails no compromise on accuracy, and it can be used with any value-at-risk metric. Run times will surely be longer than those of a quadratic value-at-risk measure if the variance reduction entails its own embedded quadratic value-at-risk measure. However, for some portfolios, especially those holding exclusively vanilla instruments, run times should be brief.

With discrete intraday monitoring, the individual value-at-risk analyses do not have to be spaced far enough apart to allow one to be completed before commencing the next. Some or all of the analyses can be deferred and performed overnight, when computers are otherwise idle. A detailed graph of how the portfolio’s value-at-risk evolved during the day would be available the next morning.

12.4.5 Inputs

Production value-at-risk measures require as inputs

1. current and historical values for key factors, and

2. the portfolio’s composition.

The former may already be sourced by the firm for other purposes, or a source may need to be identified. The portfolio composition will be captured from the contract management system. That information may be usable as-is, or it may need to be supplemented. Depending on its source, assess the need for data to be filtered and cleaned. Indicate whether inputs will be automated or manual.

The requirements document needn’t identify specific key factors to be used or describe what information is required on individual holdings. That is done in the design document.

12.4.6 Outputs

In addition to reporting value-at-risk for the overall portfolio, the system may need to report value-at-risk for individual trading books, traders or other sub-portfolios.

How will outputs be disseminated? If they will be included in a daily risk report, will they be transferred into that report manually, or will this be automated?

Outputs should include data to be used in backtesting the system. See Chapter 14.

12.4.7 Interfaces

Many value-at-risk systems need to interface with multiple other systems to obtain inputs and provide outputs. Inputs include historical market data, current market data and portfolio holdings. Different outputs may be required for front office, middle office, back office, accounting, compliance and audit functions. Outputs must also be archived for backtesting purposes.


Below are described several portfolios. Explain any particular challenges or considerations each might pose for implementing a value-at-risk system.

  1. A pension plan has its assets invested 40% in equities, 30% in fixed income and 30% in commercial real estate.
  2. A limited partnership actively trades commodity futures during the day but mostly closes out positions at night.
  3. An investment bank holds a sizeable inventory of equities, including initial public offerings it has just brought to market.
  4. A wealthy individual collects fine paintings and wants to quantify the market risk in her collection.



12.3  Purpose

The military doesn’t acquire a new transport plane without considering its purpose—do they need a heavy-lift aircraft to fly into established airports, or do they require a lighter plane to deliver small payloads to unfinished drop zones? A value-at-risk measure, like a transport plane, is a tool. Before we implement one, we need to know its purpose. This might be:

  • Regulatory reporting and regulatory capital adequacy requirements under the Basel Accords: The prescribed value-at-risk metric is 10-day 99% value-at-risk. Due to the practical challenges of modeling intra-horizon events, this is calculated as one-day 99% value-at-risk and then scaled by the square root of ten—essentially assuming a static portfolio for ten days and independent daily P&Ls.
  • Internal reporting, economic capital adequacy requirements and value-at-risk limits within financial institutions and other trading organizations: These are different applications, but a single value-at-risk measure is often implemented to support two or all three of them. Various quantile-of-loss value-at-risk metrics are used.
  • Corporate disclosures: Item 305 of the SEC’s Regulation S-K requires that large corporations disclose certain qualitative and quantitative information on market risks arising from interest rates, foreign exchange, commodities, and other sources. Quantitative information can be presented as tabular data on individual positions, a sensitivity analysis, or quantile-of-loss value-at-risk.
  • Quantitative asset-allocation or portfolio optimization models: These generally incorporate a value-at-risk measure, although it may not be called that. Harry Markowitz’s pioneering work in the 1950s employed linear value-at-risk measures with either a one-year variance of return or one-year standard deviation of return value-at-risk metric. Today, these techniques are sometimes called risk budgeting.
  • Optimizing tracking error of investment portfolios against a benchmark: A standard deviation of return value-at-risk metric might be used. The value-at-risk measure is applied, not to the portfolio, but to the portfolio combined with a short position in the benchmark. Tracking error is optimized by adjusting the portfolio’s composition, balancing value-at-risk against the portfolio manager’s desire to make active bets.
  • Hedge optimization: Situations arise where some liquid instrument is used to hedge some less liquid position. If the correlation between the two is imperfect, some market risk will remain. The question arises as to how many units of the hedging instrument will provide the optimal hedge. This can be answered by applying a value-at-risk measure to the total position (original position plus the hedge) and adjusting the hedge to determine the number of units of the hedging instrument that minimizes the value-at-risk. One-day standard deviation of P&L is a typical value-at-risk metric for this purpose, but a quantile of loss might also be used.
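The hedge optimization described in the last item can be sketched numerically. With a standard deviation of P&L metric and linear exposures, the optimum also has a closed form, h* = −ρσ_pos/σ_hedge, which the scan below recovers; the volatilities and correlation are illustrative assumptions:

```python
import math

# Assumed one-day return volatilities and correlation (illustrative).
sigma_pos, sigma_hedge, rho = 0.020, 0.015, 0.8

def portfolio_std(h):
    """Standard deviation of P&L for the original position plus h units of hedge."""
    return math.sqrt(sigma_pos**2 + (h * sigma_hedge)**2
                     + 2 * h * rho * sigma_pos * sigma_hedge)

# Closed-form optimum for the standard-deviation metric.
h_star = -rho * sigma_pos / sigma_hedge

# Numerical search over candidate hedge ratios; with a quantile-of-loss
# metric, the same scan would apply with portfolio_std replaced by the
# value-at-risk measure.
candidates = [i / 1000 for i in range(-3000, 1)]
h_best = min(candidates, key=portfolio_std)
```

Because the correlation is imperfect (ρ = 0.8 here), the minimized standard deviation is positive: some market risk remains even under the optimal hedge.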

Needless to say, different applications may call for very different value-at-risk systems. On one extreme are financial institutions that can spend tens of millions of dollars on a value-at-risk implementation for risk monitoring, value-at-risk limits and/or capital calculations. Much of the expense can be for interfacing with other systems, security, redundancy and coding a model library that can handle all the instruments the institution trades. Coding the actual analytics for calculating value-at-risk can be a more modest task.

At the other extreme are simple spreadsheet value-at-risk measures. One of these might be used each week by a commodities wholesaler to track market risk, or once each reporting period by a corporation to satisfy its disclosure requirements. Spreadsheet value-at-risk measures tend to be linear value-at-risk measures, but add-on software may facilitate Monte Carlo or other analytics. Inputs tend to be manual. Security is minimal—perhaps “locking” the spreadsheet to prevent a user from inadvertently deleting or changing a formula.

This chapter describes how to implement a value-at-risk system, focusing primarily on larger implementations that support financial risk management or regulatory reporting requirements for banks. We present the perspective of a financial risk manager who might be involved in the process.


11.7  Further Reading – Historical Simulation

Hendricks (1996) provides a detailed empirical comparison of inference procedures that employ UWMA, EWMA and historical realizations. Pritsker (2006) assesses historical simulation both theoretically and empirically. Finger (2006) raises concerns with historical simulation.


Chapter 12

Implementing Value-at-Risk

12.1  Motivation

To be useful, a value-at-risk measure must be implemented, perhaps with pencil and paper computations, but more commonly as software. In this chapter, we turn to the topic of implementing a value-at-risk measure. The topic entails little or no math. Human capital and project management skills are important.

12.2  Preliminaries

A large value-at-risk implementation can be a multi-million-dollar technology initiative. Technology initiatives sometimes fail, and the potential for your value-at-risk implementation to fail should be a sobering motivator for going slowly, gaining commitment, involving the right people, and planning meticulously.

12.2.1 Go Slowly

Implementing a large-scale value-at-risk system is likely to take eighteen months to three years. Integrating it with other systems, such as a deal capture system or trade management system, will extend the time frame. The more interfaces that must be built, the longer the implementation time. Internally developing a system, as opposed to purchasing a vendor system off the shelf, may add more time.

Once the system has been installed, it needs to be tested. Bugs need to be fixed. Employees need to be trained. If the system is replacing an older system, it is a good idea to run both systems in parallel for a while.

It is important to manage expectations. Lay out a clear time-line at the outset, and include provisions for additional time, as events may dictate. If the implementation team commits to an overly aggressive schedule, they will feel pressure to cut corners when things fall behind.

12.2.2 Commitment

A large value-at-risk system cannot be implemented without commitment from the highest levels within a firm. Not only may it require a substantial budget, but professionals will have to be diverted from other work to contribute to the effort. Also, it is possible the implementation, or aspects of the implementation, may be challenged by front-office personnel or others whose work may be impacted by the new value-at-risk measure. Addressing such challenges requires political capital that can come only from the highest levels of the organization.

12.2.3 Implementation Team

Planning and implementing a value-at-risk system should be managed by a team of finance and information technology (IT) professionals headed by an experienced project manager. Collectively, team members should have many years of experience with trading, risk management and information technology. Finance professionals should have strong quantitative skills and be fully versed in the mathematics of value-at-risk—i.e. they should have read this book. The team should also include end-users and front-office personnel who will be impacted by the system. Not only will they contribute valuable skills and experience, but their active involvement will help anticipate or avoid political roadblocks.

IT professionals should ideally have prior experience implementing software in a finance or trading environment. Still, in many cases, they will have little or no prior experience with value-at-risk systems. It is important to take time prior to the implementation to train IT professionals in the practical workings of a value-at-risk system. This up-front training should include real-life examples illustrating the mathematics and how the system will be used in practice.

Value-at-risk systems are often implemented as part of a larger initiative to implement other front-, middle- and/or back-office software. That larger initiative will have its own implementation team, which will likely serve as the implementation team for the value-at-risk system. However, if that team does not include individuals with expertise in value-at-risk, it should include additional participants with that expertise when making decisions relating specifically to the value-at-risk system.


7.3.9  Roll-Off Effect

Roll-off effect is an anomaly that arises with UWMA, EWMA and other types of inference procedures. If a fixed window of historical data is used by an inference procedure, there are periodic drops in calculated value-at-risk resulting from extreme historical data points expiring from the window. Suppose a value-at-risk measure uses UWMA based on 100 trading days of historical market data. Suppose further that 99 days ago markets fell sharply. That extreme behavior would be captured in historical data point –99r. With a 100-day window, value-at-risk would be calculated using historical data {–99r, …, –2r, –1r, 0r}, and the extreme data point –99r would tend to boost the result. But today’s data point –99r will be tomorrow’s data point –100r. Today, it is included in the data window, boosting calculated value-at-risk. Tomorrow, it will not be included in the data window, so it will no longer boost calculated value-at-risk. Unless something unexpected happens to offset the effect, tomorrow’s value-at-risk will drop. The extreme data point has “rolled off”, or expired, from the data window.
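The effect is easy to reproduce. In the sketch below, a 100-day UWMA volatility estimate is computed on two consecutive days; the only difference between them is that the extreme return has expired from the window, so the estimate (and any value-at-risk proportional to it) drops. The return series is simulated, so all numbers are illustrative:

```python
import math
import random

random.seed(7)
# Simulated daily returns: quiet markets except one extreme drop that sits
# at the oldest edge of today's 100-day window (99 trading days ago).
history = [random.gauss(0.0, 0.01) for _ in range(100)]
history[0] = -0.15   # the extreme data point

def uwma_vol(window):
    """Equally weighted (UWMA) volatility estimate over the data window."""
    return math.sqrt(sum(r * r for r in window) / len(window))

vol_today = uwma_vol(history[-100:])            # extreme point still in window

tomorrow = history + [random.gauss(0.0, 0.01)]  # one new, unremarkable return
vol_tomorrow = uwma_vol(tomorrow[-100:])        # extreme point has rolled off

# Nothing about the portfolio changed overnight, yet the estimate drops.
drop = vol_today - vol_tomorrow
```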

Roll-off effect can be disquieting for end users when it causes value-at-risk to drop even though nothing has happened to reduce the riskiness of the portfolio. Having to explain the technicalities of roll-off effect to end users can undermine confidence in a value-at-risk system.

Roll-off effect can be mitigated by using more historical data, so each data point receives less weight. EWMA also helps, as it assigns data points little weight by the time they roll off.

While we have described roll-off effect here in terms of its impact on inference procedures, it has a similar impact with historical simulation (discussed in Chapter 11).