13.3 Managing Model Risk

Many front- and middle-office systems entail some degree of model risk, so when you implement a value-at-risk system, model risk should not be a new issue. Your firm should already have considerable infrastructure in place for addressing it. Below, we discuss a variety of strategies, primarily from the perspective of a practitioner.

13.3.1 Personnel

I was once involved in a credit risk model implementation. The credit department lacked people with the necessary quantitative skills, so design work was done by another department. For political reasons, the head of the credit department insisted on retaining control. She inserted herself into the model design process, going so far as to sketch out analytics with her people. These made no sense in the way 1 + 1 = 3 makes no sense. The design team sat through a presentation of her design, thanked her, and then quietly proceeded with their own design.

Perhaps it goes without saying, but an important step in implementing any quantitative model is making sure it is designed by competent individuals. Modeling is a specialized skill that bridges mathematics and real-world applications. Someone with strong math skills may not be a good modeler. There is an old joke about organized crime figures hiring a mathematician to find a way for them to make money at horse betting. After spending a year on this task, the mathematician calls the mobsters together and opens his presentation with “Consider a spherical race horse … ”

Non-quantitative professionals are not qualified to assess an individual’s modeling skills. When hiring modelers, involve individuals with proven modeling skills in the process. Have a policy that quantitative professionals must report to other quantitative professionals with proven modeling skills and the ability to assess work based on its technical merit.

13.3.2 Standard Assumptions and Modeling Procedures

While it is true that any model should be assessed based on the usefulness of its predictions and not on the reasonableness of its analytics, there is a flip side to this. Designing a model to conform to established practices—using proven assumptions and modeling techniques—will decrease model risk. Novice modelers sometimes employ techniques they invent or read about in theoretical or speculative literature. Such innovation is how theory progresses, but it is best left for academia or personal research conducted in one’s free time. When a bank or other trading institution implements value-at-risk, considerable resources are brought to bear, and time is limited. A failed implementation could severely limit the institution, hobbling its risk management—and hence its ability to take risk—for years.

13.3.3 Design Review

Large financial institutions employ teams of financial engineers or other quantitative staff whose full-time job is to review other people’s models. The focus tends to be on asset pricing models, but they can also review risk management models, and especially value-at-risk measures. Banks can expect their regulators to ask specifically how a value-at-risk measure’s design was independently reviewed and to see documentation of that review.

The review should be based on the design document (discussed in Section 12.6.1) that describes the value-at-risk measure. This needs to be a stand-alone document, operationally describing all inputs, outputs and calculations in sufficient detail that a value-at-risk measure can be implemented based on it alone. Do not attempt a review of a design document that is imprecise or incomplete. A value-at-risk measure has not been designed—it cannot be reviewed—until the design document is complete. Review should result in either recommendations for improving the model or approval for the model to be implemented. Keep in mind that the system requirements and design document are likely to evolve during implementation, so additional reviews will be necessary.

13.3.4 Testing

Complex systems such as value-at-risk measures can be difficult to test, so it is critical to define the testing environment and strategies early on in the implementation process. In some cases, creating the test environment may require days or weeks of development and setup, so it isn’t something to leave to the end.

Different forms of testing are performed throughout the implementation of a system:

  • unit testing is done by developers as they code. Its purpose is to ensure that individual components function as they are supposed to.
  • integration testing is done by developers as they finalize the software. Its purpose is to ensure that the components integrate correctly and the system works as a whole. This is the development team’s opportunity to ensure that everything will work properly during the system/regression testing phase.
  • system/regression testing is done by a separate quality assurance (QA) team to confirm that the system meets the functional requirements and works as a whole.
  • stress/performance/load testing is done by developers and sometimes QA to ensure that the system can handle expected volumes of work and meets performance requirements. A system could pass the system test (i.e. meet all the functional requirements) but be slow. This testing ensures the system performs correctly under load.
  • user acceptance testing is done by business units, often with assistance from QA, to confirm the system meets all functional requirements. This is typically an abbreviated form of system/regression testing.
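As a concrete illustration of unit testing, the sketch below exercises a single component in isolation. The component itself, a hypothetical parametric value-at-risk function, is purely illustrative; the point is that it is checked against hand-computed values independently of the rest of the system.

```python
import math

def parametric_var(portfolio_value, volatility, z_score=2.326):
    """One-day parametric value-at-risk at roughly the 99% level.
    A hypothetical component, used here only to illustrate unit testing."""
    return portfolio_value * volatility * z_score

# Unit tests check the component against hand-computed values.
assert math.isclose(parametric_var(1_000_000, 0.01), 23_260.0)
assert parametric_var(0.0, 0.01) == 0.0       # empty portfolio has no risk
assert parametric_var(1_000_000, 0.0) == 0.0  # zero volatility means zero VaR
```

Each of the other forms of testing builds on components that have already passed such checks.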

For value-at-risk applications, a common technique for system/regression testing and user acceptance testing is to build simulators to model aspects of the system that are expensive to replicate or not available in a non-production environment. This allows you to test systems in isolation prior to your final integration efforts. For example, instead of using actual real-time market data feeds, simulators can be developed that simulate sequences of real-time data. In addition to avoiding the use of expensive feeds, this approach gives you the ability to define and repeat certain sequences of data or transactions that test specific conditions in the system. While simulators are not a substitute for integration testing with actual live systems, they can be essential for developing the sorts of thorough testing processes required in value-at-risk implementations.
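A minimal sketch of such a simulator appears below. The function and its parameters are illustrative, not drawn from any actual system; the essential property is that a given seed always reproduces the same tick sequence, so specific test scenarios can be defined and replayed exactly.

```python
import random

def simulated_feed(seed, n_ticks, start_price=100.0, vol=0.001):
    """Replayable stand-in for a real-time market data feed.
    The same seed always yields the same sequence of simulated prices."""
    rng = random.Random(seed)
    price = start_price
    for _ in range(n_ticks):
        price *= 1.0 + rng.gauss(0.0, vol)
        yield round(price, 4)

# The same seed reproduces the identical sequence on every run.
run1 = list(simulated_feed(seed=42, n_ticks=5))
run2 = list(simulated_feed(seed=42, n_ticks=5))
assert run1 == run2
```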

The system’s value-at-risk analytics need to be tested to ensure they reflect the formulas specified in the design document. The recommended approach is to implement a stripped-down value-at-risk measure with the same analytics as in the design document. It may be possible to do this in Excel, Matlab or some similar environment that will facilitate participation by business professionals on the implementation team. When identical inputs are run through it and the production system, outputs need to match. If they don’t, there is a problem with either the test value-at-risk measure or the production value-at-risk measure. Usually, it is with the stripped-down test measure, but each discrepancy needs to be investigated. Even small discrepancies must be addressed. A bug that causes a discrepancy at the eighth decimal place for one set of inputs may cause one at the first decimal place for another.
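The comparison itself can be automated. The sketch below, with a hypothetical numerical tolerance, reports every discrepancy between test-measure and production outputs on identical inputs, however small:

```python
def compare_outputs(test_values, production_values, tol=1e-12):
    """Compare value-at-risk outputs from a stripped-down test measure
    and the production system on identical inputs. Every discrepancy,
    however small, is returned for investigation."""
    discrepancies = []
    for i, (t, p) in enumerate(zip(test_values, production_values)):
        if abs(t - p) > tol:
            discrepancies.append((i, t, p, abs(t - p)))
    return discrepancies

# Matching runs pass; even an eighth-decimal-place difference is flagged.
assert compare_outputs([1.25, 3.50], [1.25, 3.50]) == []
assert compare_outputs([1.25], [1.25000001]) != []
```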

It is critical that you take the time at the outset of implementing a value-at-risk measure to define and budget for the testing processes. Plan for a generous user acceptance testing period. There can be considerable pushback from business units to limit their involvement in this phase, which can be mitigated by combining user acceptance testing with user training.

13.3.5 Parallel Testing

If a new value-at-risk system is replacing an old system, the two systems should be run in parallel for a few months to compare their performance. Output from the two systems should not be expected to match, since their analytics are different. Even wide discrepancies in output may not be cause for alarm, but they should be investigated. Users need to understand in what ways the new system performs differently from the system it is replacing. If output from the new system ever doesn’t make economic sense during this period, that should prompt a more exhaustive review.
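One simple way to operationalize this comparison, sketched below with a hypothetical 25% threshold, is to flag days on which the two systems’ outputs diverge beyond a chosen bound:

```python
def flag_for_review(old_var, new_var, threshold=0.25):
    """During parallel testing, legacy and new system outputs are not
    expected to match, but relative discrepancies beyond the threshold
    (here a hypothetical 25%) are queued for investigation."""
    flagged = []
    for day, (old, new) in enumerate(zip(old_var, new_var)):
        rel = abs(new - old) / old
        if rel > threshold:
            flagged.append((day, old, new, rel))
    return flagged

old = [102.0, 98.5, 110.0]
new = [105.0, 97.0, 150.0]   # day 2 differs by roughly 36%
assert [day for day, *_ in flag_for_review(old, new)] == [2]
```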

Parallel testing can work well with agile software development. Business units can continue to rely on the legacy system while testing and familiarizing themselves with components of the new system as they are brought on-line.

13.3.6 Backtesting

Backtesting is performed on an ongoing basis once a value-at-risk measure is in production for a given portfolio. Portfolio value-at-risk measurements and corresponding P&Ls are recorded over time and statistical tests are applied to the data to assess how well the value-at-risk measure reflects the portfolio’s market risk. This is an important topic, which we cover fully in the next chapter.
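As a taste of what backtesting entails, a minimal first step is to count exceedances, that is, days on which the realized loss exceeded the reported one-day value-at-risk, and compare the count with what the quantile implies. For a 95% one-day measure over 250 days, roughly 12 to 13 exceedances would be expected; markedly more or fewer suggests the measure mis-states the portfolio’s risk. The sketch below uses illustrative data:

```python
def exceedance_count(var_series, pnl_series):
    """Count days on which the realized loss exceeded the reported
    one-day value-at-risk. Statistical tests (covered in the next
    chapter) then assess whether the count is plausible."""
    return sum(1 for var, pnl in zip(var_series, pnl_series) if -pnl > var)

var = [100.0, 100.0, 100.0, 100.0]
pnl = [-50.0, -120.0, 30.0, -100.0]   # one clear exceedance, on day 1
assert exceedance_count(var, pnl) == 1
```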

13.3.7 Ongoing Validation

Validation is the process of confirming that a model is implemented as designed and produces outputs that are meaningful or otherwise useful. For value-at-risk measures, this encompasses design review, software testing at both the system/regression and user acceptance stages, parallel testing and backtesting, all discussed above.

Validation also needs to be an ongoing process. Value-at-risk measures will be modified from time to time, perhaps to add traded instruments to the model library, improve performance, or reflect new modeling techniques. Proposed modifications need to be documented in an updated design document. That needs to be reviewed and approved. Once the modifications are coded, they need to be fully tested. For some modifications, it will make sense to parallel test the modified system against the unmodified system. The modified system then needs to be backtested on an ongoing basis.

Even if a value-at-risk system isn’t modified, it needs to be periodically reviewed to check if developments in the environment in which it is used have rendered it obsolete or less useful than it once was. These scheduled reviews should be based on the design document, read in light of what may have changed (within the firm, the markets, data sources, etc.) since the design document was first written. The review should also include interviews with users to determine how they are currently using the system, whether those uses are consistent with its design, and whether modifications to the value-at-risk measure might be called for.

13.3.8 Model Inventory

Internal auditors should maintain an inventory of all models used within a trading environment, and the value-at-risk measure should be included in that list. A model inventory facilitates periodic validation of all models.
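A model inventory need not be elaborate. A minimal sketch follows, with illustrative fields and entries; the essential idea is that each model carries an owner, a pointer to its design document and a last-validation date, so periodic validation can be scheduled directly off the inventory:

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """One entry in a model inventory; the fields are illustrative."""
    name: str
    owner: str
    last_validated: str   # ISO date of most recent validation
    design_doc: str       # location of the design document
    status: str = "approved"

inventory = [
    ModelRecord("value-at-risk", "market risk", "2024-03-31", "docs/var-design.md"),
    ModelRecord("credit exposure", "credit risk", "2023-11-15", "docs/ce-design.md"),
]

# Periodic validation can be driven off the inventory, e.g. by listing
# models not validated since a cutoff date (ISO dates compare as strings).
overdue = [m.name for m in inventory if m.last_validated < "2024-01-01"]
assert overdue == ["credit exposure"]
```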

13.3.9 Vendor Software

Software vendors will generally test their own code to ensure it is bug free but otherwise rely on clients to report problems with the software. Also, due to each user’s choice of settings and interfaces, each value-at-risk implementation tends to be unique. For these reasons, vendor software needs to be validated on an ongoing basis, much like internally developed software.

13.3.10 Communication and Training

A critical step for addressing Type C, model application risk, is employee training. This is especially true for value-at-risk measures, which often relate only tangentially to end-users’ primary job functions. Training should cover more than basic functionality. It should communicate the purpose of the value-at-risk measure and help end-users understand how the value-at-risk measure can help them in their work. As mentioned earlier, it may be advantageous to integrate training with user acceptance testing.