
Abouarghoub, Wessam (2013). Implementing the new science of risk management to tanker freight markets, doctoral thesis, University of the West of England.
Alexander, Carol O. (2001). Market Models, Chichester: John Wiley & Sons.
Alexander, Carol O. and A. M. Chibumba (1997). Multivariate orthogonal factor GARCH, working paper.
Allen, M. (1994). Building a role model, Risk, 7 (9), 73-80.
Bai, Jushan, and Shuzhong Shi (2011). Estimating high dimensional covariance matrices and its applications, Annals of Economics & Finance, 12 (2), 199-215.
Basel Committee on Banking Supervision (1995). An Internal Model-Based Approach to Market Risk Capital Requirements.
Basel Committee on Banking Supervision (1996a). Amendment to the Capital Accord to Incorporate Market Risks.
Basel Committee on Banking Supervision (1996b). Supervisory Framework for the Use of “Backtesting” in Conjunction With the Internal Models Approach to Market Risk Capital Requirements.
Baxter, Martin and Andrew Rennie (1996). Financial Calculus: An Introduction to Derivative Pricing, Cambridge: Cambridge University Press.
Beck, Kent (2002). Test Driven Development: By Example, Reading: Addison-Wesley.
Beck, Kent and Cynthia Andres (2004). Extreme Programming Explained: Embrace Change, 2nd Edition, Reading: Addison-Wesley.
Berkowitz, Jeremy (2001). Testing density forecasts, with applications to risk management, Journal of Business & Economic Statistics, 19 (4), 465-474.
Berkowitz, Jeremy, Peter Christoffersen and Denis Pelletier (2011). Evaluating value-at-risk models with desk- level data, Management Science 57 (12), 2213–2227.
Berkowitz, Jeremy and James O’Brien (2002). How accurate are value‐at‐risk models at commercial banks? Journal of Finance, 57 (3), 1093-1111.
Bernstein, Peter L. (1992). Capital Ideas: The Improbable Origins of Modern Wall Street, New York: Free Press.
Black, Fischer and Myron S. Scholes (1973). The pricing of options and corporate liabilities, Journal of Political Economy, 81 (3), 637-654.
Black, Fischer (1976). The Pricing of Commodity Contracts, Journal of Financial Economics, 3, 167-179.
Bollerslev, Tim (1986). Generalized autoregressive conditional heteroskedasticity, Journal of Econometrics, 31, 307-328.
Bollerslev, Tim (1990). Modelling the coherence in short-run nominal exchange rates: A multivariate generalized ARCH model, Review of Economics and Statistics, 72, 498-505.
Box, George E. P. and Norman R. Draper (1987). Empirical Model-Building and Response Surfaces, New York: Wiley.
Britten-Jones, Mark and Stephen M. Schaefer (1999). Nonlinear value-at-risk, European Finance Review, 2 (2), 161-187.
Brockwell, Peter J. and Richard A. Davis (2010). Introduction to Time Series and Forecasting, 2nd ed., New York: Springer.
Burden, Richard L. and J. Douglas Faires (2010). Numerical Analysis, 9th ed., Boston: PWS Publishing Company.
Burghardt, Galen and Bill Hoskins (1995). A question of bias, Risk, 8 (3), 63-70.
Campbell, Rachel, Kees Koedijk and Paul Kofman (2002). Increased correlation in bear markets, Financial Analysts Journal, 58 (1), 87-94.
Campbell, Sean D. (2005). A review of backtesting and backtesting procedures, Finance and Economics Discussion Series, Washington: Federal Reserve Board.
Cárdenas, Juan, Emmanuel Fruchard, Etienne Koehler, Christophe Michel, and Isabelle Thomazeau (1997). Value-at-risk: one step beyond, Risk, 10 (10), 72-75.
Cárdenas, Juan, Emmanuel Fruchard, Jean-François Picron, Cecilia Reyes, Kristen Walters, and Weiming Yang (1999). Monte Carlo within a day, Risk, 12 (2), 55-59.
Chew, Lillian (1993). Made to measure, Risk, 6 (9), 78-79.
Christoffersen, Peter (1998). Evaluating interval forecasts. International Economic Review, 39 (4), 841-862.
Christoffersen, Peter and Denis Pelletier (2004). Backtesting value-at-risk: a duration-based approach, Journal of Financial Econometrics, 2 (1), 84-108.
Cockburn, Alistair (2000). Writing Effective Use Cases, Reading: Addison-Wesley.
Cornell, Bradford (1981). A note on taxes and the pricing of Treasury bill futures contracts, Journal of Finance, 36 (12), 1169-1176.
Cornell, Bradford and Marc R. Reinganum (1981). Forward and futures prices: Evidence from the foreign exchange markets, Journal of Finance, 36 (12), 1035-1045.
Cornish, E. A. and Ronald A. Fisher (1937). Moments and cumulants in the specification of distributions, Review of the International Statistical Institute, 5, 307-320.
Corrigan, Gerald (1992). Remarks before the 64th annual mid-Winter meeting of the New York State Bankers Association, January 30, Waldorf-Astoria, New York City: Federal Reserve Bank of New York.
Cotter, John and François Longin (2007). Implied correlations from value-at-risk, working paper, University College Dublin.
Coveyou, R. R. and R. D. MacPherson (1967). Fourier analysis of uniform random number generators, Journal of the Association for Computing Machinery, 14, 100-119.
Cox, John C., Jonathan E. Ingersoll, Jr., and Stephen A. Ross (1981). The relation between forward prices and futures prices, Journal of Financial Economics, 9, 321-346.
Crnkovic, Cedomir and Jordan Drachman (1996). Quality control, Risk, 9 (9), 138-143.
Culp, Christopher (2001). The Risk Management Process: Business Strategy and Tactics, New York: John Wiley & Sons.
da Silva, Alan Cosme Rodrigues, Claudio Henrique da Silveira Barbedo, Gustavo Silva Araújo and Myrian Beatriz Eiras das Neves (2006). Internal model validation in Brazil: analysis of value-at-risk backtesting methodologies, Revista Brasileira de Finanças, 4 (1), 363-384.
Dahlgren, Robert, Chen-Ching Liu and Jacques Lawarrée (2003). Risk assessment in energy trading. IEEE Transactions on Power Systems, 18 (2), 503-511.
Dale, Richard (1996). Risk and Regulation in Global Securities Markets, Chichester: John Wiley & Sons.
Davidson, Clive (1996). The data game, Firmwide Risk Management, special supplement to Risk, 9 (7), 39-44.
Davies, Robert B. (1973). Numerical inversion of a characteristic function, Biometrika, 60, 415-417.
Dennis, J. E. and Robert B. Schnabel (1983). Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Englewood Cliffs: Prentice-Hall.
Ding, Z. (1994). Time series analysis of speculative returns, PhD thesis, San Diego: University of California.
Doherty, Neil A., (2000). Integrated Risk Management: Techniques and Strategies for Managing Corporate Risk, New York: McGraw-Hill.
Dowd, Kevin (2005). Measuring Market Risk, 2nd ed., Chichester: John Wiley & Sons.
Dusak, Katherine (1973). Futures trading and investor returns: an investigation of commodity market risk premiums, Journal of Political Economy, 81, 1387-1406.
Eckhardt, Roger (1987). Stan Ulam, John von Neumann, and the Monte Carlo method, Los Alamos Science, Special Issue (15), 131-137.
Eichenauer, J. and J. Lehn (1986). A non-linear congruential pseudo random number generator, Statistical Papers, 27, 315-326.
Eichenauer-Herrmann, J. (1993). Statistical independence of a new class of inversive congruential pseudorandom numbers, Mathematics of Computation, 60, 375-384.
Engle, Robert F. (1982). Autoregressive conditional heteroskedasticity with estimates of the variance of UK inflation, Econometrica, 50, 987-1008.
Engle, Robert F. (2000). Dynamic conditional correlation—A simple class of multivariate GARCH models, working paper.
Engle, Robert F. and K. F. Kroner (1995). Multivariate simultaneous generalized ARCH, Econometric Theory, 11, 122-150.
Engle, Robert F. and Simone Manganelli (2004). CAViaR: conditional autoregressive value-at-risk by regression quantiles, Journal of Business & Economic Statistics, 22 (4), 367-381.
Engle, Robert F. and Kevin Sheppard (2001). Theoretical and empirical properties of dynamic conditional correlation multivariate GARCH, working paper.
Evans, Michael and Tim Swartz (2000). Approximating Integrals Via Monte Carlo and Deterministic Methods, Oxford: Oxford University Press.
Fallon, William (1996). Calculating value-at-risk, working paper.
Filliben, James J. (1975). Probability plot correlation coefficient test for normality, Technometrics, 17 (1), 111–117.
Fincke, U. and M. Pohst (1985). Improved methods for calculating vectors of short length in a lattice, including a complexity analysis, Mathematics of Computation, 44 (170), 463-471.
Finger, Christopher (1997). A methodology for stress correlation, RiskMetrics Monitor (Fourth Quarter), 3-11.
Finger, Christopher (2006). How historical simulation made me lazy, RiskMetrics Group Research Monthly.
Fishman, George S. (1996). Monte Carlo: Concepts, Algorithms, and Applications, New York: Springer-Verlag.
Forbes, Catherine, Merran Evans, Nicholas Hastings, and Brian Peacock (2010). Statistical Distributions, 4th ed., New York: John Wiley & Sons.
Francis, Stephen C. (1985). Correspondence appearing in: United States House of Representatives (1985). Capital Adequacy Guidelines for Government Securities Dealers Proposed by the Federal Reserve Bank of New York: Hearings Before the Subcommittee on Domestic Monetary Policy of the Committee on Banking, Finance and Urban Affairs, Washington: US Government Printing Office, 251-252.
Franses, Philip Hans (1998). Time Series Models for Business and Economic Forecasting, Cambridge: Cambridge University Press.
Franses, Philip Hans and Dick van Dijk (2000). Non-Linear Time Series Models in Empirical Finance, Cambridge: Cambridge University Press.
French, Kenneth R. (1980). Stock returns and the weekend effect, Journal of Financial Economics, 8, 55-69.
French, Kenneth R. (1983). A comparison of futures and forward prices, Journal of Financial Economics, 12, 311-342.
French, Kenneth R. and Richard Roll (1986). Stock return variance: the arrival of information and the reaction of traders, Journal of Financial Economics, 17, 5-26.
Fuglsbjerg, Brian (2000). Variance reduction techniques for Monte Carlo estimates of value-at-risk, working paper.
Garbade, Kenneth D. (1986). Assessing risk and capital adequacy for Treasury securities, Topics in Money and Securities Markets, 22, New York: Bankers Trust.
Garman, Mark B. and Steven W. Kohlhagen (1983). Foreign currency option values, Journal of International Money and Finance, 2, 231-237.
Gärtner, Bernd (1999). Ein Reinfall mit Computer-Zufallszahlen [A flop with computer random numbers], Mitteilungen der Deutschen Mathematiker-Vereinigung, 2, 55-60.
Geiss, Charles G. (1995). Distortion-free futures price series, Journal of Futures Markets, 15 (7), 805-831.
Gentle, James E. (1998). Numerical Linear Algebra for Applications in Statistics, New York: Springer-Verlag.
Gibbons, Michael R. and Patrick Hess (1981). Day of the week effect and asset returns, Journal of Business, 54, 579 – 596.
Glasserman, Paul (2003). Monte Carlo Methods in Financial Engineering, Springer: New York.
Glasserman, Paul, Philip Heidelberger, and Perwez Shahabuddin (2000). Variance reduction techniques for estimating value-at-risk, Management Science, 46 (10), 1349-1364.
Goldfeld, Stephen M. and Richard E. Quandt (1973). A Markov model for switching regressions, Journal of Econometrics, 1, 3-16.
Goldman Sachs and SBC Warburg Dillon Read (1998). The Practice of Risk Management, London: Euromoney Books.
Golub, Gene H. and Charles F. Van Loan (1996). Matrix Computations, 3rd ed., Baltimore: Johns Hopkins University Press.
Gridgeman, N. T. (1960). Geometric probability and the number π, Scripta Mathematica, 25, 183-195.
Group of 30 (1993). Derivatives: Practices and Principles, Washington: Group of 30.
Group of 30 (1994). Derivatives: Practices and Principles, Appendix III: Survey of Industry Practice, Washington: Group of 30.
Guldimann, Till M. (1995). Risk measurement framework, RiskMetrics—Technical Document, 3rd ed., New York: Morgan Guaranty, 6-45.
Guldimann, Till M. (2000). The story of RiskMetrics, Risk, 13 (1), 56-58.
Gupta, Anurag and Marti G. Subrahmanyam (2000). An empirical examination of the convexity bias in the pricing of interest rate swaps, Journal of Financial Economics, 55 (2), 239-279.
Haas, Marcus (2001). New methods in backtesting. Mimeo, Financial Engineering Research Center Caesar, Friedensplatz, Bonn.
Haas, Marcus (2005). Improved duration-based backtesting of value-at-risk, Journal of Risk, 8 (2), 17-38.
Han, Chulwoo, Frank C. Park, and Jangkoo Kang (2007). Efficient value-at-risk estimation for mortgage-backed securities, Journal of Risk 9 (3), 37-61.
Hall, Asaph (1873). On an experimental determination of π, Messenger of Mathematics, 2, 113-114.
Hamilton, James D. (1993). Estimation, inference and forecasting of time-series subject to changes in regime, in G. S. Maddala, C. R. Rao and H. D. Vinod (editors), Handbook of Statistics, vol. 11: Econometrics, New York: North-Holland.
Hamilton, James D. (1994). Time Series Analysis, Princeton: Princeton University Press.
Hammersley, J. M. and D. C. Handscomb (1964). Monte Carlo Methods, New York: John Wiley & Sons.
Hartman, Joel, and Jan Sedlak (2013). Forecasting conditional correlation for exchange rates using multivariate GARCH models with historical value-at-risk application, working paper.
Haug, Espen G. (1997). The Complete Guide to Option Pricing Formulas, 2nd ed., New York: McGraw-Hill.
Hellekalek, P. (1998). Good random number generators are (not so) easy to find, Mathematics and Computers in Simulation, 46, 485-505.
Hendricks, Darryll (1996). Evaluation of value-at-risk models using historical data, Federal Reserve Bank of New York Economic Policy Review, April.
Heron, Dan and Richard Irving (1996). Banks grasp value-at-risk nettle, A Risk Special Supplement, Risk, June, 16-21.
Higham, Nicholas J. (2002). Computing the nearest correlation matrix—a problem from finance, IMA Journal of Numerical Analysis, 22 (3), 329-343.
Holton, Glyn A. (1998). Simulating value-at-risk, Risk, 11 (5), 60-63.
Holton, Glyn A. (2004). Defining risk, Financial Analysts Journal, 60 (6), 19–25.
Hughston, Lane (1996). Vasicek And Beyond: Approaches to Building and Applying Interest Rate Models. London: Risk Publications.
Hughston, Lane (1999). Options: Classic Approaches to Pricing and Modelling. London: Risk Books.
Hull, John C. (2011). Options, Futures, and Other Derivatives, 8th ed., Englewood Cliffs: Prentice Hall.
Hull, John and Alan White (1998). Incorporating volatility updating into the historical simulation method for value-at-risk, Journal of Risk, 1 (1), 5-19.
Imhof, J. P. (1961). Computing the distribution of quadratic forms in normal variables, Biometrika, 48, 419-426.
James, Jessica and Nick Webber (2000). Interest Rate Modelling, Chichester: John Wiley & Sons.
Jamshidian, Farshid and Yu Zhu (1997). Scenario simulation: Theory and methodology, Finance and Stochastics, 1 (1), 43-67.
Jarrow, Robert A. (editor) (1998). Volatility: New Estimation Techniques for Pricing Derivatives, London: Risk Books.
Jarrow, Robert A. and George S. Oldfield (1981). Forward contracts and futures contracts, Journal of Financial Economics, 9, 373-382.
Jaschke, Stefan R. (2001). The Cornish-Fisher-expansion in the context of delta-gamma-normal approximations, Journal of Risk, 4(4), 33-52.
Jaschke, Stefan R. and Peter Mathé (2004). Stratified sampling for risk management, unpublished manuscript.
Johnson, Dallas E. (1998). Applied Multivariate Methods for Data Analysts, Pacific Grove: Duxbury Press.
Johnson, N. L. (1949). Systems of frequency curves generated by methods of translation, Biometrika, 36, 149-176.
Judge, George G., R. Carter Hill, William E. Griffiths, Helmut Lütkepohl, and Tsoung-Chao Lee (1988). The Theory and Practice of Econometrics, 2nd ed., New York: John Wiley & Sons.
Kercheval, Alec N. (2008). Optimal covariances in risk model aggregation, Proceedings of the Third IASTED International Conference on Financial Engineering and Applications, ACTA Press, Calgary, 30-35.
Klaassen, Franc (2000). Have exchange rates become more closely tied? Evidence from a new multivariate GARCH model, working paper.
Knuth, Donald E. (1997). Art of Computer Programming, Volume 2: Seminumerical Algorithms, 3rd ed., Vol. 2. Reading: Addison-Wesley.
Kolb, Robert W. (2006). Understanding Futures Markets, 6th ed., Malden: Blackwell.
Korn, Ralf and Mykhailo Pupashenko (2015). A new variance reduction technique for estimating value-at-risk, Applied Mathematical Finance, 22(1), 83-98.
Kupiec, Paul H. (1995). Techniques for verifying the accuracy of risk measurement models, Journal of Derivatives, 3 (2), 73–84.
Larman, Craig (2003). Agile and Iterative Development: A Manager’s Guide, Reading: Addison-Wesley.
Lad, Frank (1996). Operational Subjective Statistical Methods: A Mathematical, Philosophical, and Historical Introduction, New York: John Wiley & Sons.
Laplace, Pierre Simon Marquis de (1878-1912). Oeuvres Complètes de Laplace, Paris: Gauthier-Villars.
Leavens, Dickson H. (1945). Diversification of investments, Trusts and Estates, 80 (5), 469-473.
L’Ecuyer, Pierre (1998). Random number generation, in Jerry Banks (editor), Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice, New York: John Wiley & Sons.
L’Ecuyer, Pierre (1999). Good parameter sets for combined multiple recursive random number generators, Operations Research, 47 (1), 159-164.
L’Ecuyer, Pierre, F. Blouin and R. Couture (1993). A search for good multiple recursive random number generators, ACM Transactions on Modeling and Computer Simulation, 3 (2), 87-98.
Leffingwell, Dean and Don Widrig (1999). Managing Software Requirements: A Unified Approach, Reading: Addison-Wesley.
Lehmann, E. L. and Joseph P. Romano (2005). Testing Statistical Hypotheses, 3rd ed., New York: Springer.
Lehmer, D. H. (1951). Mathematical methods in large-scale computing units, Proceedings of a Second Symposium on Large-Scale Digital Calculating Machinery. Cambridge: Harvard University Press, 141-146.
Leipnik, R. B. (1991). Lognormal random variables, Journal of the Australian Mathematical Society, Series B, 32, 327-347.
Leong, Kenneth S. (1996). The right approach, Value-at-Risk, A Risk Special Supplement, Risk Magazine, June, 9–14.
Levine, Robert S. (2007). Implementing Systems Solutions for Financial Risk Management, London: Risk Books.
Lewis, P. A. W., A. S. Goodman and J. M. Miller (1969). A pseudo-random number generator for the System/360, IBM Systems Journal, 8 (2), 136-145.
Li, Qingna, Donghui Li and Houduo Qi (2010). Newton’s method for computing the nearest correlation matrix with a simple upper bound, Journal of Optimization Theory and Applications, 147 (3), 546-568.
Lietaer, Bernard A. (1971). Financial Management of Foreign Exchange: An Operational Technique to Reduce Risk, Cambridge: MIT Press.
Linsmeier, Thomas J. and Neil D. Pearson (1996). Risk Measurement: An Introduction to Value at Risk, unpublished manuscript.
Lintner, J. (1965). The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets, Review of Economics and Statistics, 47: 13-37.
Ljung, G. M. and G. E. P. Box (1978). On a measure of lack of fit in time series models, Biometrika, 65 (2), 297-303.
Longerstaey, Jacques (1995). Mapping to describe positions, RiskMetrics—Technical Document, 3rd ed., New York: Morgan Guaranty, 107-156.
Lopez, Jose A. (1999). Methods for evaluating value-at-risk models, Federal Reserve Bank of San Francisco Economic Review, 2, 3-17.
Lyons, Richard K. (1995). Tests of microstructure hypotheses in the foreign exchange market, Journal of Financial Economics, 39, 321-351.
Ma, Christopher K., Jeffrey M. Mercer and Matthew A. Walker (1992). Rolling over futures contracts: A note, Journal of Futures Markets, 12 (2), 203-217.
Macaulay, Frederick R. (1938). The Movements of Interest Rates. Bond Yields and Stock Prices in the United States since 1856, New York: National Bureau of Economic Research.
Malz, Alan M. (2011). Financial Risk Management: Models, History, and Institutions, Chichester: John Wiley & Sons.
Mark, Robert (1991). Units of management. Balance Sheet (distributed in Risk, 4 (6)), 3-7.
Markowitz, Harry M. (1952). Portfolio Selection, Journal of Finance, 7 (1), 77-91.
Markowitz, Harry M. (1959). Portfolio Selection: Efficient Diversification of Investments, New York: John Wiley & Sons.
Markowitz, Harry M. (1999). The early history of portfolio theory: 1600-1960, Financial Analysts Journal, 55 (4), 5-16.
Marsaglia, G. (1968). Random numbers fall mainly in the planes, Proceedings of the National Academy of Sciences USA, 61, 25-28.
Marshall, Chris and Michael Siegel (1997). Value at risk: implementing a risk measurement standard, Journal of Derivatives, 4 (3), 91-110.
Mathai, A. M. and Serge B. Provost (1992). Quadratic Forms in Random Variables, New York: Marcel Dekker.
McLeod, A. I. and W. K. Li (1983). Diagnostic checking ARMA time series models using squared residual autocorrelations, Journal of Time Series Analysis, 4 (4), 269–273.
Metropolis, Nicholas (1987). The beginning of the Monte Carlo method, Los Alamos Science, Special Issue (15), 125-130.
Metropolis, Nicholas and Stanislaw Ulam (1949). The Monte Carlo method, Journal of the American Statistical Association, 44 (247), 335-341.
Mills, Terence C. (1999). The Econometric Modelling of Financial Time Series, 2nd ed., Cambridge: Cambridge University Press.
Mina, Jorge and Andrew Ulmer (1999). Delta-Gamma four ways. Technical report, RiskMetrics Group.
Mittnik, Stefan (2014). Value-at-risk-implied tail-correlation matrices, Economics Letters, 122 (1), 69-73.
Molinari, Steven L. and Nelson S. Kibler (1983). Broker-dealers’ financial responsibility under the Uniform Net Capital Rule—a case for liquidity, Georgetown Law Journal, 72 (1), 1-37.
Morgan, Byron J. T. (1984). Elements of Simulation. London: Chapman & Hall.
Morgan Guaranty (1996). RiskMetrics—Technical Document, 4th ed., New York: Morgan Guaranty.
Mossin, Jan (1966). Equilibrium in a capital asset market, Econometrica, 34, 768-783.
Niederreiter, Harald (1992). Random Number Generation and Quasi-Monte Carlo Methods. Philadelphia: Society for Industrial and Applied Mathematics.
Office of the Comptroller of the Currency (2000). OCC Bulletin 2000–16: Model Validation, Washington: Office of the Comptroller of the Currency.
O’Neil, Catherine (2010). Measuring CDS value-at-risk, RiskMetrics Working Papers.
Opschoor, Anne, Dick van Dijk and Michel van der Wel (2013). Predicting covariance matrices with financial conditions indexes (No. TI 13-113/III, pp. 1-43). Tinbergen Institute Discussion Paper Series.
Pan, Jun and Darrell Duffie (1997). An Overview of value-at-risk, Journal of Derivatives, 4 (3), 7-49.
Park, Hun Y. and Andrew H. Chen (1985). Differences between futures and forward prices: A further investigation of the mark-to-market effects, Journal of Futures Markets, 5 (1), 77-88.
Park, Stephen K. and Keith W. Miller (1988). Random number generators: good ones are hard to find, Communications of the ACM, 31 (10), 1192-1201.
Patel, Jagdish K. (1996). Handbook of the Normal Distribution, 2nd ed., New York: Marcel Dekker.
Pérignon, Christophe, Zi Yin Deng and Zhi Jun Wang (2008). Do banks overstate their value-at-risk? Journal of Banking & Finance, 32 (5), 783-794.
Pérignon, Christophe and Daniel R. Smith (2010). The level and quality of Value-at-Risk disclosure by commercial banks, Journal of Banking & Finance, 34 (2), 362-377.
Pichler, Stefan and Karl Selitsch (2000). A comparison of analytic value-at-risk methodologies for portfolios that include options, in Rajna Gibson (editor), Model Risk: Concepts, Calibration and Pricing, London: Risk Books.
Press, W., S. Teukolsky, W. Vetterling and B. Flannery (1995). Numerical Recipes in C: The Art of Scientific Computing, 2nd ed., Cambridge: Cambridge University Press.
Pritsker, Matthew (2006). The hidden dangers of historical simulation, Journal of banking & finance, 30 (2), 561-582.
Pupashenko, Mykhailo (2014). Variance reduction technique for estimating value-at-risk based on cross-entropy, Journal of Mathematics and System Science, 4(1), 37-48.
Qi, Houduo and Defeng Sun (2010). Correlation stress testing for value-at-risk: an unconstrained convex optimization approach, Computational Optimization and Applications, 45 (2), 427-462.
Questa, Giorgio S. (1999). Fixed Income Analysis for the Global Financial Market: Money Market, Foreign Exchange, Securities, and Derivatives, New York: John Wiley & Sons.
Rebonato, Riccardo and Peter Jäckel (1999). The most general methodology to create a valid correlation matrix for risk management and option pricing purposes, Journal of Risk, 2(2), 17-27.
Reuters (2000). An Introduction to The Commodities, Energy & Transport Markets, Singapore: John Wiley & Sons.
Rota, Gian-Carlo (1987). The lost cafe, Los Alamos Science, Special Issue (15), 23-32.
Rouvinez, Christophe (1997). Going Greek with value-at-risk, Risk, 10 (2), 57-65.
Roy, Arthur D. (1952). Safety first and the holding of assets, Econometrica, 20 (3), 431-449.
Røynstrand, Torgeir, Nils Petter Nordbø and Vidar Kristoffer Strat (2012). Evaluating power of value-at-risk backtests, master’s thesis, Norwegian University of Science and Technology.
Rubinstein, Reuven Y. (2007). Simulation and the Monte Carlo Method, 2nd ed. New York: John Wiley & Sons.
Saff, E. B. and A. B. J. Kuijlaars (1997). Distributing many points on a sphere, Mathematical Intelligencer, 19 (1), 5-11.
Schneider Geri and Jason P. Winters (2001). Applying Use Cases: A Practical Guide, 2nd ed. Reading: Addison-Wesley.
Schrock, Nicholas W. (1971). The theory of asset choice: simultaneous holding of short and long positions in the futures market, Journal of Political Economy, 79, 270-293.
Schwaber, Ken and Mike Beedle (2001). Agile Software Development with Scrum, Upper Saddle River: Prentice Hall.
Sharpe, William F. (1963). A simplified model for portfolio analysis, Management Science, 9, 277-293.
Sharpe, William F. (1964). Capital asset prices: A theory of market equilibrium under conditions of risk, Journal of Finance, 19 (3), 425-442.
Shirreff, David (1992). Swap and think, Risk, 5 (3), 29 – 35.
Singh, Manoj K. (1997). Value-at-risk using principal components analysis, Journal of Portfolio Management, 24 (1), 101-112.
Sironi, Andrea and Andrea Resti (2007). Risk Management and Shareholders’ Value in Banking, Chichester: John Wiley & Sons.
Solomon, H. and M.A. Stephens (1977). Distribution of a sum of weighted chi-square variables, Journal of the American Statistical Association, 72, 881-885.
Spanos, Aris (1999). Probability Theory and Statistical Inference: Econometric Modeling with Observational Data, Cambridge: Cambridge University Press.
Steinberg, Richard M. (2011). Governance, Risk Management, and Compliance, Chichester: John Wiley & Sons.
Stoyanov, Jordan (1997). Counterexamples in Probability, 2nd ed., Chichester: John Wiley & Sons.
Strang, Gilbert (2005). Linear Algebra and Its Applications, 4th ed., Brooks Cole.
Stuart, Alan and J. Keith Ord (1994). Kendall’s Advanced Theory of Statistics, Volume 1: Distribution Theory, London: Arnold.
Student (1908a). The probable error of a mean. Biometrika, 6, 1-25.
Student (1908b). Probable error of a correlation coefficient. Biometrika, 6, 302-310.
Tobin, James (1958). Liquidity preference as behavior towards risk, The Review of Economic Studies, 25, 65-86.
Todhunter, Isaac. (1865). History of the mathematical theory of probability from the time of Pascal to that of Laplace. Cambridge: Cambridge University Press. Reprinted (1949), New York: Chelsea.
Treynor, Jack (1961). Towards a theory of market value of risky assets, unpublished manuscript.
Tsay, Ruey S. (2013). Multivariate Time Series Analysis: With R and Financial Applications, Hoboken: John Wiley & Sons.
Vasey, G. M. and Andrew Bruce (2010). Trends in Energy Trading, Transaction and Risk Management Software 2009 – 2010, CreateSpace Independent Publishing Platform.
Viswanath, P. V. (1989). Taxes and the futures-forward price difference in the 91-day T-bill market, Journal of Money, Credit and Banking, 21 (2), 190-205.
Walmsley, Julian (2000). The Foreign Exchange and Money Markets Guide, 2nd ed., New York: John Wiley & Sons.
Wilmott, Paul, Jeff Dewynne, and Sam Howison (1993). Option Pricing: Mathematical Models and Computation, Oxford: Oxford Financial Press.
Wilson, Thomas (1992). Raroc remodeled, Risk, 5 (8), 112-119.
Wilson, Thomas (1993). Infinite wisdom, Risk, 6 (6), 37-45.
Wilson, Thomas (1994a). Debunking the myths, Risk, 7 (4), 67-73.
Wilson, Thomas (1994b). Common methods of calculating capital at risk, Risk, 7 (10), 78-80.
Wilson, William W., William E. Nganje and Cullen R. Hawes (2007). Value-at-risk in bakery procurement, Review of Agricultural Economics, 29 (3), 581-595.
Zangari, Peter (1994). Estimating volatilities and correlations, RiskMetrics—Technical Document, 2nd ed., New York: Morgan Guaranty, 43-66.
Zangari, Peter (1996a). Data and Related Statistical Issues, RiskMetrics—Technical Document, 4th ed., New York: Morgan Guaranty, 163-196.
Zangari, Peter (1996c). Market risk methodology, RiskMetrics—Technical Document, 4th ed., New York: Morgan Guaranty, 105-148.
Ziggel, Daniel, Tobias Berens, Gregor Weiss and Dominik Wied (2014). A new set of improved value-at-risk backtests, Journal of Banking & Finance, 48, 29-41.
For an alternative discussion of backtesting, see Campbell (2005).
For some notable backtesting methodologies not discussed in this chapter, see Haas (2001), Engle and Manganelli (2004), and Ziggel et al. (2014). See also Christoffersen and Pelletier (2004), Haas (2005), and Berkowitz et al. (2011), who discuss duration-based backtesting methodologies. These are a form of exceedance independence test that assesses whether intervals between exceedances appear random.
Da Silva et al. (2006), Berkowitz et al. (2011) and Røynstrand et al. (2012) assess the performance of various backtesting methodologies using actual and/or simulated P&L data.
Specifying a backtesting program for a trading organization can be an unsettling experience, plagued by data limitations and philosophical quandaries. Here we shall address issues and present practical advice on how to proceed.
Backtesting, as it is commonly practiced, is hypothesis testing. It poses all the familiar challenges of hypothesis testing. Let’s focus on two:

- the null hypothesis, that a value-at-risk measure is “valid,” is a contrived issue; and
- the significance level at which we test that hypothesis is set by convention.
The problems are related. Value-at-risk measures aren’t “valid” or “invalid,” just as the approximation 3.142 for π is not “valid” or “invalid.” Value-at-risk measures and approximations are either “useful” or “not useful,” and usefulness depends on context. For a carpenter, 3.142 may be a useful approximation of π, but it might not be for an astronomer. A particular value-at-risk measure may be useful for assessing the market risk of futures portfolios but not of portfolios containing options on those futures. While we generally speak of “backtesting a value-at-risk measure,” in fact we backtest a value-at-risk measure as applied to a particular portfolio.
With backtesting, we distinguish between those value-at-risk measures we will reject and those we will continue to use for a particular trading portfolio. Where we draw the line is a compromise to balance the risk of rejecting a “valid” value-at-risk measure against that of failing to reject an “invalid” value-at-risk measure. Never mind that this is a compromise over a contrived issue. It really isn’t a compromise at all. Researchers in the social sciences long ago adopted the convention of testing at the .05 or .01 significance level. Use of the .05 significance level predominates, but a researcher whose data is particularly strong may report results at the .01 significance level to emphasize the fact. Accordingly, there is no real debate about what significance level to use. In backtesting, we use the .05 significance level based solely on that established convention and the fact that backtest data is rarely good enough to warrant the .01 significance level.
Bluntly stated, we accept or reject value-at-risk measures based on a convention for how to compromise over a contrived issue. The convention is use of the .05 significance level. The compromise is about balancing the risks of Type I vs. Type II errors. The contrived issue is that of a particular value-at-risk measure being somehow “valid” or “invalid”.
These problems exist for hypothesis testing in fields other than finance. Social scientists embrace the hypothesis testing approach because there aren’t really good alternatives. In backtesting, we are fortunate to have two or three years of data on the performance of a value-at-risk measure. If historical data weren’t so limited, we could go beyond the contrived issue of value-at-risk measures being “valid” or “invalid” and truly assess the usefulness of individual value-at-risk measures. It is limited data, more than anything else, that drives us to accept the hypothesis testing approach to backtesting. Formal hypothesis testing largely substitutes convention for meaningful test design. This may be a weakness, but it is also a strength. Without extensive data, careful test design is impossible. Convention-driven hypothesis testing allows us to make decisions with limited data in a manner that, despite only loosely conforming to our needs, is consistent. Arguably, it represents the best option available to us for interpreting limited data.
The Basel Committee’s traffic light backtest doesn’t employ hypothesis testing. It is just a rule specified by regulators based on their intuitive sense of what seemed reasonable. Its graded response of increasing capital charges within the yellow zone avoids the stark “valid” or “invalid” distinction of hypothesis testing at the expense of creating an illusion of precision. With just α + 1 = 250 data points, it is difficult to draw any conclusion whatsoever about a value-at-risk measure, especially a value-at-risk measure that is supposed to experience just one exceedance every 100 days.
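The published zone boundaries can nonetheless be recovered from cumulative binomial probabilities. Below is a minimal sketch (assuming scipy is available) that tabulates the probability of at most k exceedances for a 99% value-at-risk measure over 250 days, using the Basel Committee's cumulative-probability cut-offs of 95% (start of the yellow zone) and 99.99% (start of the red zone):

```python
# Sketch: recover the Basel traffic light zones from binomial probabilities.
# The 0.95 and 0.9999 cumulative cut-offs are the Basel Committee's choices.
from scipy.stats import binom

days, p_exceed = 250, 0.01          # one year of daily data, 99% value-at-risk
for k in range(12):
    cum = binom.cdf(k, days, p_exceed)        # P(at most k exceedances)
    zone = "green" if cum < 0.95 else ("yellow" if cum < 0.9999 else "red")
    print(f"{k:2d} exceedances: cumulative probability {cum:.4f} -> {zone}")
# Reproduces the published zones: green 0-4, yellow 5-9, red 10 or more.
```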
For banks, having their value-at-risk measure perform poorly on the traffic light test would cost more than elevated capital charges. Regulators might force them to go through an expensive and time consuming process of implementing a new value-at-risk measure. At a minimum, poor performance on the traffic light test would attract scrutiny, which banks generally want to avoid.
Rather than entrust such matters to luck, banks have tended to implement conservative value-at-risk measures whose actual coverage q* well exceeds the 0.99 coverage they purport to provide. Some such measures are so conservative they practically never experience an exceedance. This all but guarantees the value-at-risk measures perform well on the traffic light test.
Lopez (1999) builds on the traffic light approach’s idea of more finely grading backtest results. Drawing on decision theory, he suggests that the accuracy of value-at-risk measures be gauged by how well they minimize a “loss function” reflective of the evaluator’s priorities, which might include avoiding extraordinary one-day losses or avoiding increased regulatory capital charges. While this is consistent with the goal of accepting or rejecting value-at-risk measures based on an assessment of their usefulness, it poses a risk of drawing conclusions not warranted by the limited data available for backtesting.
Lopez’s approach compares the value a loss function actually takes over the backtest period against a benchmark: the value the loss function would be expected to take if the value-at-risk measure were accurate. Depending on how the loss function is defined, this comparison can be straightforward, or it can entail assumptions. For example, if the loss function is set equal to the number of exceedances experienced over the α + 1 observations, Lopez’s methodology reduces to a simple coverage test. For a more interesting—and problematic—loss function, define the magnitude of an exceedance as the maximum of 1) a portfolio’s actual loss minus the value-at-risk for that period, and 2) zero. A loss function based on the magnitude of exceedances addresses a concern of many managers: how bad can a loss be on days it exceeds reported value-at-risk? But evaluating a benchmark for such a loss function requires some assumptions as to how an accurate value-at-risk measure might have performed. Should a value-at-risk measure fail a backtest based on such a loss function, the question arises as to whether the problem resides with the value-at-risk measure or with the assumptions used to model the benchmark.
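As a minimal sketch, the magnitude-based loss function just described might be computed as follows; the function name and the aggregation by summation are illustrative assumptions, since Lopez (1999) considers several variants:

```python
# Sketch of a magnitude-of-exceedance loss function: each period contributes
# the amount by which the realized loss exceeds that period's value-at-risk,
# and zero otherwise. Aggregating by summation is an illustrative choice.
import numpy as np

def magnitude_loss(losses, var):
    """losses: realized losses by period; var: value-at-risk for each period."""
    excess = np.asarray(losses, dtype=float) - np.asarray(var, dtype=float)
    return np.maximum(excess, 0.0).sum()
```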
Joint tests are backtests that simultaneously assess two or more criteria for a value-at-risk measure—say coverage and exceedance independence. Such tests have been proposed by Christoffersen (1998) and Christoffersen and Pelletier (2004). Campbell (2005) recommends against their use:
While joint tests have the property that they will eventually detect a value-at-risk measure which violates either of these properties, this comes at the expense of a decreased ability to detect a value-at-risk measure which only violates one of the two properties. If, for example, a value-at-risk measure exhibits appropriate unconditional coverage but violates the independence property, then an independence test has a greater likelihood of detecting this inaccurate value-at-risk measure than a joint test.
When a value-at-risk measure is first implemented its performance will be closely monitored. Data will be insufficient for meaningful statistical analyses, but a graph such as Exhibit 14.1 can be updated monthly and monitored for signs of irregular performance. Parallel testing against a legacy value-at-risk measure is also appropriate. At this stage, the goal is primarily to address Type B model implementation risk. Coding or implementation errors can produce noticeable distortions in a value-at-risk measure’s performance, even over short periods of time.
At six months, coding or other implementation issues should have been identified and resolved. If any of these motivated substantive changes in the value-at-risk measure or its output, you will want to wait until six months after the last substantive change before performing any statistical backtests. Results from our recommended standard distribution test are likely to be the most meaningful at this point, as six months of data really isn’t enough for coverage or independence tests.
Perform another backtest at one year. Now include our recommended standard independence test. If you calculate value-at-risk at the 90% or 95% level, also include our recommended standard coverage test. Otherwise, wait two years before performing all three of our recommended standard tests. Continue to backtest annually using those three tests. Use all available data generated since the last substantive change to the value-at-risk system, up to a maximum of five years.
I recommend institutions use the three recommended standard tests described in this chapter. They are as good as any you will find in the literature, and better than most. Some widely cited backtests are flawed or ineffective. Banks will also need to perform the traffic light backtest, as required by their regulators. Backtests should be performed with both clean and dirty data.
Because they are performed at the .05 significance level, failure of any one of our recommended standard backtests is a strong indication of a material shortcoming in a value-at-risk measure’s performance. Your response will depend on the particular test failed, whether it was failed with clean or dirty data, and your assessment of the circumstances that caused the failure. A graph similar to Exhibit 14.10 is useful for diagnosing problems identified by coverage or distribution tests.
Failure of a clean test—or both a clean test and the corresponding dirty test—is indicative of a Type A (model design) or Type B (implementation) problem with the value-at-risk measure. Focus your analysis first on eliminating the possibility of an implementation or coding error. Only then address the possibility of Type A design shortcomings.
A design shortcoming does not necessarily dictate a fundamental change in the design of your value-at-risk measure. If your value-at-risk measure already incorporates sophisticated analytics suitable for your portfolio, modifying those analytics may not be productive. A review of your backtesting data may indicate that an ad hoc solution, such as multiplying output by a scalar, will fix the problem.
For example, suppose your value-at-risk measure failed a clean recommended standard distribution test, and you are comfortable the model design is appropriate for your portfolio. You can go back and redo the distribution test using the same past value-at-risk measurements, but multiply each by a scalar w. Through trial and error, or some search routine, you can solve for the value w that optimizes performance on the test (i.e. maximizes the sample correlation between the nj and their expected values under the null hypothesis). Going forward, scale value-at-risk measurements by that value w.
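A minimal sketch of such a search appears below. The objective function is a hypothetical stand-in for re-running the distribution test with all past value-at-risk measurements scaled by w; only the search itself is shown:

```python
# Sketch: grid search for the scalar w that maximizes the distribution test's
# sample correlation. `correlation_for_scale` is a hypothetical stand-in for
# re-running the distribution test with value-at-risk output scaled by w.
import numpy as np

def best_scale(correlation_for_scale, grid=np.linspace(0.5, 2.0, 151)):
    correlations = [correlation_for_scale(w) for w in grid]
    return grid[int(np.argmax(correlations))]
```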
Some may feel uncomfortable with an ad hoc solution like this. Keep in mind that a value-at-risk measure is a practical tool. Our goal is not to develop some theoretically beautiful model for the complex dynamics of markets. All we require is a reasonable indication of market risk. The philosophy of science tells us to judge a model based on the usefulness of its predictions and not on the nature of its assumptions. If we can fix a value-at-risk measure by simply scaling its output, then there is every reason to do so.
Of course, this solution only applies if a value-at-risk measure is already sophisticated enough to capture relevant market dynamics. If a portfolio is exposed to vega risk or basis risk, and the value-at-risk measure isn’t designed to capture these, no amount of scaling of that value-at-risk measure’s output is going to solve the problem. If a Monte Carlo value-at-risk measure is so computationally intensive that there is only time for a sample of size 250 for each overnight value-at-risk analysis, the standard error will be enormous. Scaling the output will not solve this problem. The computations need to be streamlined—perhaps with a holdings remapping and/or variance reduction—and the sample size increased.
Tweaking a poorly designed value-at-risk measure is only going to produce another poorly designed value-at-risk measure. If a value-at-risk measure is fundamentally unsuited for the portfolio it is applied to, it needs to be fundamentally redesigned.
Some shortcomings of value-at-risk measures must be lived with. The standard UWMA and EWMA techniques for modeling covariance matrices do not address market heteroskedasticity well. As we indicated in Chapter 7, there are currently no good solutions to this problem. Today’s value-at-risk measures are slow in responding to rising market volatility. During such periods, they tend to experience clustered exceedances. Similarly, when volatilities decline, they again lag, and may experience few or no exceedances. These phenomena may cause a value-at-risk measure to fail an independence test. There is little that can be done about the problem.
Failure of a dirty test and not the corresponding clean test is an indication of a Type C model application problem.
This chapter, like the literature, has focused on backtesting of value-at-risk measures. If you employ some other PMMR, coverage and exceedance independence tests will not apply, but it may be possible to develop tests analogous to those tests for your particular PMMR. Our recommended standard distribution and independence tests are not limited to value-at-risk. They can be applied with most PMMRs.
Assume a one-day 95% EUR value-at-risk measure was used for a period of 125 trading days. Data gathered for backtesting is presented in Exhibit 14.8. We have already used the data from the second and third columns to construct Exhibit 14.1. We will now use the data to apply coverage, distribution and independence backtests.
To apply a coverage test, we need the value-at-risk measure’s coverage q = 0.95, the number of observations α + 1 = 125, and the observed number of exceedances. The last value is obtained by summing the 0’s and 1’s in the fourth column of Exhibit 14.8. It can also be obtained by visual inspection of Exhibit 14.1.
In Exhibit 14.3, we find that our recommended standard coverage test’s non-rejection interval for q = 0.95 and α + 1 = 125 is [2, 11]. Since our number of exceedances falls in this interval, we do not reject the value-at-risk measure.
In Exhibit 14.4, we find that the PF test’s non-rejection interval for q = 0.95 and α + 1 = 125 is [2, 12]. Since our number of exceedances falls in this interval, we do not reject the value-at-risk measure.
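The [2, 11] interval quoted above is consistent with a simple two-tailed binomial construction, sketched below under the assumption that at most .025 probability is placed in each tail; Exhibit 14.3 may be constructed somewhat differently:

```python
# Sketch: a two-tailed binomial non-rejection interval for the number of
# exceedances of a q-quantile value-at-risk measure observed for n periods.
from scipy.stats import binom

def nonrejection_interval(q, n, eps=0.05):
    dist = binom(n, 1 - q)                     # exceedance count ~ binomial
    a = 0
    while dist.cdf(a) <= eps / 2:              # too few exceedances: reject
        a += 1
    b = n
    while 1 - dist.cdf(b - 1) <= eps / 2:      # too many exceedances: reject
        b -= 1
    return a, b

print(nonrejection_interval(0.95, 125))        # (2, 11)
```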
We cannot use the Basel Committee’s traffic light coverage test because it applies only to 99% value-at-risk measures.
For distribution testing, we apply [14.10] to the loss quantiles tu and arrange the results in ascending order to obtain the nj. Their expected values under the null hypothesis are obtained from [14.11], with α + 1 = 125. The nj and their expected values are presented in Exhibit 14.9 and plotted in Exhibit 14.10.
The graphical results are inconclusive. The points do fall near a line of slope one passing through the origin, but the fit isn’t particularly good. Is this due to the small sample size, or does it reflect shortcomings in the value-at-risk measure? For another perspective, we calculate the sample correlation between the nj and their expected values as .997. Consulting Exhibit 14.6, we do not reject the value-at-risk measure at either the .05 or the .01 significance level.
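A minimal sketch of the correlation statistic follows. The plotting positions used for the expected values are Blom's approximation to expected standard normal order statistics; [14.11] may use a different formula, so treat them as an assumption:

```python
# Sketch of the distribution test's correlation statistic: correlate the
# ordered transformed loss quantiles with approximate expected values of
# standard normal order statistics (Blom's plotting positions, an assumption).
import numpy as np
from scipy.stats import norm

def distribution_test_correlation(u):
    """u: loss quantiles tu observed over the backtest period."""
    n = np.sort(norm.ppf(u))                       # transformation [14.10], ordered
    m = len(n)
    j = np.arange(1, m + 1)
    expected = norm.ppf((j - 0.375) / (m + 0.25))  # assumed plotting positions
    return np.corrcoef(n, expected)[0, 1]
```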
Starting with Christoffersen’s test for independence of the exceedances tI, we use the data of Exhibit 14.8 to calculate α00 = 105, α01 = α10 = 9 and α11 = 1. From these, we calculate q̂0 = 0.9211, q̂1 = 0.9000 and q̂* = 0.9194. Our likelihood ratio is

[14.23]  Λ = [0.9194^114 (1 – 0.9194)^10] / [0.9211^105 (1 – 0.9211)^9 (0.9000)^9 (1 – 0.9000)^1]

so –2log(Λ) = 0.0517. This does not exceed 3.841, so we do not reject the value-at-risk measure.
Next, applying our recommended standard independence test, we use [14.21] to calculate values tn from the loss quantiles tu. Results are indicated in Exhibit 14.11.
We calculate the sample autocorrelations of the tn for lags 1 through 5 as indicated in Exhibit 14.12.
Our test statistic—the largest absolute value of the autocorrelations—is 0.132. This is less than the non-rejection value 0.274 obtained from Exhibit 14.7, so we do not reject the value-at-risk measure at the .05 significance level.
time t | 99% VaR at t – 1 | P&L at t | time t | 99% VaR at t – 1 | P&L at t | time t | 99% VaR at t – 1 | P&L at t |
---|---|---|---|---|---|---|---|---|
-124 | 3.468 | -2.107 | -82 | 4.693 | 0.252 | -40 | 1.401 | 0.683 |
-123 | 3.095 | -0.143 | -81 | 3.789 | -0.074 | -39 | 1.282 | 0.241 |
-122 | 3.245 | 0.894 | -80 | 4.897 | -0.153 | -38 | 1.524 | 0.118 |
-121 | 2.969 | 0.990 | -79 | 4.256 | 0.267 | -37 | 1.834 | -0.810 |
-120 | 3.472 | -0.060 | -78 | 4.537 | 1.804 | -36 | 1.534 | -0.455 |
-119 | 4.513 | -1.123 | -77 | 4.508 | -0.196 | -35 | 1.839 | -0.612 |
-118 | 3.418 | 1.090 | -76 | 5.010 | 0.887 | -34 | 1.585 | -0.108 |
-117 | 3.641 | -0.948 | -75 | 4.308 | 0.385 | -33 | 1.178 | 0.197 |
-116 | 3.226 | 0.230 | -74 | 5.361 | 0.030 | -32 | 0.801 | 0.136 |
-115 | 3.282 | 0.887 | -73 | 3.940 | -0.356 | -31 | 1.021 | 0.078 |
-114 | 3.047 | 0.352 | -72 | 2.890 | -0.279 | -30 | 0.848 | -0.041 |
-113 | 2.765 | -1.060 | -71 | 3.625 | -1.376 | -29 | 0.937 | 0.517 |
-112 | 2.437 | 0.113 | -70 | 3.332 | 1.031 | -28 | 1.194 | 0.053 |
-111 | 3.093 | 0.475 | -69 | 3.655 | -0.721 | -27 | 1.283 | -0.709 |
-110 | 2.407 | -1.587 | -68 | 3.857 | -0.465 | -26 | 1.362 | 0.189 |
-109 | 2.687 | -0.537 | -67 | 3.646 | 1.189 | -25 | 1.455 | 0.681 |
-108 | 2.326 | -0.854 | -66 | 3.611 | 1.787 | -24 | 1.280 | 0.079 |
-107 | 2.722 | 0.021 | -65 | 5.304 | -1.618 | -23 | 1.619 | -0.809 |
-106 | 2.699 | -0.762 | -64 | 4.849 | -1.711 | -22 | 1.901 | -0.018 |
-105 | 2.887 | 0.619 | -63 | 5.160 | 2.407 | -21 | 1.920 | -0.041 |
-104 | 2.168 | -0.414 | -62 | 4.643 | 1.974 | -20 | 2.114 | -0.714 |
-103 | 1.989 | -1.242 | -61 | 4.784 | 2.092 | -19 | 2.042 | 0.052 |
-102 | 1.987 | -0.375 | -60 | 3.804 | -0.861 | -18 | 1.852 | -2.103 |
-101 | 1.714 | -0.198 | -59 | 4.492 | 2.870 | -17 | 1.662 | 1.062 |
-100 | 2.315 | -0.231 | -58 | 4.701 | -2.246 | -16 | 2.310 | -1.014 |
-99 | 2.788 | 0.528 | -57 | 4.721 | 1.669 | -15 | 2.078 | -0.988 |
-98 | 2.855 | -1.024 | -56 | 4.446 | 1.352 | -14 | 2.460 | 2.662 |
-97 | 3.726 | 0.796 | -55 | 3.793 | -1.976 | -13 | 2.594 | -1.405 |
-96 | 2.734 | -0.057 | -54 | 3.833 | 0.022 | -12 | 1.609 | 2.165 |
-95 | 3.482 | -3.851 | -53 | 3.707 | 0.340 | -11 | 1.970 | -0.034 |
-94 | 3.342 | 0.914 | -52 | 3.805 | -5.143 | -10 | 1.776 | 1.260 |
-93 | 2.486 | -3.966 | -51 | 3.507 | 0.202 | -9 | 2.341 | 2.799 |
-92 | 3.455 | -1.853 | -50 | 3.158 | -0.411 | -8 | 2.335 | 1.797 |
-91 | 3.602 | 3.909 | -49 | 2.688 | 0.606 | -7 | 2.868 | 2.224 |
-90 | 4.021 | -3.818 | -48 | 2.308 | 0.169 | -6 | 2.866 | 2.663 |
-89 | 3.927 | -3.043 | -47 | 2.404 | 1.254 | -5 | 2.843 | -2.600 |
-88 | 3.929 | 0.624 | -46 | 2.079 | 0.010 | -4 | 2.380 | 0.403 |
-87 | 4.805 | -2.384 | -45 | 2.000 | 0.030 | -3 | 2.195 | -1.043 |
-86 | 3.857 | -1.463 | -44 | 1.446 | 0.399 | -2 | 2.107 | -2.325 |
-85 | 3.701 | -0.355 | -43 | 1.533 | 0.034 | -1 | 1.789 | -0.238 |
-84 | 3.481 | -5.738 | -42 | 1.412 | -0.498 | 0 | 2.107 | -1.145 |
-83 | 4.617 | -3.076 | -41 | 1.229 | -0.092 |
In this exercise you will perform several coverage backtests.
In this exercise, you will perform the graphical and recommended standard distribution tests of Section 14.4 using the data of Exhibit 14.13.
In this exercise, you will perform Christoffersen’s exceedances independence test using the data of Exhibit 14.13.
In this exercise, you will perform our recommended standard loss quantile independence test using the data of Exhibit 14.13.
Independence tests are backtests that assess some form of independence in a value-at-risk measure’s performance from one period to the next. Independence of exceedances tI and independence of loss quantiles tU are separate forms of independence that might be tested for. We have already seen that coverage tests assume the former and most distribution tests assume the latter. If a value-at-risk measure fails an independence test, that can cast doubt on coverage or distribution backtest results obtained for that value-at-risk measure.
There is no way to directly test for independence, so null hypotheses address specific properties of independence—say exceedances not clustering or loss quantiles not being autocorrelated. Accordingly, backtests for independence can be judged, among other things, based on how broad their null hypotheses are.
Christoffersen’s (1998) independence test is a likelihood ratio test that looks for unusually frequent consecutive exceedances—i.e. instances when both t–1i = 1 and ti = 1 for some t. The test is well known: it was first proposed in an often-cited paper endorsing tests for independence of exceedances.
Extending our earlier notation q* for the coverage of a value-at-risk measure, we define

[14.12]  q*0 = Pr(ti = 0 | t–1i = 0)

[14.13]  q*1 = Pr(ti = 0 | t–1i = 1)

These are the value-at-risk measure’s conditional coverages—its actual probabilities of not experiencing an exceedance given that it did not (in the case of q*0) or did (in the case of q*1) experience an exceedance in the previous period. Our null hypothesis H0 is that q*0 = q*1 = q*.
If a value-at-risk measure is observed for α + 1 periods, there will be α pairs of consecutive observations (t–1i, ti). Disaggregate these as

[14.14]  α = α00 + α01 + α10 + α11

where α00 is the number of pairs (t–1i, ti) of the form (0, 0); α01 is the number of the form (0, 1); etc. We want to test if

[14.15]  α00/(α00 + α01) ≈ α10/(α10 + α11)

which would support our null hypothesis. We apply a likelihood ratio test as follows. Assuming H0 doesn’t hold, we estimate q*0 and q*1 with

[14.16]  q̂0 = α00/(α00 + α01)

[14.17]  q̂1 = α10/(α10 + α11)

Assuming H0 does hold, we estimate q* with

[14.18]  q̂* = (α00 + α10)/α

Our likelihood ratio is the likelihood of the data under H0 divided by its likelihood under the unconstrained estimates:

[14.19]  Λ = L(q̂*) / L(q̂0, q̂1)

[14.20]  = [q̂*^(α00 + α10) (1 – q̂*)^(α01 + α11)] / [q̂0^α00 (1 – q̂0)^α01 q̂1^α10 (1 – q̂1)^α11]

and –2log(Λ) is approximately centrally chi-squared with one degree of freedom—that is, –2log(Λ) ~ χ2(1,0)—assuming H0. The 0.95 quantile of the χ2(1,0) distribution is 3.841, so we reject H0 at the .05 significance level if –2log(Λ) ≥ 3.841. Similarly, we reject it at the .01 significance level if –2log(Λ) ≥ 6.635.
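A minimal sketch of the statistic follows, checked against the worked example earlier in the chapter (α00 = 105, α01 = α10 = 9, α11 = 1):

```python
# Sketch of Christoffersen's likelihood ratio statistic -2log(Λ) built from
# [14.16]-[14.20]. Note it is undefined if alpha11 = 0 (log of zero), as the
# text observes for samples with no consecutive exceedances.
import math

def christoffersen_statistic(a00, a01, a10, a11):
    q0 = a00 / (a00 + a01)                       # [14.16]
    q1 = a10 / (a10 + a11)                       # [14.17]
    q = (a00 + a10) / (a00 + a01 + a10 + a11)    # [14.18]
    log_null = (a00 + a10) * math.log(q) + (a01 + a11) * math.log(1 - q)
    log_alt = (a00 * math.log(q0) + a01 * math.log(1 - q0)
               + a10 * math.log(q1) + a11 * math.log(1 - q1))
    return -2 * (log_null - log_alt)

print(christoffersen_statistic(105, 9, 9, 1))    # 0.0517, as in the worked example
```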
The test largely depends on the frequency with which consecutive exceedances are experienced. As these are inherently rare events, the test has limited power. Also, the test isn’t defined when there are no consecutive exceedances at all, which is common. Christoffersen doesn’t address this situation. In some cases it may be reasonable to simply accept the null hypothesis when there are no consecutive exceedances, but not always. For example, if you backtest a one-day 90% value-at-risk measure with 1,000 days of data, there should be about 10 instances of consecutive exceedances. If there are none, it might be inappropriate to accept the null hypothesis.
For a recommended standard test, we assess the independence of the values tN obtained by applying the inverse standard normal CDF to the loss quantiles tU:

[14.21]  tn = Φ–1(tu)

Note that this is the same transformation we made with [14.10]. As before, given loss quantile data –mu, –m+1u, … , –1u, we apply [14.21] to obtain values –mn, –m+1n, … , –1n.
We adopt the null hypothesis that the autocorrelations

[14.22]  ρk = corr(tN, t–kN)

are all 0 for lags k = 1, 2, 3, 4 and 5. We test this hypothesis by calculating the sample autocorrelations of our data –mn, –m+1n, … , –1n for those same five lags. We take the maximum of the absolute values of the five sample autocorrelations; that is our test statistic. We reject the null hypothesis at the .05 significance level if the test statistic exceeds the non-rejection value indicated for sample size α + 1 in Exhibit 14.7.
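A minimal sketch of the test statistic follows. The autocorrelation estimator assumed here divides by the full-sample sum of squares; Exhibit 14.7's values may assume a slightly different estimator:

```python
# Sketch of the recommended standard independence test statistic: the largest
# absolute sample autocorrelation of the tn values at lags 1 through 5.
import numpy as np

def independence_test_statistic(n_values, max_lag=5):
    x = np.asarray(n_values, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)                          # full-sample sum of squares
    autocorrs = [np.dot(x[k:], x[:-k]) / denom for k in range(1, max_lag + 1)]
    return max(abs(r) for r in autocorrs)
```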
Non-rejection values were calculated for each sample size α + 1 with a Monte Carlo analysis that found the 0.95 (for the .05 significance level) or 0.99 (for the .01 significance level) quantile for the test statistic assuming the null hypothesis.
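As a sketch, such a Monte Carlo analysis might look as follows: under the null hypothesis the tn are independent standard normal, so we simulate samples of size α + 1, compute the statistic for each using the independence_test_statistic function sketched above, and take the appropriate quantile. The trial count and seed are arbitrary illustrative choices:

```python
# Sketch: Monte Carlo non-rejection value for the independence test. Under
# the null hypothesis, the tn are i.i.d. standard normal.
import numpy as np

def nonrejection_value(sample_size, trials=100_000, quantile=0.95, seed=0):
    rng = np.random.default_rng(seed)
    stats = [independence_test_statistic(rng.standard_normal(sample_size))
             for _ in range(trials)]
    return float(np.quantile(stats, quantile))
```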
In Christoffersen’s 1998 independence test, α01 routinely equals α10. Why is this, and what would cause them to differ?
Solution
A value-at-risk measure is to be backtested using Christoffersen’s 1998 independence test. Based on 250 days of exceedance data, α00 = 237, α01 = α10 = 5, and α11 = 2. Do we reject the value-at-risk measure at the .10 significance level?
Solution
A value-at-risk measure is to be backtested using our recommended standard independence test and 500 days of data. Values tn are calculated, and their sample autocorrelations are determined to be 0.034, –0.078, –0.124, 0.107 and 0.029 for lags 1 through 5, respectively. Do we reject the value-at-risk measure at the .05 significance level?
Solution