0.4 Notation

Good notation is more than a convention. It shapes how we think, streamlines problem solving, avoids misunderstandings and facilitates communication. Nowhere was this more evident than in the introduction of Hindu-Arabic numerals into Europe during the Middle Ages. Because they made arithmetic computations so easy, the new numerals supplanted both Roman numerals and the abacus, which had been widely used in Europe. Another example comes from calculus. While Newton and Leibniz disputed who discovered the fundamental theorem of calculus, there is no dispute as to whose notation was superior. We still use Leibniz’s notation today.

One of the contributions of this book is consistent notation for expressing ideas related to value-at-risk. Once you have mastered the notation while reading the book, I encourage you to keep using it. The notation will guide your thinking and help you avoid pitfalls.

Value-at-risk draws on many branches of mathematics. Each offers its own notation conventions. Because these conflict, it is impossible to observe them all simultaneously. But the book expresses financial concepts from calculus, linear algebra, probability, statistics, time series analysis, and numerical methods with a single consistent notation, drawing on existing conventions where possible.

Random quantities are indicated with capital English letters. If they are univariate random variables, they are italic non-bold: Q, R, S, X, etc. If they are multivariate in some sense—random vectors, random matrices, stochastic processes—they are italic bold: Q, R, S, X, etc. Nonrandom quantities are indicated with lowercase italic letters. These are nonbold for scalars: q, r, s, x, etc. They are bold for vectors, matrices, or time series: q, r, s, x, etc.

With this notation, if a random variable is denoted X, a specific realization of that random variable may be denoted x. Such notational correspondence between random quantities and realizations of those random quantities is employed throughout the book.
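As a typesetting aside, here is a minimal LaTeX sketch of these conventions; the particular symbols and the value shown are purely illustrative and not drawn from the book, and the snippet assumes the amsmath package is loaded (for \boldsymbol):

    % A univariate random variable (italic capital) and a realization of it (italic lowercase):
    \[ X \sim N(\mu, \sigma^{2}), \qquad x = 1.37 \]
    % A random vector (bold italic capital) and a realization of it (bold italic lowercase):
    \[ \boldsymbol{X} = (X_{1}, X_{2}, X_{3}), \qquad \boldsymbol{x} = (x_{1}, x_{2}, x_{3}) \]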

Components of vectors or matrices are distinguished with subscripts. Consider the random vector

[0.1]

or the matrix

[0.2]

Time also enters the equation. To avoid confusion, I do not indicate time with subscripts. Instead, I use superscripts that precede the rest of the symbol. For example, the Australian dollar Libor curve evolves over time. We may represent its value at time t as

[0.3]

The value at time 3 of 1-month AUD Libor is a component of 3R. The entire curve at time 1 is denoted 1R. The univariate stochastic process representing 1-week AUD Libor over time is denoted R2. The 15-dimensional stochastic process representing the entire curve over time is denoted R. Time 0 is generally considered the current time. At time 0, current and past Libor curves are known. As nonrandom vectors, they are represented with lowercase bold letters: 0r, –1r, –2r, etc. If time is measured in days, yesterday’s value of 12-month AUD Libor is the corresponding component of –1r.

The advantage of using preceding superscripts to denote time is clarity. By keeping time and component indices physically separate, my notation ensures that one will never be mistaken for the other. Use of preceding superscripts is unconventional but not without precedent. Actuarial notation makes extensive use of preceding superscripts.
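To illustrate how the preceding superscripts might be typeset, here is a minimal LaTeX sketch, again assuming amsmath; the component indices shown are purely illustrative and are not tied to particular Libor maturities:

    % The curve at time t as a 15-dimensional random vector with subscripted components:
    \[ {}^{t}\boldsymbol{R} = \left( {}^{t}R_{1},\, {}^{t}R_{2},\, \dots,\, {}^{t}R_{15} \right) \]
    % The known (nonrandom) curve at the current time 0, written in lowercase bold:
    \[ {}^{0}\boldsymbol{r} = \left( {}^{0}r_{1},\, {}^{0}r_{2},\, \dots,\, {}^{0}r_{15} \right) \]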

Much other notation is standardized, as will become evident as the book unfolds. Some frequently occurring notation is summarized below.

log – natural logarithm
n! – factorial of an integer n, which is given by the product n! = 1·2·3 ··· (n – 1)·n, with 0! = 1
B(m, p) – binomial distribution for the number of “successes” in m trials, each with probability p of success
U(a, b) – uniform distribution on the interval (a, b)
Un(Ω) – n-dimensional uniform distribution on the region Ω
N(μ, σ2) – normal distribution with mean μ and variance σ2
Λ(μ, σ2) – lognormal distribution with mean μ and variance σ2
χ2(ν, δ2) – chi-squared distribution with ν degrees of freedom and non-centrality parameter δ2
Nn(μ, Σ) – joint-normal distribution with mean vector μ and covariance matrix Σ
1P – random variable for a portfolio’s value at time 1
0p – portfolio value at time 0
(0p, 1P) – a portfolio
1L – random variable for portfolio loss: 0p – 1P
1R – random vector of key factors (key vector)
0r – vector of key factor values at time 0
1S – random vector of asset values at time 1 (asset vector)
0s – vector of asset values at time 0
1Q, 0q, etc. – any of these might indicate a risk vector that is not a key vector
E( ) – unconditional expected value
tE( ) – expected value conditional on information available at time t
std( ) – unconditional standard deviation
tstd( ) – standard deviation conditional on information available at time t
var( ) – unconditional variance
tvar( ) – variance conditional on information available at time t
tμ – unconditional mean of the time t term of a stochastic process
t|t–kμ – mean of the time t term of a stochastic process conditional on information available at time t – k
tΣ – unconditional covariance matrix of the time t term of a stochastic process
t|t–kΣ – covariance matrix of the time t term of a stochastic process conditional on information available at time t – k
θ – frequently used to denote a portfolio mapping function
frequently used to denote a (non-portfolio) mapping function
ω – portfolio holdings
tφ( ) – unconditional PDF of the time t term of a stochastic process
t|t–kφ( ) – PDF of the time t term of a stochastic process conditional on information available at time t – k
tΦ( ) – unconditional CDF of the time t term of a stochastic process
t|t–kΦ( ) – CDF of the time t term of a stochastic process conditional on information available at time t – k
Tildes can be placed above or between symbols. Placed above, a tilde indicates a remapping; for example, placing tildes above the symbols in 1P = θ(1R) denotes a remapping of that portfolio mapping. Placed between, a tilde indicates that a random variable or random vector has a particular distribution; for example, X ~ N(0,1) indicates that random variable X is standard normal.
A tilde marked with a time t indicates that a random variable or random vector has a particular distribution, conditional on information available at time t. For example, marking the tilde in 1X ~ N(0,1) with a 0 indicates that, conditional on information available at time 0, 1X is standard normal.
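A minimal LaTeX sketch of the two tilde usages follows; the placement of the conditioning time as a subscript on the tilde is shown purely for illustration:

    % A tilde above symbols marks a remapping of {}^{1}P = \theta({}^{1}\boldsymbol{R}):
    \[ {}^{1}\tilde{P} = \tilde{\theta}\!\left( {}^{1}\boldsymbol{R} \right) \]
    % A tilde between symbols indicates a distribution; a time on the tilde makes it conditional:
    \[ X \sim N(0,1), \qquad {}^{1}X \sim_{0} N(0,1) \]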
A special symbol next to an exercise indicates that analytic software more sophisticated than a spreadsheet may be useful in solving it.

For more detailed explanations of notation, Section 2.2 addresses general mathematical notation used throughout the book. Section 4.6 elaborates on notation for time series analysis. Section 1.8 explains the notation of value-at-risk measures.

Currencies are indicated with standard codes:

Exhibit 0.1: Currency codes

Where applicable, millions are indicated as MM. For example, 3.5 million Japanese yen is indicated JPY 3.5MM.

Exchange rates are indicated as fractions, so an exchange rate of 1.62 USD/GBP indicates that one British pound is worth 1.62 US dollars.

Acronyms used include those shown in Exhibit 0.2.

BBA – British Bankers’ Association
CAD – Capital Adequacy Directive
CBOT – Chicago Board of Trade
CDF – cumulative distribution function
CME – Chicago Mercantile Exchange
CSCE – Coffee, Sugar and Cocoa Exchange
EICCG – explicit inversive congruential generator
ETL – expected tail loss
ICG – inversive congruential generator
IID – independent and identically distributed
IPE – International Petroleum Exchange
LCG – linear congruential generator
Libor – London Interbank Offered Rate
LIFFE – London International Financial Futures and Options Exchange
LME – London Metal Exchange
MGF – moment generating function
ML – maximum likelihood
MRG – multiple recursive generator
MSE – mean squared error
NYBOT – New York Board of Trade
NYMEX – New York Mercantile Exchange
NYSE – New York Stock Exchange
OTC – over the counter
P&L – profit and loss
PDF – probability density function
PF – probability function
PMMR – probabilistic metric of market risk
RAROC – risk-adjusted return on capital
RORAC – return on risk-adjusted capital
SEC – Securities and Exchange Commission
TSE – Toronto Stock Exchange
UNCR – Uniform Net Capital Rule
VaR – value-at-risk
WCE – Winnipeg Commodity Exchange
Exhibit 0.2: Acronyms

 

0.3 How To Read This Book

Recognize that this is a book. In printed form, it would be about 500 pages. If you read it from start to finish, and work through all of its 100 or so exercises, you will come away with expertise in a substantial body of quantitative finance. To facilitate reading, the book is implemented as a responsive website that displays beautifully on a cellphone, tablet or desktop.

But you may just want to use the book as a reference. If that is your plan, I recommend you first familiarize yourself with the book’s notation, terminology and bottom-up approach to explaining value-at-risk. All three are carefully thought-out and worth learning in their own right. Read the next section, Section 0.4, for a quick introduction to notation. Then proceed to Chapter 1. It starts with basic concepts but rapidly becomes sophisticated. It blends in some light fare about risk management and the history of value-at-risk, but its focus is introducing terminology and the bottom-up approach to explaining value-at-risk. Once you have mastered Chapter 1, you can understand any other part of the book. Proceed to the table of contents or search feature to find topics that interest you. If you come across mathematics you are unfamiliar with, look to the chapters of Part II for explanations. They cover essential math.

To read the book in its entirety, here is a roadmap. The book is broken into four parts:

  • Part I – Overview (Chapters 0 – 1)
  • Part II – Essential Mathematics (Chapters 2 – 5)
  • Part III – Value-at-Risk (Chapters 6 – 11)
  • Part IV – Implementation and Validation (Chapters 12 – 14)

You are now reading Section 0.3 of the Preface, which is Chapter 0 in Part I. Read the next section, Section 0.4, which introduces notation, and then proceed to Chapter 1. Depending upon your math skills, some of its quantitative examples may be a bit intimidating. Skim them and move on. You can return later.

Part II covers essential mathematics that is anticipated in Part I and used extensively in Parts III and IV. If you need to review basic calculus, linear algebra or probability, there are references at the ends of Chapters 2 and 3.

To avoid seeming “cookbookish”, I have treated the math of Part II as a stand-alone topic and resisted the temptation to immediately illustrate concepts with value-at-risk applications. This should serve you well, since much of the math can be used in value-at-risk measures in different ways. Most of it is invaluable for financial applications unrelated to value-at-risk, so it is worth learning in its own right. However, Part II is focused. Only topics that will be relevant later in the book are covered. It is not essential that you master all of Part II before proceeding to Part III. Some readers may want to skim the math and refer back to it as needed.

Part III is a bottom-up explanation of how to design value-at-risk measures. Don’t attempt it until you have mastered Chapter 1. If you have difficulty with the mathematics of Chapter 1, learn that mathematics in Part II and then reread Chapter 1 before proceeding to Part III. Part III has a lot of technical depth. You may want to read it twice. Go quickly through the material the first time to get an overview of how it all fits together. Read it more carefully the second time to gain deeper understanding.

Part IV can be read at any time after you have read Chapter 1. Refer back to Part II for discussions of relevant mathematics as necessary.

Exercises are an essential part of the text. I encourage you to work through as many as time permits. Doing so will accelerate learning and provide insights that are difficult to achieve through reading alone. Most exercises can be performed with pencil and paper or a spreadsheet. For a few, more sophisticated analytical software will be useful. Such exercises are indicated with a special symbol. Solutions are provided for all exercises. Just click on the “Solution” button at the end of each exercise.

Permission is granted for use of this book, its exercises and solutions in any public or private classroom setting. However, none of the content may be sold, repackaged or hosted on an independent platform. The preparation of derivative works based on any of the content is prohibited.

0.2 Voldemort and the Second Edition

The first edition of this book was published in hardcover by Academic Press/Elsevier in 2003. Readers of my blog are familiar with the bizarre circumstances that caused that first edition to be abruptly pulled from the market. A consistent pattern of problems had emerged with the book’s marketing and distribution. For example, the publisher distributed an incorrect ISBN. They distributed incorrect shelving instructions for the book, indicating that bookstores should shelve it under marketing or radical economics. The book was not cataloged with the US Library of Congress.

Eventually, I sat down and wrote a list of all the things someone might do if they wanted to sabotage the marketing for a book. I then investigated how this book was being marketed and found that every item on that list had occurred. The book had not been listed in the catalog Books In Print. Professors who requested review copies never received them, even after repeated requests and intervention by me. Misinformation was posted to on-line bookstores. The list went on and on.

Behind the scenes, the author of a competing book on value-at-risk lurked. Dubbed “Junior” by me and “Voldemort” by one reader of my blog, this ethically challenged individual had a prior business relationship with one employee of the publisher. I won’t hazard a guess as to whether Junior paid bribes. I can think of no other explanation.

The situation was so bizarre and extreme that, when I approached the publisher’s senior management, they readily gave me back the copyright to the book. The book was pulled from the market, causing the price of used copies available on-line to skyrocket to over $1,000. This unsatisfactory state of affairs persisted for a few years because I didn’t have time to deal with it. I eventually decided that the only way to ensure people had access to the book was to post it for free on the Internet.

That was my plan. This second edition came about because a number of people mentioned that, even with the book freely available on the Internet, they would gladly pay for a hardcover copy they could hold in their hands—this was back in 2009 when e-books were still a novelty. I found time to approach a new publisher, Chapman & Hall/CRC, and we decided that we would produce a hardcover second edition to supplement the free on-line first edition. As things turned out, it is the new second edition that you are now reading for free on-line. I will explain how that happened below, but first let me tell you what is new in the second edition.

Much is new. Four new chapters cover historical simulation, implementation, model risk and backtesting. Chapters 3 and 4 have been expanded with new material on probability and statistics. Other minor changes appear throughout the book. There are 22 new exercises, with solutions provided. Scattered amidst the new content, I have provided new details about the history of value-at-risk as well as a number of personal anecdotes from my consulting work.

In the first edition, I defined value-at-risk more broadly than most authors. While the common definition of value-at-risk is that it represents a quantile of loss over some specified horizon, my definition encompassed many other probabilistic metrics of market risk, including expected tail loss (ETL) and variance of return. I embraced the broader definition because the computations to calculate value-at-risk, ETL, variance of return and other metrics are essentially identical. They differ only in the very last step. Accordingly, techniques described in this book can be used to calculate any of these metrics, which is why I extended the definition of value-at-risk in the first edition to encompass them all. There was, however, some protest over that decision. In this second edition, I revert to the common definition. I now use the term “probabilistic metric of market risk” (PMMR) to refer to the broader category of market risk metrics that includes value-at-risk, ETL and variance of return, among others. See Section 1.4.
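To make concrete the claim that these metrics differ only in their final step, here is a small illustrative sketch of my own (not the book’s code; the normal model, seed and all numbers are hypothetical) that computes value-at-risk, ETL and variance of return from a single simulated distribution of portfolio value:

    import numpy as np

    # Illustrative sketch only: one simulated distribution of portfolio value supports
    # several probabilistic metrics of market risk (PMMRs); only the final summary
    # statistic differs.
    rng = np.random.default_rng(seed=0)

    p0 = 100.0                                     # current portfolio value 0p (hypothetical)
    p1 = p0 + rng.normal(0.0, 5.0, size=100_000)   # simulated values 1P under a hypothetical model
    loss = p0 - p1                                 # portfolio loss 1L = 0p - 1P

    var_95 = np.quantile(loss, 0.95)               # value-at-risk: the 0.95 quantile of loss
    etl_95 = loss[loss >= var_95].mean()           # expected tail loss beyond that quantile
    ret_variance = np.var(p1 / p0 - 1.0)           # variance of simple return

    print(f"95% value-at-risk: {var_95:.2f}")
    print(f"95% expected tail loss: {etl_95:.2f}")
    print(f"variance of return: {ret_variance:.6f}")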

Notation remains unchanged from the first edition with one exception. Readers familiar with the first edition will notice at the end of Section 4.6.1 that I have modified how I use tildes to denote conditional distributions.

It has become customary to use the term “value-at-risk” to refer to any quantile-of-loss metric of financial risk, so people speak of value-at-risk metrics of credit risk or value-at-risk metrics of operational risk. I am less extravagant. In the first edition, I stated firmly that I defined value-at-risk as applicable to market risk only. At the time—back in 2003—“credit VaR” measures were flourishing. These are measures of credit risk that purport to reflect, say, the 0.99 quantile of a portfolio’s one-year loss to defaults. The fact is that defaults evolve over a business cycle, and one business cycle is different from the next. A 0.99 quantile of a portfolio’s credit risk would be the loss due to defaults that a portfolio would be expected to suffer over a particular year at a particular stage of one business cycle out of a hundred business cycles. Even if we had twenty years of relevant default data for each instrument or issuer, that would only represent three to five business cycles—too few data points to have statistical relevance. Such meager data cannot possibly support claims about a portfolio’s 0.99 quantile of one-year loss due to defaults. I don’t care how elaborate the mathematics of these measures might be. Mathematics is no substitute for empirical data. Any assertion about the 0.99 quantile of a portfolio’s one-year loss due to defaults is meaningless. In other words, most so-called credit VaR measures are meaningless. We don’t have to consider their analytics to reach this conclusion. It follows immediately from the paucity of default data.

This was my position in 2003. The 2008 meltdown in credit markets vindicated that position. Credit VaR measures, and related credit pricing models, spectacularly failed to warn of what was coming. Now there is talk of “building better models.” Such chatter is inevitable, I suppose, but it is nonsense. Truth be told, there was never any risk in the collateralized debt obligations (CDOs) that crippled the credit markets in 2008. They were guaranteed to fail. If you jump from an airplane without a parachute, you aren’t taking risk. There is no uncertainty about the outcome. We don’t need models to tell us any of this.

A strength of this book is its numerous real-world examples from capital, energy and commodity markets. Some of these were historical examples. Others were current, at least for 2003. Much has changed in a decade. Due to a wave of mergers, many exchanges mentioned in those examples no longer exist as independent entities. A few instruments also no longer exist or otherwise lack the liquidity they once had. I have not updated examples to reflect such changes. What would be the point? Further changes would soon render even the updated examples outdated. I merely ask readers to treat outdated examples as historical examples. All retain the relevance and insights that they had a decade ago.

As I sent the manuscript for the second edition to Chapman & Hall/CRC, I was mindful that publishing is an insular business. People know each other from one publisher to the next. If bribes motivated the problems at the old publisher, they might purchase problems at the new publisher. I monitored developments closely and was disappointed when irregularities soon emerged with advance marketing for the second edition. As with the previous publisher, I received ever more elaborate excuses as problems mounted. For me, the clincher was when “author information” appeared on on-line bookstores, erroneously indicating I was a professor at the University of Maryland. This was a minor “mistake”, but the publisher of the first edition had done exactly the same thing, erroneously indicating that I was a professor at the University of Washington. The stark similarity of these two supposedly random “mistakes”—coming on top of everything else—had the feel of someone thumbing his nose at me … dropping an unmistakable hint that Voldemort was back in the driver’s seat. I approached the new publisher’s management and proposed we terminate our publishing agreement, even before the second edition went to press. They did the expedient thing and agreed. That is how I decided to make the entire second edition available for free on the Internet.

Chapter 0

Preface

0.1 What We’re About

A watershed in the history of value-at-risk (VaR) was the publication of J.P. Morgan’s RiskMetrics Technical Document. Writing in the third edition of that document, Guldimann (1995) went beyond explaining RiskMetrics and described certain alternative “methods” for calculating value-at-risk. Authors of magazine articles, research papers and software marketing materials similarly described how value-at-risk might be calculated using various “methods”. Early “methods” abounded, and they were given an assortment of names. But authors soon winnowed the list to three practical “methods” in widespread use:

  • the parametric method,
  • the historical simulation method, and
  • the structured Monte Carlo method.

Describing three “methods” for calculating value-at-risk is simple, intuitive and direct. Only one truly new “method” has been introduced since 1995. It might be termed the “quadratic method”; Rouvinez (1997) ultimately published it.

For some time, I felt the top-down “methods” approach for explaining value-at-risk was flawed. Suppose pioneers of aviation had settled on describing three “methods” for building airplanes:

  • the monoplane method,
  • the biplane method, and
  • the triplane method.

That would have missed so much. What about different engine technologies? What about different construction materials? What about so many other features, such as retractable landing gear, pressurized cabins, swept-back wings, fly-by-wire, etc.?

Top-down explanations—such as the “methods” approaches for explaining value-at-risk or aircraft design—are appealing because they go directly to results, but they lead nowhere after that. They inevitably narrow discussion. By comparison, bottom-up explanations build a foundation for deep understanding and further research. I felt that value-at-risk long ago outgrew the top-down “methods” approach of explanation. I wrote this book to provide a flexible bottom-up explanation of value-at-risk.

And I had a second goal for the book: I wanted it to be the first advanced text on value-at-risk, suitable for quantitative professionals.

The book has its origins in 1997, when I first put pen to paper. It took six years to write the first edition, but it achieved my two goals. It described from the bottom up how to design scalable production value-at-risk measures for real trading organizations. Practical, detailed examples were drawn from markets around the world, such as Euro deposits, Pacific Basin equities, physical coffees, and North American natural gas. Sophisticated techniques were presented in book form for the first time, including variance reduction for Monte Carlo value-at-risk measures, quadratic (so-called “delta-gamma”) methods for nonlinear portfolios, and essential remapping techniques. Real-world challenges relating to market data, portfolio mappings, multicollinearity, and intra-horizon events were addressed in detail. Exercises reinforced concepts and walked readers step-by-step through sophisticated computations. For practitioners, researchers, and students, the first edition was an authoritative guide to implementing real-world value-at-risk measures.