# 1.7 Examples

Let’s consider some examples of risk measures. These will introduce basic concepts and standard notation. They will also illustrate a framework for thinking about value-at-risk measures (and, more generally, measures of PMMRs), which we shall formalize in Section 1.8.

###### 1.7.1 Example: The Leavens PMMR

Value-at-risk metrics first emerged in finance during the 1980s, but they were preceded by various other PMMRs, including Markowitz’s (1952) variance of simple return. Even earlier, Leavens (1945) published a paper describing the benefits of diversification. He accompanied his explanations with a simple numerical example:

Measure time t in appropriate units. Let time t = 0 be the current time. Leavens considers a portfolio of 10 bonds over some horizon [0, 1]. Each bond will either mature at time 1 for USD 1000 or default and be worthless. Events of default are assumed independent. The portfolio’s market value at time 1 is given by the sum of the individual bonds’ accumulated values at time 1:

[1.6]  ${}^1P = {}^1S_1 + {}^1S_2 + \cdots + {}^1S_{10}$

Let’s express this relationship in matrix notation. Let ${}^1\boldsymbol{S}$ be a random vector with components ${}^1S_i$. Let ω be a row vector whose components are the portfolio’s holdings in each bond. Since the portfolio holds one of each, ω has a particularly simple form:

[1.7]  $\omega = (1,\, 1,\, \ldots,\, 1)$

With this matrix notation, [1.6] becomes the product:

[1.8]  ${}^1P = \omega\,{}^1\boldsymbol{S}$

Let ${}^{1|0}\phi_i$ denote the probability function, conditional on information available at time 0, of the ith bond’s value at time 1:

[1.9]  ${}^{1|0}\phi_i(s) = \begin{cases} 0.1 & s = 0 \\ 0.9 & s = 1000 \end{cases}$

Measured in USD 1000s, the portfolio’s value has a binomial distribution with parameters n = 10 and p = 0.9. The probability function is graphed in Exhibit 1.3.

Exhibit 1.3: The market value (measured in USD 1000s) of Leavens’ bond portfolio has a binomial distribution with parameters 10 and 0.9.

Writing for a non-technical audience, Leavens does not explicitly identify a risk metric, but he speaks repeatedly of the “spread between probable losses and gains.” He seems to have the standard deviation of portfolio market value in mind. For the portfolio in his example, that PMMR has the value USD 948.68—the square root of the binomial variance 10 × 0.9 × 0.1 = 0.9, measured in USD 1000s.
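Leavens’ arithmetic is easy to reproduce. Below is a minimal Python sketch—ours, not from the original paper—that constructs the binomial probability function for the portfolio’s value and recovers the standard deviation of that value:

```python
from math import comb, sqrt

n, p, face = 10, 0.9, 1000  # bonds held, survival probability, face value (USD)

# Probability that exactly k of the 10 bonds pay off at time 1
pmf = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

# Mean and variance of portfolio market value at time 1
mean = sum(k * face * q for k, q in pmf.items())
var = sum((k * face - mean) ** 2 * q for k, q in pmf.items())

print(round(mean, 2), round(sqrt(var), 2))  # 9000.0 948.68
```

The standard deviation agrees with the direct binomial formula: in USD 1000s, the variance is 10 × 0.9 × 0.1 = 0.9, so the standard deviation is √0.9 ≈ 0.94868, or about USD 948.68.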

Our next two examples are more technical. Many readers will find them simple. Other readers—those whose mathematical background is not as strong—may find them more challenging. A note for each group:

• For the first group, the examples may tell you things you already know, but in a new way. They introduce notation and a framework for thinking about value-at-risk that will be employed throughout the text. At points, explanations may appear more involved than the immediate problem requires. Embrace this complexity. The framework we start to develop in the examples will be invaluable in later chapters when we consider more complicated value-at-risk measures.
• For the second group, you do not need to master the examples on a first reading. Don’t think of them as a main course. They are not even an appetizer. We are taking you back into the kitchen to sample a recipe or two. Don’t linger. Taste and move on. In Chapters 2 through 5, we will step back and explain the mathematics used in the examples—and used in value-at-risk measures generally. A purpose of the examples is to provide practical motivation for those upcoming discussions.

A formula we will use in the next two examples is introduced here. We cover it in more detail in Section 3.5.

Let X be a random vector with covariance matrix Σ. Define random variable Y as a linear polynomial

[1.10]  $Y = \boldsymbol{b}\,\boldsymbol{X} + a$

of X, where b is an n-dimensional row vector and a is a scalar. The variance of Y is given by

[1.11]  $var(Y) = \boldsymbol{b}\,\Sigma\,\boldsymbol{b}'$

where a prime ′ indicates transposition. Formula [1.11] is a quintessential formula for describing how correlated risks combine, but there is a caveat. It only applies if Y is a linear polynomial of X.
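To see formula [1.11] at work, here is a small Python sketch with a hypothetical three-component random vector; the coefficients in b and the covariance matrix Σ are invented for illustration:

```python
# Hypothetical coefficient row vector b and covariance matrix Sigma of X
b = [2.0, -1.0, 0.5]
Sigma = [[4.0, 1.0, 0.0],
         [1.0, 9.0, 2.0],
         [0.0, 2.0, 1.0]]

# [1.11]: var(Y) = b Sigma b', written out as a double sum
n = len(b)
var_Y = sum(b[i] * Sigma[i][j] * b[j] for i in range(n) for j in range(n))
print(var_Y)  # 19.25
```

The double sum is exactly the matrix product bΣb′; with a linear-algebra library the last line would be a one-line matrix expression.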

###### Exercises
1.10

Using a spreadsheet, extend Leavens’ analysis to a bond portfolio that holds 20 bonds.

1. Graph the resulting probability function for ${}^1P$.
2. What value does Leavens’ “spread between probable losses and gains” PMMR take for the portfolio?

1.11

Using only the information provided in the example, which of the following PMMRs could we evaluate for Leavens’ bond portfolio?

1. 95% quantile of loss;
2. variance of portfolio value;
3. standard deviation of simple return.

# 1.5 Risk Limits

In a context where risk taking is authorized, risk limits are bounds placed on that risk taking.

Suppose a pension fund hires an outside investment manager to invest some of its assets in intermediate corporate bonds. The fund wants the manager to take risk on its behalf, but it has a specific form of risk in mind. It doesn’t want the manager investing in equities, precious metals, or cocoa futures. It communicates its intentions with contractually binding investment guidelines. These specify acceptable investments. They also place bounds on risk taking, such as requirements that:

• the portfolio’s duration always be less than 7 years;
• all bonds have a credit rating of BBB or better.

The first is an example of a market risk limit; the second, of a credit risk limit.

A risk limit has three components:

1. a risk metric,
2. a risk measure that supports the risk metric, and
3. a bound—a value for the risk metric that is not to be breached.

At any point in time, a limit’s utilization is the actual amount of risk being taken, as quantified by the risk measure. Any instance where utilization breaches the risk limit is called a limit violation.

A bank’s corporate lending department is authorized to lend to a specific counterparty subject to a credit exposure limit of GBP 10MM. For this purpose, the bank measures credit exposure as the total amount of outstanding loans and loan commitments to the counterparty. The lending department lends the counterparty GBP 8MM, bringing its utilization of the limit to GBP 8MM. Since the limit is GBP 10MM, the lending department has remaining authority to lend up to GBP 2MM to the counterparty.

A metals trading firm authorizes a trader to take gold price risk subject to a 2000 troy ounce delta limit. Using a specified measure of delta, his portfolio’s delta is calculated at 4:30 PM each trading day. Utilization is calculated as the absolute value of the portfolio’s delta.
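The limit arithmetic in these examples can be sketched in a few lines of Python; the function names and numbers are illustrative, not a prescribed implementation:

```python
DELTA_LIMIT = 2000.0  # troy ounces, the trader's delta limit

def utilization(portfolio_delta: float) -> float:
    """Utilization of a delta limit is the absolute value of portfolio delta."""
    return abs(portfolio_delta)

def is_violation(portfolio_delta: float, limit: float = DELTA_LIMIT) -> bool:
    """A limit violation occurs whenever utilization breaches the limit."""
    return utilization(portfolio_delta) > limit

print(utilization(-1750.0), is_violation(-1750.0))  # 1750.0 False
print(utilization(2400.0), is_violation(2400.0))    # 2400.0 True
```

Note that a short (negative-delta) position of 1,750 ounces utilizes the limit just as a long position would, because utilization is defined as an absolute value.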

###### 1.5.1  Market Risk Limits

For monitoring market risk, many organizations segment portfolios in some manner. They may do so by trader and trading desk. Commodities trading firms may do so by delivery point and geographic region. A hierarchy of market risk limits is typically specified to parallel such segmentation, with each segment of the portfolio having its own limits. Limits generally increase in size as you move up the hierarchy—from traders to desks to the overall portfolio, or from individual delivery points to geographic regions to the overall portfolio.

Exhibit 1.1 illustrates how a hierarchy of market risk limits might be implemented for a trading unit. A risk metric is selected, and risk limits are specified based upon it. Each limit is depicted with a cylinder. The height of the cylinder corresponds to the size of the limit. The trading unit has three trading desks, each with its own limit. There are also limits for individual traders, but only those for trading desk A are shown. The extent to which each cylinder is shaded corresponds to the utilization of that limit. Trader A3 is utilizing almost all his limit. Trader A4 is utilizing little of hers.

Exhibit 1.1: A hierarchy of market risk limits is illustrated for a hypothetical trading unit. A risk metric—value-at-risk, delta, etc.—is chosen. Risk limits are specified for the portfolio and sub-portfolios based upon this. The limits are depicted with cylinders. The height of each cylinder corresponds to the size of the limit. The degree to which it is shaded black indicates current utilization of the limit. Fractions next to each cylinder indicate utilization and limit size. Units are not indicated here, as these will depend upon the particular risk metric used. Individual traders have limits, but only those for traders on desk A are indicated in the exhibit.

For such a hierarchy of risk limits to work, an organization must have a suitable risk measure to calculate utilization of each risk limit on an ongoing basis. Below, we describe three types of market risk limits, culminating with value-at-risk limits.

###### 1.5.2  Stop-Loss Limits

A stop-loss limit indicates an amount of money that a portfolio’s single-period market loss should not exceed. Various periods may be used, and sometimes multiple stop-loss limits are specified for different periods. A trader might be given the following stop-loss limits:

• 1-day EUR 0.5MM;
• 1-week EUR 1.0MM;
• 1-month EUR 3.0MM.

A limit violation occurs whenever a portfolio’s single-period market loss exceeds a stop-loss limit. In such an event, a trader is usually required to hedge material exposures—hence the name stop-loss limit. Stop-loss limits have shortcomings:

• Single-period market loss is a retrospective measure of risk. It only indicates risk after financial consequences of that risk have been realized.
• Single-period loss provides an inconsistent indication of risk. If a portfolio suffers a large loss over a given period, this is a clear indication of risk. If the portfolio does not suffer a large loss, this does not indicate an absence of risk!
• Traders cannot control the specific losses they incur, so it is difficult to hold them accountable for isolated stop-loss limit violations.

Despite their shortcomings, stop-loss limits are simple and convenient to use. Non-specialists easily understand stop-loss limits. A single proxy for risk—experienced loss—can be applied consistently across different types of exposures and with different trading strategies. Calculating utilization is as simple (or difficult, in some cases) as marking a portfolio to market. For these reasons, stop-loss limits are widely implemented by trading organizations.

###### 1.5.3  Exposure Limits

Exposure limits are limits based upon an exposure risk metric. For limiting market risk, common metrics include: duration, convexity, delta, gamma, and vega. Crude exposure limits may also be based upon notional amounts. These are called notional limits. Many exposure metrics can take on positive or negative values, so utilization may be defined as the absolute value of exposure.

Exposure limits address many of the shortcomings of stop-loss limits. They are prospective, indicating risk before its financial consequences are realized. Also, exposure metrics provide a reasonably consistent indication of risk. For the most part, traders can be held accountable for exposure limit violations because they largely control their portfolio’s exposures. There are rare exceptions. A sudden market rise may cause a positive-gamma portfolio’s delta to increase, resulting in an unintended delta limit violation. For the most part, utilization of exposure limits is easy to calculate. There may be analytic formulas for certain exposure metrics. At worst, a portfolio must be valued under multiple market scenarios with some form of interpolation applied to assess exposure.
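The unintended delta violation described above is easy to illustrate. The following Python sketch applies the first-order approximation delta + gamma·ΔS; the portfolio numbers and the 1,500 delta limit are hypothetical:

```python
def delta_after_move(delta: float, gamma: float, d_s: float) -> float:
    """First-order estimate of portfolio delta after the underlier moves by d_s.
    Gamma is the rate of change of delta with respect to the underlier."""
    return delta + gamma * d_s

# Hypothetical positive-gamma portfolio operating near a 1,500 delta limit
new_delta = delta_after_move(delta=1400.0, gamma=25.0, d_s=5.0)
print(new_delta)  # 1525.0: the sudden 5-unit market rise breaches the limit
```

The trader set delta comfortably inside the limit, yet the market move alone pushed utilization through it—exactly the rare exception noted above.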

Exposure limits pose a number of problems:

• At higher levels of portfolio aggregation, exposure limits can multiply. While a trader who transacts in 30 stocks might require 30 delta limits, the entire trading floor he works on might transact in 3,000 stocks, requiring 3,000 delta limits.
• Different exposure limits may be required to address dissimilar exposures or different trading strategies. For example, delta might need to be supplemented with duration or rho to address yield curve risk. It might need to be supplemented with gamma or vega to address options risk.
• Custom exposure limits may be required to address specialized trading strategies such as cross-hedging, spread trading or pairs trading that reduce risk by taking offsetting positions in correlated assets. In such contexts, any delta limits must be large to accommodate each of the offsetting positions. Being so large, they cannot ensure reasonable hedging consistent with the intended trading strategy.
• With the exception of notional limits, non-specialists do not easily understand exposure limits. For example, it is difficult to know what might be a reasonable delta limit for an electricity trading desk if you don’t have both a technical understanding of what delta means and practical familiarity with the typical size of market fluctuations in the electricity market.

###### 1.5.4  Value-at-Risk Limits

Value-at-risk is used for a variety of tasks, but supporting risk limits is its quintessential purpose. When risk limits are measured in terms of value-at-risk, they are called value-at-risk limits. These combine many of the advantages of exposure limits and stop-loss limits.

Like exposure metrics, value-at-risk metrics are prospective. They indicate risk before its economic consequences are realized. Also like exposure metrics, value-at-risk metrics provide a reasonably consistent indication of risk. Finally, as long as utilization is calculated for traders in a timely and ongoing manner, it is reasonable to hold those traders accountable for value-at-risk limit violations. As with exposure limits, there are rare exceptions. Consider a trader with a negative gamma position. While she is responsible for hedging the position on an ongoing basis, it is possible that a sudden move in the underlier will cause an unanticipated spike in value-at-risk.

As with stop-loss limits, non-specialists may intuitively understand value-at-risk limits. If a portfolio has 1-day 90% USD value-at-risk of 7.5MM, a non-specialist can understand that such a portfolio will lose less than USD 7.5MM an average of 9 days out of 10. As with stop-loss limits, a single limit can suffice at each level of portfolio aggregation—at the position level, trader level, trading desk level, sub-portfolio level and portfolio level. And value-at-risk limits are uniformly applicable to all sources of market risk and all trading strategies. Of course, such generality is theoretical. Whether a particular value-at-risk measure can address the market risk associated with specific instruments or trading strategies depends on that measure’s generality and sophistication.

This brings us to the drawbacks of value-at-risk limits:

• Depending on the level of generality and/or sophistication required of value-at-risk measures, they can be difficult to implement. This book is a testament to (and hopefully a palliative for) the potentially complicated analytics value-at-risk measures require.
• Utilization of some value-at-risk limits may be computationally expensive to calculate. While value-at-risk can be calculated in real time or near-real time for many portfolios, it may take minutes or hours to calculate for others.
• While most risk metrics entail some model risk or potential for manipulation, the complexity of value-at-risk measures makes them particularly vulnerable.

The last point was illustrated in the aftermath of JPMorgan’s 2012 “London Whale” trading scandal. It came to light that bank employees had manipulated portfolio valuations, undermining stop-loss limits. They had also replaced a value-at-risk measure with a rudimentary spreadsheet, which further understated risk.

###### 1.5.5  Summary Comparison

Exhibit 1.2 summarizes the strengths and weaknesses of stop-loss, exposure, and value-at-risk limits.

Exhibit 1.2: Comparison of stop-loss, exposure, and value-at-risk limits.

# 1.4 Value-at-Risk

Suppose an investment fund indicates that, based on the composition of its portfolio and on current market conditions, there is a 90% probability it will either make a profit or otherwise not lose more than USD 2.3MM over the next trading day. This is an example of a value-at-risk (VaR) measurement. For a given time period and probability, a value-at-risk measure purports to indicate an amount of money such that there is that probability of the portfolio not losing more than that amount of money over that time period. Stated another way, value-at-risk purports to indicate a quantile of the probability distribution for a portfolio’s loss over a specified time period.

To specify a value-at-risk metric, we must identify three things:

1. The period of time over which a possible loss will be calculated—1 day, 2 weeks, 1 month, etc. This is called the value-at-risk horizon. In our example, the value-at-risk horizon is one trading day.
2. A quantile of that possible loss. In the example, the portfolio’s value-at-risk is expressed as a 0.90 quantile of loss.
3. The currency in which the possible loss is denominated. This is called the base currency. In our example, the base currency is USD.

In this book, we measure time in units equal to the length of the value-at-risk horizon, which always starts at time 0 and ends at time 1. We adopt the following convention for naming value-at-risk metrics: the metric’s name is given as the horizon, quantile, and currency, in that order, followed by “VaR” or “value-at-risk”. If the horizon is expressed in days without qualification, these are understood to be trading days. The quantile q is generally indicated as a percentage. Based on this convention, the value-at-risk metric of the investment fund in our example above is one-day 90% USD value-at-risk. If a British bank calculates value-at-risk as the 0.99 quantile of loss over ten trading days, as required under the Basel Accords, this would be called 10-day 99% GBP value-at-risk.
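The naming convention can be captured in a small helper function. The function name and signature below are our own illustration, not part of the text’s formalism:

```python
def var_metric_name(horizon_days: int, quantile: float, currency: str) -> str:
    """Name a VaR metric per the convention: horizon, quantile, currency, 'VaR'."""
    return f"{horizon_days}-day {quantile:.0%} {currency} VaR"

print(var_metric_name(1, 0.90, "USD"))   # 1-day 90% USD VaR
print(var_metric_name(10, 0.99, "GBP"))  # 10-day 99% GBP VaR
```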

###### 1.4.1 Probabilistic Metrics of Market Risk (PMMRs)

Value-at-risk is one example of a category of risk metrics that we might call probabilistic metrics of market risk (PMMRs). While this book focuses on value-at-risk, we shall see that the computations performed to calculate value-at-risk are mostly identical to those performed to calculate any PMMR. In this section, we formalize the notion of value-at-risk metrics by first formalizing PMMRs. This will provide a general perspective for understanding value-at-risk in the context of other familiar market risk metrics.

Suppose a portfolio were to remain untraded for a certain period—say from the current time 0 to some future time 1. The portfolio’s market value at the start of the period is known. Its market value at the end of the period is unknown. It is a random variable. As a random variable, we may assign it a probability distribution conditional upon information available at time 0. We might quantify the portfolio’s market risk with some real-valued parameter of that conditional distribution.

Formally, we define a PMMR as a real-valued function of:

• the distribution of ${}^1P$ conditional on information available at time 0; and
• the portfolio’s current value ${}^0p$.

Standard deviation of ${}^1P$, conditional on information available at time 0, is an example:

[1.1]  ${}^0std({}^1P)$

Volatility, defined as the standard deviation of a portfolio’s simple return ${}^1Z$, is a PMMR:

[1.2]  ${}^0std({}^1Z)$

If we define portfolio loss as

[1.3]  ${}^1L = {}^0p - {}^1P$

then the conditional standard deviation of ${}^1L$ is a PMMR:

[1.4]  ${}^0std({}^1L)$

###### 1.4.2 Value-at-Risk as a PMMR

Let ${}^{1|0}\Phi_P$ and ${}^{1|0}\Phi_L$ denote cumulative distribution functions (CDFs) of ${}^1P$ and ${}^1L$, conditional on information available at time 0. The preceding superscripts 1|0 are a convention to alert you that the distributions are “for random variables at time 1 but conditional on information available at time 0.”

If these conditional CDFs are strictly increasing, their inverses ${}^{1|0}\Phi_P^{-1}$ and ${}^{1|0}\Phi_L^{-1}$ exist and provide quantiles of ${}^1P$ and ${}^1L$. As we have already indicated, value-at-risk metrics represent a $q$-quantile of loss ${}^1L$, and this satisfies the definition of a PMMR:

[1.5]  $\text{value-at-risk} = {}^{1|0}\Phi_L^{-1}(q)$
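As an illustration of [1.5], suppose the conditional loss distribution is normal; then the q-quantile of loss is available from the inverse CDF. The distribution’s parameters below are hypothetical:

```python
from statistics import NormalDist

# Hypothetical: conditional on time-0 information, the 1-day loss
# (in USD millions) is normal with mean 0 and standard deviation 5.
loss = NormalDist(mu=0.0, sigma=5.0)

# Per [1.5], value-at-risk is the q-quantile of loss: the inverse CDF at q.
var_90 = loss.inv_cdf(0.90)
print(round(var_90, 2))  # 1-day 90% VaR of about USD 6.41MM
```

On 9 days out of 10, the loss of such a portfolio would not exceed this amount; equivalently, the portfolio would lose more only 1 day in 10.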

Recall that risk measures are categorized according to the metrics they support. Having defined value-at-risk metrics, we define value-at-risk as the category of risk measures that are intended to support value-at-risk metrics. If a risk measure is intended to support a metric that is a value-at-risk metric, then the measure is a value-at-risk measure. If we apply a value-at-risk measure to a portfolio, the value obtained is called a value-at-risk measurement or, less precisely, the portfolio’s value-at-risk.

To use a value-at-risk measure, we must implement it. We must secure necessary inputs, code software, and install the software on computers and related hardware. The result is a value-at-risk implementation.

###### Exercises
1.7

Which of the following represent PMMRs?

1. conditional variance of a portfolio’s USD market value 1 week from today;
2. conditional standard deviation of a portfolio’s JPY net cash flow over the next month;
3. beta, as defined by Sharpe’s (1964) Capital Asset Pricing Model, conditional on information available at time 0;
4. expected tail loss (ETL), which is defined as the expected value of a portfolio’s loss over a specified horizon, assuming the loss exceeds the portfolio’s value-at-risk for that same horizon.

1.8

Using the naming convention indicated in the text, name the following value-at-risk metric: conditional 0.99 quantile of a portfolio’s loss, measured in GBP, over the next trading day.

1.9

As part of specifying a value-at-risk metric, we must indicate a base currency. This makes sense because value-at-risk indicates an amount of money that might be lost. It is measured in units of currency. But what about other PMMRs? Consider, for example, a 1-day standard deviation of simple return. A portfolio’s return is a unitless quantity; so is its conditional standard deviation of return. Must we specify a base currency for this PMMR?

# 1.3 Market Risk

Business activities entail a variety of risks. For convenience, we distinguish between different categories of risk: market risk, credit risk, liquidity risk, etc. Although such categorization is convenient, it is only informal. Usage and definitions vary. Boundaries between categories are blurred. A loss due to widening credit spreads may reasonably be called a market loss or a credit loss, so market risk and credit risk overlap. Liquidity risk compounds other risks, such as market risk and credit risk. It cannot be divorced from the risks it compounds. A convenient distinction for us to make is that between market risk and business risk.

Market risk is exposure to the uncertain market value of a portfolio. Suppose a trader holds a portfolio of commodity forwards. She knows what its market value is today, but she is uncertain as to its market value a week from today. She faces market risk.

Business risk is exposure to uncertainty in economic value that cannot be marked-to-market. The distinction between market risk and business risk parallels the distinction between market-value accounting and book-value accounting. Suppose a New England electricity wholesaler is long a forward contract for on-peak electricity delivered over the next 12 months. There is an active forward market for such electricity, so the contract can be marked to market daily. Daily profits and losses on the contract reflect market risk. Suppose the firm also owns a power plant with an expected useful life of 30 years. Power plants change hands infrequently, and electricity forward curves don’t exist out to 30 years. The plant cannot be marked to market on a regular basis. In the absence of market values, market risk is not a meaningful notion. Uncertainty in the economic value of the power plant represents business risk.

Most risk metrics apply to a specific category of risks. There are market risk metrics, credit risk metrics, etc. We do not categorize risk measures according to the specific operations those measures entail. Instead, we categorize them according to the risk metrics they are intended to support. Gamma—as used by options traders—is a metric of market risk. There are various operations by which we might calculate gamma. We might:

• use a closed form solution related to the Black-Scholes formula;
• value the portfolio at three different underlier values and interpolate a quadratic polynomial; etc.

Each method defines a risk measure. We categorize them all as measures of gamma, not based upon the specific operations that define them, but simply because they all are intended to support gamma as a risk metric.
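The second method—valuing the portfolio at three underlier values and interpolating a quadratic—amounts to a central second difference. A minimal Python sketch, with a made-up quadratic portfolio for the check:

```python
def gamma_fd(value, s: float, h: float) -> float:
    """Estimate gamma from three portfolio valuations at s - h, s, and s + h.
    Fitting a quadratic through the three points gives the central
    second difference as its second derivative."""
    return (value(s + h) - 2.0 * value(s) + value(s - h)) / h**2

# Sanity check on a hypothetical portfolio whose value is 3 s^2,
# so its true gamma is exactly 6.
print(gamma_fd(lambda s: 3.0 * s**2, 100.0, 0.5))  # 6.0
```

The first method—a closed-form solution related to the Black-Scholes formula—would define a different risk measure, yet both are measures of the same risk metric, gamma.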

###### Exercises
1.6

Describe two different risk measures, both of which are intended to support duration as a risk metric.

# 1.2 Risk Measures

In the context of risk measurement, we distinguish between:

• a risk measure, which is the operation that assigns a value to a risk, and
• a risk metric, which is the attribute of risk that is being measured.

Just as duration and size are attributes of a meeting that might be measured, volatility and credit exposure are attributes of bond risk that might be measured. Volatility and credit exposure are risk metrics. Other examples of risk metrics are delta, beta and duration. Any procedure for calculating these is a risk measure. For any risk metric, there may be multiple risk measures. There are, for example, different ways that the delta of a portfolio might be calculated. Each represents a different risk measure for the single risk metric called delta.

According to Holton (2004), risk has two components:

1. exposure, and
2. uncertainty.

If we swim in shark-infested waters, we are exposed to bodily injury or death from a shark attack. We are uncertain because we don’t know if we will be attacked. Being both exposed and uncertain, we face risk.

Risk metrics typically take one of three forms:

• those that quantify exposure;
• those that quantify uncertainty;
• those that quantify exposure and uncertainty in some combined manner.

Probability of rain is a risk metric that only quantifies uncertainty. It does not address our exposure to rain, which depends upon whether or not we have outdoor plans.

Credit exposure is a risk metric that only quantifies exposure. It indicates how much money we might lose if a counterparty were to default. It says nothing about our uncertainty as to whether or not the counterparty will default.

Risk metrics that quantify uncertainty—either alone or in combination with exposure—are usually probabilistic. Many summarize risk with a parameter of some probability distribution. Standard deviation of tomorrow’s spot price of copper is a risk metric that quantifies uncertainty. It does so with a standard deviation. Average highway deaths per passenger-mile is a risk metric that quantifies uncertainty and exposure. We may interpret it as reflecting the mean of a probability distribution.

###### Exercises
1.2

Give an example of a situation that entails uncertainty but not exposure, and hence no risk.

1.3

Give an example of a situation that entails exposure but not uncertainty, and hence no risk.

1.4

In our example of the deaths per passenger-mile risk metric, for what random variable’s probability distribution may we interpret it as reflecting a mean?

1.5

Give three examples of risk metrics that quantify financial risks:

1. one that quantifies exposure;
2. one that quantifies uncertainty; and
3. one that quantifies uncertainty combined with exposure.

# 1.1 Measures

Measures are widely used in science and in everyday activities. While it is common to speak of measuring things, we actually measure attributes of things. For example, we don’t measure a meeting, but we may measure the duration of a meeting or the size of a meeting. Duration and size are attributes.

A measure is an operationally defined procedure for assigning values. An attribute is that which is being measured—the object of the measurement.

A highway patrolman points a Doppler radar at an approaching automobile. The radar transmits microwaves, which are reflected off the auto and return to the radar. By comparing the wavelength of the transmitted microwaves to that of the reflected microwaves, the radar generates a number, which it displays. This entire process is a measure. An interpretation of that number—speed in miles/hour—is an attribute.

There are measures of length, temperature, mass, time, speed, strength, aptitude, etc. Assigned values are usually numbers, but can be elements of any ordered set. Shoe widths are sometimes assigned values from the ordered set {A, B, C, D, E}.

Let’s consider our first exercise.

###### Exercises
1.1

Describe a measure and corresponding attribute that might be used in weather forecasting.

# 0.4 Notation

Good notation is more than a convention. It shapes how we think, streamlines problem solving, avoids misunderstandings and facilitates communication. Nowhere was this more evident than in the introduction of Hindu-Arabic numerals into Europe during the Middle Ages. Because they made arithmetic computations so easy, the new numerals supplanted both Roman numerals and the abacus, which had been widely used in Europe. Another example comes from calculus. While Newton and Leibniz disputed who discovered the fundamental theorem of calculus, there is no dispute as to whose notation was superior. We still use Leibniz’s notation today.

One of the contributions of this book is consistent notation for expressing ideas related to value-at-risk. Once you have mastered the notation while reading the book, I encourage you to keep using it. The notation will guide your thinking and help you avoid pitfalls.

Value-at-risk draws on many branches of mathematics. Each offers its own notation conventions. Because these conflict, it is impossible to observe them all simultaneously. But the book’s notation consistently presents financial concepts related to calculus, linear algebra, probability, statistics, time series analysis, and numerical methods, drawing on existing conventions where possible.

Random quantities are indicated with capital English letters. If they are univariate random variables, they are italic nonbold: *Q*, *R*, *S*, *X*, etc. If they are multivariate in some sense—random vectors, random matrices, stochastic processes—they are italic bold: ***Q***, ***R***, ***S***, ***X***, etc. Nonrandom quantities are indicated with lowercase italic letters. These are nonbold for scalars: *q*, *r*, *s*, *x*, etc. They are bold for vectors, matrices, or time series: ***q***, ***r***, ***s***, ***x***, etc.

With this notation, if a random variable is denoted X, a specific realization of that random variable may be denoted x. Such notational correspondence between random quantities and realizations of those random quantities is employed throughout the book.

Components of vectors or matrices are distinguished with subscripts. Consider the random vector

[0.1]  $\boldsymbol{X} = (X_1,\, X_2,\, X_3)$

or the matrix

[0.2]  $\boldsymbol{b} = \begin{bmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \end{bmatrix}$

Time also enters the equation. To avoid confusion, I do not indicate time with subscripts. Instead, I use superscripts that precede the rest of the symbol. For example, the Australian dollar Libor curve evolves over time. We may represent its value at time t as

[0.3]  ${}^t\boldsymbol{R} = \left({}^tR_1,\, {}^tR_2,\, \ldots,\, {}^tR_{15}\right)$

The value at time 3 of 1-month AUD Libor is the corresponding component of ${}^3\boldsymbol{R}$. The entire curve at time 1 is denoted ${}^1\boldsymbol{R}$. The univariate stochastic process representing 1-week AUD Libor over time is represented $R_2$. The 15-dimensional stochastic process representing the entire curve over time is denoted $\boldsymbol{R}$. Time 0 is generally considered the current time. At time 0, current and past Libor curves are known. As nonrandom vectors, they are represented with lowercase bold letters: ${}^0\boldsymbol{r}$, ${}^{-1}\boldsymbol{r}$, ${}^{-2}\boldsymbol{r}$, etc. If time is measured in days, yesterday’s value of 12-month AUD Libor is the corresponding component of ${}^{-1}\boldsymbol{r}$.

The advantage of using preceding superscripts to denote time is clarity. By keeping time and component indices physically separate, my notation ensures one will never be confused for the other. Use of preceding superscripts is unconventional but not without precedent. Actuarial notation makes extensive use of preceding superscripts.

Much other notation is standardized, as will become evident as the book unfolds. Some frequently occurring notation is summarized below.

 log natural logarithm n! factorial of an integer n, which is given by the product n! = 1·2·3 ··· (n –1)·n, with 0! = 1 B(m, p) binomial distribution for the number of “successes” in m trials, each with probability p of success U(a, b) uniform distribution on the interval (a, b) Un(Ω) n-dimensional uniform distribution on the region Ω N(μ, σ2) normal distribution with mean μ and variance σ2 Λ(μ, σ2) lognormal distribution with mean μ and variance σ2 χ2(ν, δ2) chi-squared distribution with ν degrees of freedom and non-centrality parameter δ2 Nn(μ, Σ) joint-normal distribution with mean vector μ and covariance matrix Σ 1P random variable for a portfolio’s value at time 1 0p portfolio value at time 0 ( 0p, 1P) a portfolio 1L random variable for portfolio loss: 0p – 1P 1R random vector of key factors (key vector) 0r vector of key factor values at time 0 1S random vector of asset values at time 1 (asset vector) 0s vector of asset values at time 0 Any of these might indicate a risk vector that is not a key vector. 
| Symbol | Meaning |
| --- | --- |
| E( ) | unconditional expected value |
| tE( ) | expected value conditional on information available at time t |
| std( ) | unconditional standard deviation |
| tstd( ) | standard deviation conditional on information available at time t |
| var( ) | unconditional variance |
| tvar( ) | variance conditional on information available at time t |
| tμ | unconditional mean of the time t term of a stochastic process |
| t\|t–kμ | mean of the time t term of a stochastic process conditional on information available at time t – k |
| tΣ | unconditional covariance matrix of the time t term of a stochastic process |
| t\|t–kΣ | covariance matrix of the time t term of a stochastic process conditional on information available at time t – k |
| θ | frequently used to denote a portfolio mapping function |
|  | frequently used to denote a (non-portfolio) mapping function |
| ω | portfolio holdings |
| tφ( ) | unconditional PDF of the time t term of a stochastic process |
| t\|t–kφ( ) | PDF of the time t term of a stochastic process conditional on information available at time t – k |
| tΦ( ) | unconditional CDF of the time t term of a stochastic process |
| t\|t–kΦ( ) | CDF of the time t term of a stochastic process conditional on information available at time t – k |

Tildes can be placed above or between symbols. A tilde placed above a symbol indicates a remapping; for example, a tilde over 1P denotes a remapping of 1P = θ(1R). A tilde placed between symbols indicates that a random variable or random vector has a particular distribution; for example, X ~ N(0,1) indicates that random variable X is standard normal. A time subscript on the tilde indicates that the distribution holds conditional on information available at that time; for example, such notation might indicate that, conditional on information available at time 0, 1X is standard normal. Finally, a special symbol beside an exercise indicates that analytic software more sophisticated than a spreadsheet may be useful in solving it.
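To make the distribution notation concrete, here is a minimal sketch (not from the book) evaluating the probability function and moments of the B(10, 0.9) portfolio value from the Leavens example of Section 1.7. The function name binomial_pmf is my own, not the book's.

```python
import math

def binomial_pmf(k, m, p):
    """Probability of k successes under the binomial distribution B(m, p)."""
    return math.comb(m, k) * p**k * (1 - p)**(m - k)

m, p, face = 10, 0.9, 1000  # Leavens: 10 bonds, 0.9 survival probability, USD 1000 face

# Probability function of portfolio value: face value times number of surviving bonds.
pmf = {face * k: binomial_pmf(k, m, p) for k in range(m + 1)}

mean = face * m * p                      # expected portfolio value, USD 9000
std = face * math.sqrt(m * p * (1 - p))  # standard deviation, roughly USD 948.68
```

The last line reproduces the standard-deviation PMMR quoted for Leavens's portfolio in Section 1.7.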

For more detailed explanations of notation, Section 2.2 addresses general mathematical notation used throughout the book. Section 4.6 elaborates on notation for time series analysis. Section 1.8 explains the notation of value-at-risk measures.

Currencies are indicated with standard codes:

Exhibit 0.1: currency codes

Where applicable, millions are indicated as MM. For example, 3.5 million Japanese yen is indicated JPY 3.5MM.

Exchange rates are indicated as fractions, so an exchange rate of 1.62 USD/GBP indicates that one British pound is worth 1.62 US dollars.
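Read as a fraction, a USD/GBP rate converts pound amounts to dollars by multiplication, with units cancelling as in ordinary arithmetic. A minimal sketch (not from the book), using the illustrative rate above:

```python
usd_per_gbp = 1.62  # exchange rate quoted as a fraction, USD/GBP (illustrative)

# Units cancel as with any fraction: GBP * (USD/GBP) = USD.
gbp_amount = 5.0
usd_amount = gbp_amount * usd_per_gbp  # 8.10 USD
```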

Acronyms used include those shown in Exhibit 0.2.

| Acronym | Meaning |
| --- | --- |
| BBA | British Bankers Association |
| CAD | Capital Adequacy Directive |
| CBOT | Chicago Board of Trade |
| CDF | cumulative distribution function |
| CME | Chicago Mercantile Exchange |
| CSCE | Coffee, Sugar and Cocoa Exchange |
| EICCG | explicit inversive congruential generator |
| ETL | expected tail loss |
| ICG | inversive congruential generator |
| IID | independent and identically distributed |
| IPE | International Petroleum Exchange |
| LCG | linear congruential generator |
| Libor | London Interbank Offered Rate |
| LIFFE | London International Financial Futures and Options Exchange |
| LME | London Metals Exchange |
| MGF | moment generating function |
| ML | maximum likelihood |
| MRG | multiple recursive generator |
| MSE | mean squared error |
| NYBOT | New York Board of Trade |
| NYMEX | New York Mercantile Exchange |
| NYSE | New York Stock Exchange |
| OTC | over the counter |
| P&L | profit and loss |
| PDF | probability density function |
| PF | probability function |
| PMMR | probabilistic metric of market risk |
| RAROC | risk-adjusted return on capital |
| RORAC | return on risk-adjusted capital |
| SEC | Securities and Exchange Commission |
| TSE | Toronto Stock Exchange |
| UNCR | Uniform Net Capital Rule |
| VaR | value-at-risk |
| WCE | Winnipeg Commodities Exchange |

Exhibit 0.2: Acronyms

# 0.3 How To Read This Book

Recognize that this is a book. In printed form, it would be about 500 pages. If you read it from start to finish, and do all the 100 or so exercises, you will come away with expertise in a substantial body of quantitative finance. To facilitate reading, the book is implemented as a responsive website that displays beautifully on a cellphone, tablet or desktop.

But you may just want to use the book as a reference. If that is your plan, I recommend you first familiarize yourself with the book’s notation, terminology and bottom-up approach to explaining value-at-risk. All three are carefully thought-out and worth learning in their own right. Read the next section, Section 0.4, for a quick introduction to notation. Then proceed to Chapter 1. It starts with basic concepts but rapidly becomes sophisticated. It blends in some light fare about risk management and the history of value-at-risk, but its focus is introducing terminology and the bottom-up approach to explaining value-at-risk. Once you have mastered Chapter 1, you can understand any other part of the book. Proceed to the table of contents or search feature to find topics that interest you. If you come across mathematics you are unfamiliar with, look to the chapters of Part II for explanations. They cover essential math.

To read the book in its entirety, here is a roadmap. The book is broken into four parts:

• Part I – Overview (Chapters 0 – 1)
• Part II – Essential Mathematics (Chapters 2 – 5)
• Part III – Value-at-Risk (Chapters 6 – 11)
• Part IV – Implementation and Validation (Chapters 12 – 14)

You are now reading Section 0.3 of the Preface, which is Chapter 0 in Part I. Read the next section, Section 0.4, which introduces notation, and then proceed to Chapter 1. Depending upon your math skills, some of its quantitative examples may be a bit intimidating. Skim them and move on. You can return later.

Part II covers essential mathematics that is anticipated in Part I and used extensively in Parts III and IV. If you need to review basic calculus, linear algebra or probability, there are references at the ends of Chapters 2 and 3.

To avoid seeming “cookbookish”, I have treated the math of Part II as a stand-alone topic and resisted the temptation to immediately illustrate concepts with value-at-risk applications. This should serve you well, since much of the math can be used in value-at-risk measures in different ways. Most of it is invaluable for financial applications unrelated to value-at-risk, so it is worth learning in its own right. However, Part II is focused. Only topics that will be relevant later in the book are covered. It is not essential that you master all of Part II before proceeding to Part III. Some readers may want to skim the math and refer back to it as needed.

Part III is a bottom-up explanation of how to design value-at-risk measures. Don’t attempt it until you have mastered Chapter 1. If you have difficulty with the mathematics of Chapter 1, learn that mathematics in Part II and then reread Chapter 1 before proceeding to Part III. Part III has a lot of technical depth. You may want to read it twice. Go quickly through the material the first time to get an overview of how it all fits together. Read it more carefully the second time to gain deeper understanding.

Part IV can be read at any time after you have read Chapter 1. Refer back to Part II for discussions of relevant mathematics as necessary.

Exercises are an essential part of the text. I encourage you to work through as many as time permits. Doing so will accelerate learning and provide insights that are difficult to achieve through reading alone. Most exercises can be performed with pencil and paper or a spreadsheet. For a few, more sophisticated analytical software will be useful. Such exercises are marked with a special symbol. Solutions are provided for all exercises. Just click on the “Solution” button at the end of each exercise.

Permission is granted for use of this book, its exercises and solutions in any public or private classroom setting. However, none of the content may be sold, repackaged or hosted on an independent platform. The preparation of derivative works based on any of the content is prohibited.

# 0.2 Voldemort and the Second Edition

The first edition of this book was published in hardcover by Academic Press/Elsevier in 2003. Readers of my blog are familiar with the bizarre circumstances that caused that first edition to be abruptly pulled from the market. A consistent pattern of problems had emerged with the book’s marketing and distribution. For example, the publisher distributed an incorrect ISBN. They distributed incorrect shelving instructions for the book, indicating that bookstores should shelve it under marketing or radical economics. The book was not cataloged with the US Library of Congress.

Eventually, I sat down and wrote a list of all the things someone might do if they wanted to sabotage the marketing for a book. I then investigated how this book was being marketed and found that every item on that list had occurred. The book had not been listed in the catalog Books In Print. Professors who requested review copies never received them, even after repeated requests and intervention by me. Misinformation was posted to on-line bookstores. The list went on and on.

Behind the scenes, the author of a competing book on value-at-risk lurked. Dubbed “Junior” by me and “Voldemort” by one reader of my blog, this ethically challenged individual had a prior business relationship with one employee of the publisher. I won’t hazard a guess as to whether Junior paid bribes. I can think of no other explanation.

The situation was so bizarre and extreme that, when I approached the publisher’s senior management, they readily gave me back the copyright to the book. The book was pulled from the market, causing the price of used copies available on-line to skyrocket to over \$1,000. This unsatisfactory state of affairs persisted for a few years because I didn’t have time to deal with it. I eventually decided that the only way to ensure people had access to the book was to post it for free on the Internet.

That was my plan. This second edition came about because a number of people mentioned that, even with the book freely available on the Internet, they would gladly pay for a hardcover copy they could hold in their hands—this was back in 2009 when e-books were still a novelty. I found time to approach a new publisher, Chapman & Hall/CRC, and we decided that we would produce a hardcover second edition to supplement the free on-line first edition. As things turned out, it is the new second edition that you are now reading for free on-line. I will explain how that happened below, but first let me tell you what is new in the second edition.

Much is new. Four new chapters cover historical simulation, implementation, model risk and backtesting. Chapters 3 and 4 have been expanded with new material on probability and statistics. Other minor changes appear throughout the book. There are 22 new exercises, with solutions provided. Scattered amidst the new content, I have provided new details about the history of value-at-risk as well as a number of personal anecdotes from my consulting work.

In the first edition, I defined value-at-risk more broadly than most authors. While the common definition of value-at-risk is that it represents a quantile of loss over some specified horizon, my definition encompassed many other probabilistic metrics of market risk, including expected tail loss (ETL) and variance of return. I embraced the broader definition because the computations to calculate value-at-risk, ETL, variance of return and other metrics are essentially identical; they differ only in the very last step, so techniques described in this book can be used to calculate any of these metrics. There was, however, some protest over that decision. In this second edition, I revert to the common definition. I now use the term “probabilistic metric of market risk” (PMMR) to refer to the broader category of market risk metrics that includes value-at-risk, ETL and variance of return, among others. See Section 1.4.

Notation remains unchanged from the first edition with one exception. Readers familiar with the first edition will notice at the end of Section 4.6.1 that I have modified how I use tildes to denote conditional distributions.

It has become customary to use the term “value-at-risk” to refer to any quantile-of-loss metric of financial risk, so people speak of value-at-risk metrics of credit risk or value-at-risk metrics of operational risk. I am less extravagant. In the first edition, I stated firmly that I defined value-at-risk as applicable to market risk only. At the time—back in 2003—“credit VaR” measures were flourishing. These are measures of credit risk that purport to reflect, say, the 0.99 quantile of a portfolio’s one-year loss to defaults. The fact is that defaults evolve over a business cycle, and one business cycle is different from the next. A 0.99 quantile of a portfolio’s credit risk would be the loss due to defaults that a portfolio would be expected to suffer over a particular year at a particular stage of one business cycle out of a hundred business cycles. Even if we had twenty years of relevant default data for each instrument or issuer, that would only represent three to five business cycles—too few data points to have statistical relevance. Such meager data cannot possibly support claims about a portfolio’s 0.99 quantile of one-year loss due to defaults. I don’t care how elaborate the mathematics of these measures might be. Mathematics is no substitute for empirical data. Any assertion about the 0.99 quantile of a portfolio’s one-year loss due to defaults is meaningless. In other words, most so-called credit VaR measures are meaningless. We don’t have to consider their analytics to reach this conclusion. It follows immediately from the paucity of default data.

This was my position in 2003. The 2008 meltdown in credit markets vindicated that position. Credit VaR measures, and related credit pricing models, spectacularly failed to warn of what was coming. Now there is talk of “building better models.” Such chatter is inevitable, I suppose, but it is nonsense. Truth be told, there was never any risk in the collateralized debt obligations (CDOs) that crippled the credit markets in 2008. They were guaranteed to fail. If you jump from an airplane without a parachute, you aren’t taking risk. There is no uncertainty about the outcome. We don’t need models to tell us any of this.

A strength of this book is its numerous real-world examples from capital, energy and commodity markets. Some of these were historical examples. Others were current, at least for 2003. Much has changed in a decade. Due to a wave of mergers, many exchanges mentioned in those examples no longer exist as independent entities. A few instruments also no longer exist or otherwise lack the liquidity they once had. I have not updated examples to reflect such changes. What would be the point? Further changes would soon render even the updated examples outdated. I merely ask readers to treat outdated examples as historical examples. All retain the relevance and insights that they had a decade ago.

As I sent the manuscript for the second edition to Chapman & Hall/CRC, I was mindful that publishing is an insular business. People know each other from one publisher to the next. If bribes motivated the problems at the old publisher, they might purchase problems at the new publisher. I monitored developments closely and was disappointed when irregularities soon emerged with advance marketing for the second edition. As with the previous publisher, I received ever more elaborate excuses as problems mounted. For me, the clincher was when “author information” appeared on on-line bookstores, erroneously indicating I was a professor at the University of Maryland. This was a minor “mistake”, but the publisher of the first edition had done exactly the same thing, erroneously indicating that I was a professor at the University of Washington. The stark similarity of these two supposedly random “mistakes”—coming on top of everything else—had the feel of someone thumbing his nose at me … dropping an unmistakable hint that Voldemort was back in the driver’s seat. I approached the new publisher’s management and proposed we terminate our publishing agreement, even before the second edition went to press. They did the expedient thing and agreed. That is how I decided to make the entire second edition available for free on the Internet.

### Chapter 0

#### Preface

A watershed in the history of value-at-risk (VaR) was the publication of J.P. Morgan’s RiskMetrics Technical Document. Writing in the third edition of that document, Guldimann (1995) went beyond explaining RiskMetrics and described certain alternative “methods” for calculating value-at-risk. Authors of magazine articles, research papers and software marketing materials similarly described how value-at-risk might be calculated using various “methods”. Early “methods” abounded, and they were given an assortment of names. But authors soon winnowed the list to three practical “methods” in widespread use:

• the parametric method,
• the historical simulation method, and
• the structured Monte Carlo method.

Describing three “methods” for calculating value-at-risk is simple, intuitive and direct. Only one truly new “method” has been introduced since 1995. This might be termed the “quadratic method”; it was ultimately published by Rouvinez (1997).

For some time, I felt the top-down “methods” approach for explaining value-at-risk was flawed. Suppose pioneers of aviation had settled on describing three “methods” for building airplanes:

• the monoplane method,
• the biplane method, and
• the triplane method.

That would have missed so much. What about different engine technologies? What about different construction materials? What about so many other features, such as retractable landing gear, pressurized cabins, swept-back wings, fly-by-wire, etc.?

Top-down explanations—such as the “methods” approaches for explaining value-at-risk or aircraft design—are appealing because they go directly to results, but they lead nowhere after that. They inevitably narrow discussion. By comparison, bottom-up explanations build a foundation for deep understanding and further research. I felt that value-at-risk long ago outgrew the top-down “methods” approach of explanation. I wrote this book to provide a flexible bottom-up explanation of value-at-risk.

And I had a second goal for the book: I wanted it to be the first advanced text on value-at-risk, suitable for quantitative professionals.

The book has its origins in 1997, when I first put pen to paper. It took six years to write the first edition, but it achieved my two goals. It described from the bottom up how to design scalable production value-at-risk measures for real trading organizations. Practical, detailed examples were drawn from markets around the world, such as Euro deposits, Pacific Basin equities, physical coffees, and North American natural gas. Sophisticated techniques were presented in book form for the first time, including variance reduction for Monte Carlo value-at-risk measures, quadratic (so-called “delta-gamma”) methods for nonlinear portfolios, and essential remapping techniques. Real-world challenges relating to market data, portfolio mappings, multicollinearity, and intra-horizon events were addressed in detail. Exercises reinforced concepts and walked readers step-by-step through sophisticated computations. For practitioners, researchers, and students, the first edition was an authoritative guide to implementing real-world value-at-risk measures.