# 7.3 Current Practice

Consider key vector ^{1}**R**, which is one term in a stochastic process **R**. We have historical data {^{–α}**r**, … , ^{–1}**r**, ^{0}**r**}. Assume the data is spaced at intervals of one unit of time, so the spacing of historical data equals the length of the value-at-risk horizon.

In this section, we describe current practice for characterizing the distribution of ^{1}**R** conditional on information available at time 0, where the characterization takes one of two forms:

- a fully specified conditional distribution;
- a conditional mean vector ^{1| 0}**μ** and conditional covariance matrix ^{1| 0}**Σ**.

The first is used with quadratic and numerical transformation procedures. The second is used with linear transformation procedures. See Chapter 10. We discuss both forms of characterization together because constructing either entails largely the same steps.

###### 7.3.1 Choice of Distribution

Various distributions are used to model key factors, including the normal, lognormal, mixed-normal, and Student *t* distributions. By far the most widely used assumption is that key factors are conditionally joint-normal: ^{1}**R** ~ *N _{n}*(^{1| 0}**μ**, ^{1| 0}**Σ**). The assumption is necessary if a quadratic transformation is to be used, but it is also widely assumed with Monte Carlo transformations.

In practice, we do not attempt to empirically justify a joint-normal assumption for key factors. Empirically estimating a conditional covariance matrix ^{1| 0}**Σ** is challenging enough! Instead, we invoke standard financial models. If conditional marginal distributions of individual key factors ^{1}*R _{i}* can be assumed normal, we accept that ^{1}**R** is joint-normal. In practice, this is usually reasonable.

There are various models for financial variables. These often assume that prices of primary (i.e. non-derivative) instruments are conditionally lognormal, in which case their log returns are conditionally normal. If risk factors represent prices, interest rates, exchange rates, or implied volatilities, we might directly model them as lognormal key factors. Alternatively, we might model their log returns as conditionally normal key factors. A spread that is necessarily nonnegative (such as a credit spread) might be treated similarly. A spread that can be positive or negative (such as the price spread between two growths of coffee) might be directly modeled as a conditionally normal key factor.
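As a minimal sketch of the joint-normal assumption (the mean vector, covariance matrix, and prices below are made-up values, not from the text), key factors can be simulated as draws from *N _{n}*(^{1| 0}**μ**, ^{1| 0}**Σ**); if the key factors are log returns, the corresponding prices are conditionally lognormal:

```python
import numpy as np

# Illustrative conditional mean vector and covariance matrix for two key
# factors (here, two log returns); the numbers are assumptions for this sketch.
mu = np.array([0.0, 0.0])
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

rng = np.random.default_rng(0)
# Draw conditionally joint-normal key factors 1R ~ N(mu, sigma).
log_returns = rng.multivariate_normal(mu, sigma, size=100_000)

# If prices are conditionally lognormal, their log returns are conditionally
# normal: price_1 = price_0 * exp(log return).
price_0 = np.array([50.0, 120.0])
price_1 = price_0 * np.exp(log_returns)

print(log_returns.mean(axis=0))           # close to mu
print(np.cov(log_returns, rowvar=False))  # close to sigma
```

The sample mean and covariance of the simulated factors recover the assumed ^{1| 0}**μ** and ^{1| 0}**Σ** up to sampling error.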

###### 7.3.2 Conditional Mean Vectors

The conditional mean vector ^{1| 0}**μ** is specified component by component, in a manner that depends upon what the individual key factors represent. If key factor ^{1}*R _{i}* represents a price, interest rate, foreign exchange rate, implied volatility, or spread, it is common to assume

[7.1]  ^{1| 0}*μ _{i}* = ^{0}*r _{i}*

This zero-drift assumption may be inappropriate for certain key factors, especially over longer horizons. Suppose our value-at-risk horizon is 2 weeks and ^{1}*R _{i}* represents a stock price. If we assume stock prices rise on average 8% per year, then we might set

[7.2]  ^{1| 0}*μ _{i}* = (1.08)^{1/26} ^{0}*r _{i}*
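For concreteness, the drift adjustment of the 2-week example can be computed directly: an 8% annual rise compounds over 26 two-week periods per year (the current price of 100 is an assumed value for illustration):

```python
# Drift-adjusted conditional mean for a stock price over a 2-week horizon.
# An 8% expected annual rise compounds over 26 two-week periods per year.
# The current price r0 = 100 is an assumed value.
r0 = 100.0
annual_growth = 1.08
periods_per_year = 26

mu = r0 * annual_growth ** (1 / periods_per_year)
print(round(mu, 4))  # slightly above 100, reflecting two weeks of drift
```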

We may ascribe nonzero drifts to exchange rates based upon interest rate parity. If prices, spreads, or implied volatilities exhibit seasonal rises and falls, these will induce short-term drifts that can also be reflected in ^{1| 0}*μ _{i}*.

If key factors represent differences or returns, it is often reasonable to assume

[7.3]  ^{1| 0}*μ _{i}* = 0

Over longer value-at-risk horizons, this may also be adjusted to reflect nonzero drifts, as appropriate.

In the same manner that conditional mean vector ^{1| 0}**μ** is constructed, so can conditional mean vectors ^{t | t–1}**μ** be constructed for previous points in time. We use these hereafter.

###### 7.3.3 White Noise

Whereas construction of the conditional mean vector ^{1| 0}**μ** is based largely upon theoretical arguments for how risk factors should behave, construction of a conditional covariance matrix ^{1| 0}**Σ** relies more on empirical evidence. If we are to apply techniques of time series analysis, it is convenient to apply them to a process **X** that we consider to be white noise, as defined in Section 4.8. Depending upon what its components represent, **R** may not satisfy this criterion. Usually, we can define a linear polynomial

[7.4]  ^{t}**X** = ^{t}**b** ^{t}**R** + ^{t}**a**

that does. Here ^{t}**b** is a diagonal *n*×*n* matrix, and ^{t}**a** is a vector. Then, if we estimate the conditional covariance matrix ^{1| 0}**Σ**_X of ^{1}**X**, we can obtain the conditional covariance matrix ^{1| 0}**Σ** of ^{1}**R** as

[7.5]  ^{1| 0}**Σ** = (^{1}**b**)^{–1} ^{1| 0}**Σ**_X (^{1}**b**)^{–1}
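A minimal numeric sketch of this relation, assuming the linear polynomial [7.4] has the form **X** = **b** **R** + **a** with diagonal **b** (all matrices below are made-up examples): because cov(**X**) = **b** cov(**R**) **b**′, inverting the diagonal **b** and conjugating the covariance matrix of ^{1}**X** recovers that of ^{1}**R**:

```python
import numpy as np

# X = b R + a with diagonal b, so cov(X) = b cov(R) b' and
# cov(R) = inv(b) cov(X) inv(b)'.  The matrices here are made up.
b = np.diag([0.01, 0.02])          # diagonal n x n matrix from [7.4]
cov_R = np.array([[4.0, 1.0],
                  [1.0, 9.0]])     # assumed conditional covariance of 1R

cov_X = b @ cov_R @ b.T            # covariance a time-series model would estimate
b_inv = np.linalg.inv(b)
cov_R_recovered = b_inv @ cov_X @ b_inv.T   # the relation in [7.5]

print(np.allclose(cov_R_recovered, cov_R))  # True
```

The additive term **a** plays no role here, since adding a constant vector does not change a covariance matrix.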

We define linear polynomial [7.4] component by component, depending upon what each component *R _{i}* of **R** represents. If a component ^{t}*R _{i}* can already be considered white noise, then we set

[7.6]  ^{t}*X _{i}* = ^{t}*R _{i}*

If it represents a return or spread with nonzero conditional mean, it is generally reasonable to subtract the conditional mean:

[7.7]  ^{t}*X _{i}* = ^{t}*R _{i}* – ^{t | t–1}*μ _{i}*

If a component represents a price, interest rate, exchange rate, or implied volatility, it is more often reasonable to set

[7.8]  ^{t}*X _{i}* = (^{t}*R _{i}* – ^{t | t–1}*μ _{i}*) / ^{t | t–1}*μ _{i}*

If ^{t | t–1}*μ _{i}* = ^{t–1}*r _{i}*, this is a simple return.

_{i}Having defined linear polynomial [7.4], we apply it to data {^{–α} ** r**, … ,

^{–2}

**,**

*r*^{–1}

**,**

*r*^{0}

**} to obtain data {, … ,**

*r*^{–2},

^{–1},

^{0}} to which we may apply methods of time series analysis. Note that may equal α or α – 1. It will equal α – 1 if the first data point

^{–α}

**is used to define a conditional mean**

*r*^{–α +1| –α }

**μ**for

^{–α +1}

**.**
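A short sketch of this step for a single price component (the price series is made up): transforming prices to simple returns consumes the first data point, because it serves only to define a conditional mean for the next observation, so the transformed series is one point shorter:

```python
import numpy as np

# Historical prices (made-up numbers) transformed to white-noise data via
# simple returns: x_t = (r_t - r_{t-1}) / r_{t-1}.
prices = np.array([100.0, 102.0, 101.0, 104.0, 104.0])

x = np.diff(prices) / prices[:-1]   # simple returns

# The first price only defines a conditional mean for the next observation,
# so the transformed series has one fewer point than the price series.
print(len(prices), len(x))          # 5 4
print(np.round(x, 4))
```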

Assuming we can transform **R** to white noise **X**, then **X** will be covariance stationary with (conditional and unconditional) mean **0**. It will be unconditionally homoskedastic, with a constant unconditional covariance matrix.
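As an illustrative check on these properties (simulated data with an assumed standard deviation, not from the text), a white noise series should have a sample mean near zero and negligible lag-1 autocovariance:

```python
import numpy as np

# Simulate a univariate white noise series x_t ~ iid N(0, 0.02^2),
# standing in for one component of the transformed process.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.02, size=50_000)

sample_mean = x.mean()
lag1_autocov = np.mean((x[1:] - sample_mean) * (x[:-1] - sample_mean))

# Both statistics should be near zero for white noise.
print(abs(sample_mean) < 1e-3, abs(lag1_autocov) < 1e-5)  # True True
```

An actual historical series {^{–α′}**x**, … , ^{0}**x**} that fails such diagnostics would suggest the linear polynomial [7.4] has not fully removed the structure in **R**.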