# Chapter 2

## Mathematical Preliminaries

### 2.1 Motivation

In this chapter, we describe a number of techniques of applied mathematics. Some may already be familiar to you. All will play a role in subsequent discussions of VaR. This opening section places them in that context. Recall Section 1.8, which described a framework for modeling VaR. It includes a general schematic describing VaR measures, which we reproduce in Exhibit 2.1.

A **mapping procedure** specifies a **primary portfolio mapping**. Sometimes, the **mapping function** θ of the **key vector** is computationally expensive to work with, making the subsequent application of a **transformation procedure** impractical. A solution is to replace the primary mapping with an approximation, which we call a **portfolio remapping**. There are many ways this might be done. Several are described in Chapter 9. In anticipation of that discussion, the present chapter covers a variety of techniques that are useful for constructing approximations. These include:

- gradient and gradient-Hessian approximations,
- ordinary interpolation,
- ordinary least squares.
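To preview the first item, the sketch below contrasts a gradient (linear) approximation with a gradient-Hessian (quadratic) approximation of a one-dimensional function. The function `theta` and the expansion point `x0` are hypothetical examples chosen for illustration, not taken from the text; derivatives are estimated with finite differences.

```python
import math

# Hypothetical one-dimensional "mapping function" used for illustration.
def theta(x):
    return math.exp(x) * math.sin(x)

# Central finite-difference estimates of the first and second derivatives at x0.
def first_derivative(f, x0, h=1e-5):
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

def second_derivative(f, x0, h=1e-4):
    return (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / (h * h)

x0 = 0.5
g = first_derivative(theta, x0)   # "gradient" in one dimension
H = second_derivative(theta, x0)  # "Hessian" in one dimension

# Gradient approximation: theta(x) ~ theta(x0) + g (x - x0)
def gradient_approx(x):
    return theta(x0) + g * (x - x0)

# Gradient-Hessian approximation adds a curvature term: + 0.5 H (x - x0)^2
def gradient_hessian_approx(x):
    return gradient_approx(x) + 0.5 * H * (x - x0) ** 2
```

Near `x0`, the quadratic approximation tracks `theta` more closely than the linear one, which is precisely why such remappings can stand in for an expensive primary mapping.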

Principal component analysis offers another approximation technique. Because it is probabilistic, we defer it to Chapter 3; the present chapter covers eigenvalues and eigenvectors in anticipation of that discussion.

We considered Monte Carlo transformations in Section 1.6. That discussion was largely intuitive. To apply variance reduction techniques within Monte Carlo transformations, we will need a more formal understanding of the Monte Carlo method, which will come in the context of numerical integration. In anticipation of that discussion, the present chapter discusses the famous change-of-variables formula for definite integrals as well as deterministic techniques of numerical integration in one and multiple dimensions. One of those techniques, the trapezoidal rule, will be employed with quadratic transformations in Chapter 10.
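The trapezoidal rule mentioned above is easy to sketch: the integral of f over [a, b] is approximated by summing the areas of n trapezoids of equal width. The test integrand below (sin over [0, π], whose integral is exactly 2) is an illustrative choice, not an example from the text.

```python
import math

def trapezoidal(f, a, b, n=1000):
    """Approximate the definite integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    # Endpoints receive weight 1/2; interior points receive weight 1.
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

approx = trapezoidal(math.sin, 0.0, math.pi)  # exact value is 2
```

The error shrinks like h², so doubling n roughly quarters the error, a property that matters when the rule is embedded in the quadratic transformations of Chapter 10.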

Other topics covered in this chapter are:

- the Cholesky factorization, which allows us to take the “square root” of a covariance matrix in Chapter 3;
- cubic splines, which offer an alternative to ordinary interpolation for modeling term structures in Chapter 6;
- complex numbers, which are used to define characteristic functions in Chapter 3; and
- Newton’s method, which is a practical tool for solving nonlinear equations.
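As a preview of the last item, Newton's method solves f(x) = 0 by iterating x ← x − f(x)/f′(x). The sketch below is a minimal illustration; the equation solved (x² − 2 = 0, whose root is √2) and the tolerance settings are hypothetical choices, not from the text.

```python
def newton(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 by Newton's method starting from x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:  # stop once updates are negligibly small
            break
    return x

# Illustration: solve x**2 - 2 = 0, converging to sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

Convergence is quadratic near a simple root, which is what makes the method practical for the nonlinear equations encountered later in the book.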